Shot boundary detection and label propagation for spatio-temporal video segmentation
NASA Astrophysics Data System (ADS)
Piramanayagam, Sankaranaryanan; Saber, Eli; Cahill, Nathan D.; Messinger, David
2015-02-01
This paper proposes a two-stage algorithm for streaming video segmentation. In the first stage, shot boundaries are detected within a window of frames by comparing the dissimilarity between 2-D segmentations of each frame. In the second stage, the 2-D segments are propagated across the window of frames in both the spatial and temporal directions. The window is moved across the video to find all shot transitions and obtain spatio-temporal segments simultaneously. As opposed to techniques that operate on the entire video, the proposed approach consumes significantly less memory and enables segmentation of lengthy videos. We tested our segmentation-based shot detection method on the TRECVID 2007 video dataset and compared it with a block-based technique. Cut detection results on the TRECVID 2007 dataset indicate that our algorithm is comparable to the best of the block-based methods. The streaming video segmentation routine also achieves promising results on a challenging video segmentation benchmark database.
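The windowed cut detection described above can be illustrated with a minimal sketch: compute a cheap per-frame signature (here a gray-level histogram, standing in for the paper's 2-D segmentations) and flag a boundary when the dissimilarity between consecutive frames spikes. The histogram signature, L1 distance, and threshold are illustrative assumptions, not the authors' actual dissimilarity measure.

```python
import numpy as np

def frame_signature(frame, bins=8):
    # Coarse gray-level histogram; the paper instead compares 2-D segmentations.
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / hist.sum()

def detect_cuts(frames, threshold=0.5):
    # Flag a cut between t and t+1 when the L1 histogram dissimilarity exceeds threshold.
    cuts = []
    for t in range(len(frames) - 1):
        d = np.abs(frame_signature(frames[t]) - frame_signature(frames[t + 1])).sum()
        if d > threshold:
            cuts.append(t + 1)
    return cuts

# Synthetic clip: 10 dark frames then 10 bright frames -> a single cut at frame 10.
dark = [np.full((16, 16), 20, dtype=np.uint8)] * 10
bright = [np.full((16, 16), 230, dtype=np.uint8)] * 10
print(detect_cuts(dark + bright))  # [10]
```

Because only the current pair of signatures is needed, this style of detector matches the streaming, low-memory setting the paper targets.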
A novel sub-shot segmentation method for user-generated video
NASA Astrophysics Data System (ADS)
Lei, Zhuo; Zhang, Qian; Zheng, Chi; Qiu, Guoping
2018-04-01
With the proliferation of user-generated videos, temporal segmentation is becoming a challenging problem. Traditional video temporal segmentation methods such as shot detection do not work on unedited user-generated videos, since these often contain only a single long shot. We propose a novel temporal segmentation framework for user-generated video. It finds similar frames with a tree-partitioning min-Hash technique, constructs sparse temporally constrained affinity sub-graphs, and finally divides the video into sub-shot-level segments with a dense-neighbor-based clustering method. Experimental results show that our approach outperforms related methods and indicate that it can segment user-generated videos at an average human level.
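The min-Hash step above can be sketched with plain min-wise hashing: describe each frame as a set of quantized "visual word" IDs and compare short signatures whose slot-wise agreement estimates Jaccard similarity. The universal-hash form and the visual-word representation are illustrative assumptions; the paper's tree-partitioning variant is more involved.

```python
import random

def minhash_signature(word_ids, seeds, p=2_147_483_647):
    # One cheap universal hash h(w) = (a*w + b) mod p per (a, b) pair;
    # the signature keeps the minimum hash value per function.
    return [min((a * w + b) % p for w in word_ids) for a, b in seeds]

def estimated_jaccard(sig_a, sig_b):
    # Fraction of agreeing signature slots estimates Jaccard similarity.
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

rng = random.Random(0)
seeds = [(rng.randrange(1, 2**31), rng.randrange(2**31)) for _ in range(256)]
frame_a = set(range(100))       # quantized "visual words" of one frame
frame_b = set(range(50, 150))   # 50 shared words -> true Jaccard = 1/3
sig_a = minhash_signature(frame_a, seeds)
sig_b = minhash_signature(frame_b, seeds)
print(round(estimated_jaccard(sig_a, sig_b), 2))  # estimate of the true Jaccard (1/3)
```

Signatures of 256 integers replace full set intersection, which is what makes all-pairs frame comparison tractable on long single-shot videos.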
Video-assisted segmentation of speech and audio track
NASA Astrophysics Data System (ADS)
Pandit, Medha; Yusoff, Yusseri; Kittler, Josef; Christmas, William J.; Chilton, E. H. S.
1999-08-01
Video database research is commonly concerned with the storage and retrieval of visual information, involving sequence segmentation, shot representation, and video clip retrieval. In multimedia applications, video sequences are usually accompanied by a sound track. The sound track contains potential cues to aid shot segmentation, such as different speakers, background music, singing, and distinctive sounds. These different acoustic categories can be modeled to allow for effective database retrieval. In this paper, we address the problem of automatically segmenting the audio track of multimedia material. This audio-based segmentation can be combined with video scene shot detection in order to partition the multimedia material into semantically significant segments.
Blurry-frame detection and shot segmentation in colonoscopy videos
NASA Astrophysics Data System (ADS)
Oh, JungHwan; Hwang, Sae; Tavanapong, Wallapak; de Groen, Piet C.; Wong, Johnny
2003-12-01
Colonoscopy is an important screening procedure for colorectal cancer. During this procedure, the endoscopist visually inspects the colon. Human inspection, however, is not without error. We hypothesize that colonoscopy videos may contain additional valuable information missed by the endoscopist. Video segmentation is the first necessary step in content-based video analysis and retrieval, providing efficient access to the important images and video segments in a large colonoscopy video database. Based on the unique characteristics of colonoscopy videos, we introduce a new scheme to detect and remove blurry frames and segment the videos into shots based on their contents. Our experimental results show that the average precision and recall of the proposed scheme are over 90% for the detection of non-blurry images. The proposed method of blurry-frame detection and shot segmentation is extensible to videos captured from other endoscopic procedures such as upper gastrointestinal endoscopy, enteroscopy, cystoscopy, and laparoscopy.
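A common way to score blur, and a plausible stand-in for the blurry-frame detector (the abstract does not specify its exact features), is the variance of a Laplacian response: frames with high-frequency detail score high, while defocused or motion-blurred frames score low.

```python
import numpy as np

def laplacian_variance(gray):
    # Variance of a 4-neighbour Laplacian response; low values suggest blur.
    g = gray.astype(float)
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
           np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4 * g)
    return lap[1:-1, 1:-1].var()   # crop the wrap-around border

sharp = (np.indices((32, 32)).sum(0) % 2) * 255.0  # checkerboard: strong edges
blurry = np.full((32, 32), 128.0)                  # flat frame: no detail
print(laplacian_variance(sharp) > laplacian_variance(blurry))  # True
```

Thresholding such a score per frame gives a simple quality filter of the kind the paper applies before shot segmentation.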
Selecting salient frames for spatiotemporal video modeling and segmentation.
Song, Xiaomu; Fan, Guoliang
2007-12-01
We propose a new statistical generative model for spatiotemporal video segmentation. The objective is to partition a video sequence into homogeneous segments that can be used as "building blocks" for semantic video segmentation. The baseline framework is a Gaussian mixture model (GMM)-based video modeling approach that involves a six-dimensional spatiotemporal feature space. Specifically, we introduce the concept of frame saliency to quantify the relevancy of a video frame to the GMM-based spatiotemporal video modeling. This helps us use a small set of salient frames to facilitate the model training by reducing data redundancy and irrelevance. A modified expectation maximization algorithm is developed for simultaneous GMM training and frame saliency estimation, and the frames with the highest saliency values are extracted to refine the GMM estimation for video segmentation. Moreover, it is interesting to find that frame saliency can imply some object behaviors. This makes the proposed method also applicable to other frame-related video analysis tasks, such as key-frame extraction, video skimming, etc. Experiments on real videos demonstrate the effectiveness and efficiency of the proposed method.
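The GMM backbone can be sketched with a small 1-D EM loop; as a crude proxy for the paper's frame saliency, each frame can be scored by the mean log-likelihood of its feature samples under the fitted model. The real method estimates saliency jointly inside a modified EM over a 6-D spatiotemporal feature space, so everything below is a simplified illustration.

```python
import numpy as np

def em_gmm_1d(x, iters=50):
    # Two-component EM on scalar features; deterministic init at the data extremes.
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.full(2, x.var() + 1e-6)
    pi = np.full(2, 0.5)
    for _ in range(iters):
        d = x[:, None] - mu[None, :]
        ll = -0.5 * (d**2 / var + np.log(2 * np.pi * var)) + np.log(pi)
        r = np.exp(ll - ll.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)      # E-step: responsibilities
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk  # M-step: means, variances, weights
        d = x[:, None] - mu[None, :]
        var = (r * d**2).sum(axis=0) / nk + 1e-6
        pi = nk / len(x)
    return mu, var, pi

def mean_loglik(x, mu, var, pi):
    # Proxy "saliency": how well a frame's feature samples fit the model.
    d = x[:, None] - mu[None, :]
    comp = pi * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
    return np.log(comp.sum(axis=1)).mean()

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 0.5, 200), rng.normal(5, 0.5, 200)])
mu, var, pi = em_gmm_1d(x)
print(sorted(mu))  # means near the two feature modes
```

Ranking frames by such a score and keeping the top ones is the flavor of the salient-frame selection, though the paper couples the two estimates inside one modified EM.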
Gamifying Video Object Segmentation.
Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela
2017-10-01
Video object segmentation can be considered one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to deal effectively with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusion; these limitations become more evident when we compare the performance of automated methods with that of humans. However, manually segmenting objects in videos is largely impractical, as it requires a great deal of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method which exploits, on one hand, the capability of humans to correctly identify objects in visual scenes and, on the other hand, collective human brainpower to solve challenging large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, exploiting data provided by over 60 users, demonstrates that our method achieves a better trade-off between annotation time and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.
NASA Astrophysics Data System (ADS)
Hasan, Taufiq; Bořil, Hynek; Sangwan, Abhijeet; Hansen, John H. L.
2013-12-01
The ability to detect and organize `hot spots' representing areas of excitement within video streams is a challenging research problem when techniques rely exclusively on video content. A generic method for sports video highlight selection is presented in this study which leverages both video/image structure and audio/speech properties. Processing begins by partitioning the video into small segments and extracting several multi-modal features from each segment. Excitability is computed based on the likelihood of the segmental features residing in certain regions of their joint probability density function space which are considered both exciting and rare. The proposed measure is used to rank-order the partitioned segments to compress the overall video sequence and produce a contiguous set of highlights. Experiments are performed on baseball videos using excitement assessment in the commentators' speech, audio energy, slow-motion replay, scene-cut density, and motion activity as features. A detailed analysis of the correlation between user excitability and various speech production parameters is conducted, and an effective scheme is designed to estimate the excitement level of the commentator's speech from the sports videos. Subjective evaluation of excitability and ranking of video segments demonstrate a higher correlation with the proposed measure compared to well-established techniques, indicating the effectiveness of the overall approach.
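The "exciting and rare" scoring can be caricatured with a single-Gaussian rarity measure: fit a Gaussian to the segment features and rank segments by Mahalanobis distance, so segments in low-density regions of the feature space surface first. The real system models a joint pdf over multi-modal features and additionally requires the region to be exciting, which this sketch omits.

```python
import numpy as np

def rank_by_rarity(features):
    # Mahalanobis distance of each segment from the bulk of the data;
    # larger distance = rarer segment, ranked earlier.
    mu = features.mean(0)
    cov = np.cov(features.T) + 1e-6 * np.eye(features.shape[1])
    inv = np.linalg.inv(cov)
    d = features - mu
    scores = np.einsum('ij,jk,ik->i', d, inv, d)
    return np.argsort(-scores)

rng = np.random.default_rng(0)
segments = rng.normal(0, 1, (50, 4))  # 50 segments, 4 multi-modal features each
segments[17] = 10.0                   # one atypical (candidate highlight) segment
print(rank_by_rarity(segments)[0])    # 17
```

Truncating such a ranking at a target duration yields a contiguous-highlight compression of the game in the spirit of the paper's pipeline.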
Segment scheduling method for reducing 360° video streaming latency
NASA Astrophysics Data System (ADS)
Gudumasu, Srinivas; Asbun, Eduardo; He, Yong; Ye, Yan
2017-09-01
360° video is an emerging format in the media industry, enabled by the growing availability of virtual reality devices. It provides the viewer a new sense of presence and immersion. Compared to conventional rectilinear video (2D or 3D), 360° video poses a new and difficult set of engineering challenges for video processing and delivery. Enabling a comfortable and immersive user experience requires very high video quality and very low latency, while the large video file size poses a challenge to delivering 360° video in a quality manner at scale. Conventionally, 360° video represented in equirectangular or other projection formats can be encoded as a single standards-compliant bitstream using existing video codecs such as H.264/AVC or H.265/HEVC. Such a method usually needs very high bandwidth to provide an immersive user experience. At the client side, moreover, much of this bandwidth and the computational power used to decode the video are wasted, because the user only watches a small portion (i.e., the viewport) of the entire picture. Viewport-dependent 360° video processing and delivery approaches spend more bandwidth on the viewport than on non-viewport regions and are therefore able to reduce the overall transmission bandwidth. This paper proposes a dual-buffer segment scheduling algorithm for viewport-adaptive streaming methods to reduce latency when switching between high-quality viewports in 360° video streaming. The approach decouples the scheduling of viewport segments and non-viewport segments to ensure that the viewport segment requested matches the latest user head orientation. A base-layer buffer stores all lower-quality segments, and a viewport buffer stores high-quality viewport segments corresponding to the viewer's most recent head orientation. The scheduling scheme determines the viewport requesting time based on the buffer status and the head orientation.
This paper also discusses how to deploy the proposed scheduling design for various viewport-adaptive video streaming methods. The proposed dual-buffer segment scheduling method is implemented in an end-to-end tile-based 360° viewport-adaptive video streaming platform, where the entire 360° video is divided into a number of tiles, and each tile is independently encoded into multiple quality-level representations. The client requests different quality-level representations of each tile based on the viewer's head orientation and the available bandwidth, and then composes all tiles together for rendering. The simulation results verify that the proposed dual-buffer segment scheduling algorithm reduces the viewport switch latency and utilizes the available bandwidth more efficiently. As a result, a more consistent, immersive 360° video viewing experience can be presented to the user.
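The dual-buffer idea can be sketched as a toy scheduler: keep the low-quality base layer buffered well ahead, but request high-quality viewport segments as late as possible so each one is bound to the freshest head orientation. The class name, buffer targets, and request policy below are illustrative assumptions, not the paper's actual design.

```python
from collections import deque

class DualBufferScheduler:
    # Toy sketch: base-layer buffer stays deep, viewport buffer stays shallow.
    def __init__(self, base_target=4, viewport_target=1):
        self.base, self.viewport = deque(), deque()
        self.base_target, self.viewport_target = base_target, viewport_target

    def next_request(self, head_orientation):
        if len(self.base) < self.base_target:      # refill the base layer first
            self.base.append(('base', None))
            return ('base', None)
        if len(self.viewport) < self.viewport_target:
            seg = ('viewport', head_orientation)   # bound to the latest orientation
            self.viewport.append(seg)
            return seg
        return None                                 # both buffers full

    def consume(self):
        # Playback drains one segment from each buffer.
        if self.base:
            self.base.popleft()
        if self.viewport:
            self.viewport.popleft()
```

Because viewport segments are fetched only when their shallow buffer drains, a head turn between requests immediately changes which high-quality tiles are asked for next, which is exactly the switch latency the paper targets.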
Surgical gesture segmentation and recognition.
Tao, Lingling; Zappella, Luca; Hager, Gregory D; Vidal, René
2013-01-01
Automatic surgical gesture segmentation and recognition can provide useful feedback for surgical training in robotic surgery. Most prior work in this field relies on the robot's kinematic data. Although recent work [1,2] shows that the robot's video data can be equally effective for surgical gesture recognition, the segmentation of the video into gestures is assumed to be known. In this paper, we propose a framework for joint segmentation and recognition of surgical gestures from kinematic and video data. Unlike prior work that relies on either frame-level kinematic cues, or segment-level kinematic or video cues, our approach exploits both cues by using a combined Markov/semi-Markov conditional random field (MsM-CRF) model. Our experiments show that the proposed model improves over a Markov or semi-Markov CRF when using video data alone, gives results that are comparable to state-of-the-art methods on kinematic data alone, and improves over state-of-the-art methods when combining kinematic and video data.
NASA Astrophysics Data System (ADS)
Zhang, Chao; Zhang, Qian; Zheng, Chi; Qiu, Guoping
2018-04-01
Video foreground segmentation is one of the key problems in video processing. In this paper, we propose a novel, fully unsupervised approach for foreground object co-localization and segmentation in unconstrained videos. We first compute both the actual edges and the motion boundaries of the video frames, and then align them by their HOG feature maps. Then, by filling the occlusions generated by the aligned edges, we obtain more precise masks of the foreground object. These motion-based masks serve as a motion-based likelihood. Moreover, a color-based likelihood is adopted for the segmentation process. Experimental results show that our approach outperforms most state-of-the-art algorithms.
Automatic video segmentation and indexing
NASA Astrophysics Data System (ADS)
Chahir, Youssef; Chen, Liming
1999-08-01
Indexing is an important aspect of video database management. Video indexing involves the analysis of video sequences, which is a computationally intensive process. However, effective management of digital video requires robust indexing techniques. The purpose of our proposed video segmentation is twofold. First, we develop an algorithm that identifies camera shot boundaries, based on a combination of color histograms and a block-based technique. Next, each temporal segment is represented by a color reference frame, which captures shot similarities and is used in the constitution of scenes. Experimental results on a variety of videos selected from the corpus of the French Audiovisual National Institute demonstrate the effectiveness of the shot detection, the content characterization of shots, and the scene constitution.
Activity recognition using Video Event Segmentation with Text (VEST)
NASA Astrophysics Data System (ADS)
Holloway, Hillary; Jones, Eric K.; Kaluzniacki, Andrew; Blasch, Erik; Tierno, Jorge
2014-06-01
Multi-Intelligence (multi-INT) data includes video, text, and signals that require analysis by operators. Analysis methods include information fusion approaches such as filtering, correlation, and association. In this paper, we discuss the Video Event Segmentation with Text (VEST) method, which provides event boundaries of an activity to compile related message and video clips for future interest. VEST infers meaningful activities by clustering multiple streams of time-sequenced multi-INT intelligence data and derived fusion products. We discuss exemplar results that segment raw full-motion video (FMV) data by using extracted commentary message timestamps, FMV metadata, and user-defined queries.
Detection and tracking of gas plumes in LWIR hyperspectral video sequence data
NASA Astrophysics Data System (ADS)
Gerhart, Torin; Sunu, Justin; Lieu, Lauren; Merkurjev, Ekaterina; Chang, Jen-Mei; Gilles, Jérôme; Bertozzi, Andrea L.
2013-05-01
Automated detection of chemical plumes presents a segmentation challenge. The segmentation problem for gas plumes is difficult due to the diffusive nature of the cloud. The advantage of hyperspectral imagery over conventional RGB imagery in the gas plume detection problem is the presence of non-visual data, allowing for a richer representation of information. In this paper we present an effective method of visualizing hyperspectral video sequences containing chemical plumes and investigate the effectiveness of segmentation techniques on these post-processed videos. Our approach uses a combination of dimension reduction and histogram equalization to prepare the hyperspectral videos for segmentation. First, Principal Component Analysis (PCA) is used to reduce the dimension of the entire video sequence by projecting each pixel onto the first few principal components, resulting in a type of spectral filter. Next, a midway method for histogram equalization is used; it redistributes the intensity values in order to reduce flicker between frames. This properly prepares these high-dimensional video sequences for more traditional segmentation techniques. We compare the ability of various clustering techniques to properly segment the chemical plume, including k-means, spectral clustering, and the Ginzburg-Landau functional.
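The pre-processing pipeline reduces each pixel's spectrum to a few principal components before clustering. A minimal sketch, with PCA via SVD and a tiny k-means, omitting the midway histogram equalization; the synthetic "plume" spectra are a toy assumption:

```python
import numpy as np

def pca_project(X, k=3):
    # Project pixel spectra (rows of X) onto the top-k principal components.
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def kmeans(X, k=2, iters=20):
    # Tiny k-means with deterministic init (first and last points) for the demo.
    centers = X[[0, -1]].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

# Synthetic "hyperspectral" pixels: plume pixels differ from background in band 5.
background = np.tile(np.ones(10), (10, 1))
plume = background.copy()
plume[:, 5] = 5.0
X = np.vstack([background, plume])
labels = kmeans(pca_project(X, k=2))
print(labels[0] != labels[10])  # True: plume separated from background
```

On real LWIR data the projection also acts as the spectral filter mentioned above, concentrating the plume signal into a few components before any clustering runs.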
NASA Technical Reports Server (NTRS)
2003-01-01
This video presents an overview of the first Tracking and Data Relay Satellite (TDRS-1) in the form of text, computer animations, footage, and an interview with its program manager. Launched by the Space Shuttle Challenger in 1983, TDRS-1 was the first of a network of satellites used for relaying data to and from scientific spacecraft. Most of this short video is silent, and consists of footage and animation of the deployment of TDRS-1, written and animated explanations of what TDRS satellites do, and samples of the astronomical and Earth science data they transmit. The program manager explains in the final segment of the video the improvement TDRS satellites brought to communication with manned space missions, including alleviation of blackout during reentry, and also the role TDRS-1 played in providing telemedicine for a breast cancer patient in Antarctica.
A new user-assisted segmentation and tracking technique for an object-based video editing system
NASA Astrophysics Data System (ADS)
Yu, Hong Y.; Hong, Sung-Hoon; Lee, Mike M.; Choi, Jae-Gark
2004-03-01
This paper presents a semi-automatic segmentation method which can be used to generate video object planes (VOPs) for object-based coding schemes and multimedia authoring environments. Semi-automatic segmentation can be considered a user-assisted segmentation technique. A user initially marks objects of interest around the object boundaries, and the selected objects are then continuously separated from the unselected areas through time evolution in the image sequences. The proposed segmentation method consists of two processing steps: partially manual intra-frame segmentation and fully automatic inter-frame segmentation. The intra-frame segmentation incorporates user assistance to define the meaningful, complete visual object of interest to be segmented and determines a precise object boundary. The inter-frame segmentation involves boundary and region tracking to obtain temporal coherence of the moving object based on the object boundary information of the previous frame. The proposed method shows stable, efficient results suitable for many digital video applications, such as multimedia content authoring, content-based coding, and indexing. Based on these results, we have developed an object-based video editing system with several convenient editing functions.
Model-based video segmentation for vision-augmented interactive games
NASA Astrophysics Data System (ADS)
Liu, Lurng-Kuo
2000-04-01
This paper presents an architecture and algorithms for model-based video object segmentation and its application to vision-augmented interactive games. We are especially interested in real-time, low-cost, vision-based applications that can be implemented in software on a PC. We use different models for the background and a player object. The object segmentation algorithm operates at two levels: pixel level and object level. At the pixel level, segmentation is formulated as a maximum a posteriori probability (MAP) problem. The statistical likelihood of each pixel is calculated and used in the MAP problem. Object-level segmentation is used to improve segmentation quality by utilizing information about the spatial and temporal extent of the object. The concept of an active region, defined from a motion histogram and trajectory prediction, is introduced to indicate the likely location of a video object region for both background and foreground modeling; it also reduces the overall computational complexity. In contrast with other applications, the proposed video object segmentation system is able to create background and foreground models on the fly, even without introductory background frames. Furthermore, we apply different rates of self-tuning to the scene model so that the system can adapt to the environment when there is a scene change. We applied the proposed video object segmentation algorithms to several prototype virtual interactive games, in which a player can immerse himself or herself inside a game and virtually interact with other animated characters in real time without being constrained by helmets, gloves, special sensing devices, or the background environment. Potential applications of the proposed algorithms include human-computer gesture interfaces and object-based video coding such as MPEG-4.
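At the pixel level, the MAP decision reduces to comparing posteriors, i.e. likelihood times prior, under the background and player models. A minimal single-feature sketch with Gaussian likelihoods (the actual system uses richer statistical models plus object-level reasoning, and these means, variances, and the foreground prior are made-up demo values):

```python
import numpy as np

def map_label(pixel, bg_mean, bg_var, fg_mean, fg_var, p_fg=0.3):
    # Pick the label maximizing posterior ~ likelihood x prior (in log space).
    def log_gauss(x, m, v):
        return -0.5 * ((x - m) ** 2 / v + np.log(2 * np.pi * v))
    post_bg = log_gauss(pixel, bg_mean, bg_var) + np.log(1 - p_fg)
    post_fg = log_gauss(pixel, fg_mean, fg_var) + np.log(p_fg)
    return 'fg' if post_fg > post_bg else 'bg'

# Background model centered at intensity 50, player model at 200.
print(map_label(195, 50, 100, 200, 100))  # fg
print(map_label(55, 50, 100, 200, 100))   # bg
```

Running this per pixel gives the raw mask that the object-level pass (active regions, trajectory prediction) then cleans up.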
Video Segmentation Descriptors for Event Recognition
2014-12-08
[Fragmented abstract; recoverable content:] The descriptors are computed on the 3D volume output by a hierarchical video segmentation, with each supertube temporally divided into n-frame segments. The strength of these descriptors is their adaptability to scene variations, since they are grounded in a video segmentation, which makes them naturally robust. Cited works include Velastin et al., "3D Extended Histogram of Oriented Gradients (3DHOG) for Classification of Road Users in Urban Scenes," BMVC 2009, and work by M.-Y. Chen and A. Hauptmann.
Global-constrained hidden Markov model applied on wireless capsule endoscopy video segmentation
NASA Astrophysics Data System (ADS)
Wan, Yiwen; Duraisamy, Prakash; Alam, Mohammad S.; Buckles, Bill
2012-06-01
Accurate analysis of wireless capsule endoscopy (WCE) videos is vital but tedious, and automatic image analysis can expedite the task. Segmenting a WCE video into the four parts of the gastrointestinal tract is one way to assist a physician. The segmentation approach described in this paper integrates pattern recognition with statistical analysis. Initially, a support vector machine is applied to classify video frames into four classes using a combination of multiple color and texture features as the feature vector. A Poisson cumulative distribution, whose parameter depends on the length of the segments, models prior knowledge. This prior knowledge, together with inter-frame differences, serves as the global constraint driven by the underlying observation of each WCE video, which is fitted by a Gaussian distribution to constrain the transition probabilities of the hidden Markov model. Experimental results demonstrate the effectiveness of the approach.
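The global constraint exploits the fixed anatomical order of the GI tract: the hidden state can only stay in the current section or advance to the next. A sketch with Viterbi decoding over per-frame class scores, using a simple geometric stay/advance transition as a stand-in for the paper's Poisson length prior and SVM emissions:

```python
import numpy as np

def viterbi_monotonic(log_emis, stay=0.95):
    # log_emis[t, s]: log-probability of frame t under GI-tract section s.
    # Transitions: stay in the current section or advance to the next one.
    T, S = log_emis.shape
    dp = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    dp[0, 0] = log_emis[0, 0]            # the video starts in the first section
    for t in range(1, T):
        for s in range(S):
            cands = [(dp[t - 1, s] + np.log(stay), s)]
            if s > 0:
                cands.append((dp[t - 1, s - 1] + np.log(1 - stay), s - 1))
            best, arg = max(cands)
            dp[t, s] = best + log_emis[t, s]
            back[t, s] = arg
    path = [int(np.argmax(dp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Three frames per section, emissions favouring the true section.
e = np.log(np.array([[.8, .1, .1]] * 3 + [[.1, .8, .1]] * 3 + [[.1, .1, .8]] * 3))
print(viterbi_monotonic(e))  # [0, 0, 0, 1, 1, 1, 2, 2, 2]
```

The monotonic transition structure is what lets noisy per-frame classifier outputs be smoothed into a handful of clean anatomical segments.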
Fast Appearance Modeling for Automatic Primary Video Object Segmentation.
Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong
2016-02-01
Automatic segmentation of the primary object in a video clip is a challenging problem, as there is no prior knowledge of the primary object. Most existing techniques therefore adopt an iterative approach to foreground and background appearance modeling, i.e., fix the appearance model while optimizing the segmentation and fix the segmentation while optimizing the appearance model. However, these approaches rely on good initialization, can easily be trapped in local optima, and are usually time-consuming when analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure and can optimize both the segmentation and the appearance model parameters simultaneously in one graph cut. Extensive experimental evaluations validate the superiority of the proposed approach over state-of-the-art methods in both efficiency and effectiveness.
Special-effect edit detection using VideoTrails: a comparison with existing techniques
NASA Astrophysics Data System (ADS)
Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.
1998-12-01
Video segmentation plays an integral role in many multimedia applications, such as digital libraries, content management systems, and various other video browsing, indexing, and retrieval systems. Many algorithms for video segmentation have appeared within the past few years. Most of these algorithms perform well on cuts but yield poor performance on gradual transitions or special-effect edits. A complete video segmentation system must also achieve good performance on special-effect edit detection. In this paper, we compare the performance of our VideoTrails-based algorithms with other existing special-effect edit-detection algorithms in the literature. We present results from experiments testing the ability to detect edits in TV programs, ranging from commercials to news magazine programs, containing diverse special-effect edits.
User-assisted video segmentation system for visual communication
NASA Astrophysics Data System (ADS)
Wu, Zhengping; Chen, Chun
2002-01-01
Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we divide the video segmentation problem into three separate phases: user-assisted feature point selection, automatic feature point tracking, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems and allows a higher level of flexibility in the method. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, together with a point insertion process that provides the feature points for the next frame's tracking.
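The eigenvalue-based adjustment of user-selected points is in the spirit of Shi-Tomasi feature selection: points whose local gradient structure tensor has a large minimum eigenvalue are reliable to track. A numpy sketch of that score (the window size and test image are demo assumptions):

```python
import numpy as np

def min_eig_score(gray, win=2):
    # Smallest eigenvalue of the 2x2 gradient structure tensor summed over a
    # (2*win+1)^2 window -- the classic "good features to track" criterion.
    gy, gx = np.gradient(gray.astype(float))
    Ixx, Iyy, Ixy = gx * gx, gy * gy, gx * gy
    def box(a):
        out = np.zeros_like(a)
        for dy in range(-win, win + 1):
            for dx in range(-win, win + 1):
                out += np.roll(np.roll(a, dy, 0), dx, 1)
        return out
    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    return (Sxx + Syy - np.sqrt((Sxx - Syy) ** 2 + 4 * Sxy ** 2)) / 2

img = np.zeros((32, 32))
img[8:24, 8:24] = 255.0       # a bright square on a dark background
score = min_eig_score(img)
# A true corner scores higher than a point on a straight edge.
print(score[8, 8] > score[8, 16])  # True
```

Snapping a user's rough clicks to nearby high-score pixels gives tracking-friendly feature points, which is the role the adjustment plays in the system above.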
Effects of Segmenting, Signalling, and Weeding on Learning from Educational Video
ERIC Educational Resources Information Center
Ibrahim, Mohamed; Antonenko, Pavlo D.; Greenwood, Carmen M.; Wheeler, Denna
2012-01-01
Informed by the cognitive theory of multimedia learning, this study examined the effects of three multimedia design principles on undergraduate students' learning outcomes and perceived learning difficulty in the context of learning entomology from an educational video. These principles included segmenting the video into smaller units, signalling…
Baca, A
1996-04-01
A method has been developed for the precise determination of anthropometric dimensions from the video images of four different body configurations. High precision is achieved by incorporating techniques for finding the location of object boundaries with sub-pixel accuracy, the implementation of calibration algorithms, and by taking into account the varying distances of the body segments from the recording camera. The system allows automatic segment boundary identification from the video image, if the boundaries are marked on the subject by black ribbons. In connection with the mathematical finite-mass-element segment model of Hatze, body segment parameters (volumes, masses, the three principal moments of inertia, the three local coordinates of the segmental mass centers etc.) can be computed by using the anthropometric data determined videometrically as input data. Compared to other, recently published video-based systems for the estimation of the inertial properties of body segments, the present algorithms reduce errors originating from optical distortions, inaccurate edge-detection procedures, and user-specified upper and lower segment boundaries or threshold levels for the edge-detection. The video-based estimation of human body segment parameters is especially useful in situations where ease of application and rapid availability of comparatively precise parameter values are of importance.
Crowdsourcing for identification of polyp-free segments in virtual colonoscopy videos
NASA Astrophysics Data System (ADS)
Park, Ji Hwan; Mirhosseini, Seyedkoosha; Nadeem, Saad; Marino, Joseph; Kaufman, Arie; Baker, Kevin; Barish, Matthew
2017-03-01
Virtual colonoscopy (VC) allows a physician to virtually navigate within a reconstructed 3D colon model searching for colorectal polyps. Though VC is widely recognized as a highly sensitive and specific test for identifying polyps, one limitation is the reading time, which can take over 30 minutes per patient. Large amounts of the colon are often devoid of polyps, and a way of identifying these polyp-free segments could be of valuable use in reducing the required reading time for the interrogating radiologist. To this end, we have tested the ability of the collective crowd intelligence of non-expert workers to identify polyp candidates and polyp-free regions. We presented twenty short videos flying through a segment of a virtual colon to each worker, and the crowd was asked to determine whether or not a possible polyp was observed within that video segment. We evaluated our framework on Amazon Mechanical Turk and found that the crowd was able to achieve a sensitivity of 80.0% and specificity of 86.5% in identifying video segments which contained a clinically proven polyp. Since each polyp appeared in multiple consecutive segments, all polyps were in fact identified. Using the crowd results as a first pass, 80% of the video segments could in theory be skipped by the radiologist, equating to a significant time savings and enabling more VC examinations to be performed.
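The reported sensitivity and specificity come from standard confusion-matrix arithmetic over crowd-labeled segments. A sketch with simple majority voting over workers (the vote-aggregation rule is an assumption; the abstract does not state how individual worker answers were combined):

```python
def majority_vote(votes):
    # votes: one boolean per worker for a single video segment.
    return sum(votes) > len(votes) / 2

def sensitivity_specificity(predicted, truth):
    tp = sum(p and t for p, t in zip(predicted, truth))
    tn = sum(not p and not t for p, t in zip(predicted, truth))
    fp = sum(p and not t for p, t in zip(predicted, truth))
    fn = sum(not p and t for p, t in zip(predicted, truth))
    return tp / (tp + fn), tn / (tn + fp)

# 5 segments, 3 workers each; ground truth: the first two contain a polyp.
worker_votes = [[1, 1, 0], [1, 0, 0], [0, 0, 0], [0, 1, 0], [1, 1, 1]]
truth = [True, True, False, False, False]
predicted = [majority_vote(v) for v in worker_votes]
sens, spec = sensitivity_specificity(predicted, truth)
print(sens, spec)  # sensitivity 0.5, specificity 2/3
```

Because each real polyp spans several consecutive segments, segment-level misses can still leave every polyp detected overall, which is how the paper reports 80.0%/86.5% per segment yet 100% per polyp.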
NASA Astrophysics Data System (ADS)
Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun
2012-04-01
In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction are fundamental steps in organizing, indexing, and retrieving video content. In this paper, a unified framework is proposed to detect shot boundaries and extract the keyframe of each shot. The music video is first segmented into shots using an illumination-invariant chromaticity histogram in an independent component (IC) analysis feature space. We then present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show that the framework is effective and performs well.
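The paper's image-complexity metric is computed from the independent components; as an illustrative stand-in, one can rank a shot's frames by gray-level histogram entropy and keep the richest frame as the keyframe:

```python
import numpy as np

def image_complexity(gray, bins=16):
    # Shannon entropy of the gray-level histogram (bits); a stand-in for the
    # IC-based complexity measure used in the paper.
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def pick_keyframe(shot_frames):
    return int(np.argmax([image_complexity(f) for f in shot_frames]))

flat = np.full((16, 16), 100, dtype=np.uint8)              # entropy 0 bits
textured = np.arange(256, dtype=np.uint8).reshape(16, 16)  # entropy 4 bits
print(pick_keyframe([flat, textured, flat]))  # 1
```

Any per-frame complexity score slots into the same argmax-per-shot pattern, so swapping in the IC-based metric changes only `image_complexity`.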
Video content parsing based on combined audio and visual information
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1999-08-01
While previous research on audiovisual data segmentation and indexing has primarily focused on the pictorial part, significant clues contained in the accompanying audio flow are often ignored. A fully functional system for video content parsing can be achieved more successfully through a proper combination of audio and visual information. By investigating the data structure of different video types, we present tools for both audio and visual content analysis and a scheme for video segmentation and annotation in this research. In the proposed system, video data are segmented into audio scenes and visual shots by detecting abrupt changes in audio and visual features, respectively. The audio scene is then categorized and indexed as one of the basic audio types, while a visual shot is represented by keyframes and associated image features. An index table is then generated automatically for each video clip based on the integration of the outputs of the audio and visual analysis. It is shown that the proposed system provides satisfactory video indexing results.
Real-time image sequence segmentation using curve evolution
NASA Astrophysics Data System (ADS)
Zhang, Jun; Liu, Weisong
2001-04-01
In this paper, we describe a novel approach to image sequence segmentation and its real-time implementation. This approach uses the 3D structure tensor to produce a more robust frame difference signal and uses curve evolution to extract whole objects. Our algorithm is implemented on a standard PC running the Windows operating system with video capture from a USB camera that is a standard Windows video capture device. Using the Windows standard video I/O functionalities, our segmentation software is highly portable and easy to maintain and upgrade. In its current implementation on a Pentium 400, the system can perform segmentation at 5 frames/sec with a frame resolution of 160 by 120.
Video-based noncooperative iris image segmentation.
Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig
2011-02-01
In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.
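The boundary-modeling step can be illustrated with a least-squares fit. The sketch below uses a simple Kasa circle fit as a simplified stand-in for the paper's direct least-squares fitting of ellipses (NumPy assumed; modeling a deformed pupil or limbic boundary would need the full ellipse model).

```python
import numpy as np

def fit_circle(x, y):
    """Kasa least-squares circle fit: rewrite (x-a)^2 + (y-b)^2 = r^2 as
    x^2 + y^2 = 2ax + 2by + (r^2 - a^2 - b^2) and solve the linear system."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a ** 2 + b ** 2)
    return a, b, r
```

Given edge points sampled from a pupil boundary, the fit returns the center and radius in closed form, with no iterative optimization.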
Automatic generation of pictorial transcripts of video programs
NASA Astrophysics Data System (ADS)
Shahraray, Behzad; Gibbon, David C.
1995-03-01
An automatic authoring system for the generation of pictorial transcripts of video programs which are accompanied by closed caption information is presented. A number of key frames, each of which represents the visual information in a segment of the video (i.e., a scene), are selected automatically by performing a content-based sampling of the video program. The textual information is recovered from the closed caption signal and is initially segmented based on its implied temporal relationship with the video segments. The text segmentation boundaries are then adjusted, based on lexical analysis and/or caption control information, to account for synchronization errors due to possible delays in the detection of scene boundaries or the transmission of the caption information. The closed caption text is further refined through linguistic processing for conversion to lowercase with correct capitalization. The key frames and the related text generate a compact multimedia presentation of the contents of the video program which lends itself to efficient storage and transmission. This compact representation can be viewed on a computer screen, or used to generate the input to a commercial text processing package to generate a printed version of the program.
Automated detection of videotaped neonatal seizures of epileptic origin.
Karayiannis, Nicolaos B; Xiong, Yaohua; Tao, Guozhi; Frost, James D; Wise, Merrill S; Hrachovy, Richard A; Mizrahi, Eli M
2006-06-01
This study aimed at the development of a seizure-detection system by training neural networks with quantitative motion information extracted from short video segments of neonatal seizures of the myoclonic and focal clonic types and random infant movements. The motion of the infants' body parts was quantified by temporal motion-strength signals extracted from video segments by motion-segmentation methods based on optical flow computation. The area of each frame occupied by the infants' moving body parts was segmented by clustering the motion parameters obtained by fitting an affine model to the pixel velocities. The motion of the infants' body parts also was quantified by temporal motion-trajectory signals extracted from video recordings by robust motion trackers based on block-motion models. These motion trackers were developed to adjust autonomously to illumination and contrast changes that may occur during the video-frame sequence. Video segments were represented by quantitative features obtained by analyzing motion-strength and motion-trajectory signals in both the time and frequency domains. Seizure recognition was performed by conventional feed-forward neural networks, quantum neural networks, and cosine radial basis function neural networks, which were trained to detect neonatal seizures of the myoclonic and focal clonic types and to distinguish them from random infant movements. The computational tools and procedures developed for automated seizure detection were evaluated on a set of 240 video segments of 54 patients exhibiting myoclonic seizures (80 segments), focal clonic seizures (80 segments), and random infant movements (80 segments). Regardless of the decision scheme used for interpreting the responses of the trained neural networks, all the neural network models exhibited sensitivity and specificity>90%. 
For one of the decision schemes proposed for interpreting the responses of the trained neural networks, the majority of the trained neural-network models exhibited sensitivity>90% and specificity>95%. In particular, cosine radial basis function neural networks achieved the performance targets of this phase of the project (i.e., sensitivity>95% and specificity>95%). The best among the motion segmentation and tracking methods developed in this study produced quantitative features that constitute a reliable basis for detecting neonatal seizures. The performance targets of this phase of the project were achieved by combining the quantitative features obtained by analyzing motion-strength signals with those produced by analyzing motion-trajectory signals. The computational procedures and tools developed in this study to perform off-line analysis of short video segments will be used in the next phase of this project, which involves the integration of these procedures and tools into a system that can process and analyze long video recordings of infants monitored for seizures in real time.
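The time- and frequency-domain feature extraction from a motion-strength signal can be sketched as below. This is a minimal NumPy illustration; the study's actual feature set, sampling rate, and network inputs are not specified here, so the choices are assumptions.

```python
import numpy as np

def motion_features(signal, fs=30.0):
    """Extract simple time- and frequency-domain features from a 1-D
    motion-strength signal sampled at fs Hz (frames per second)."""
    signal = np.asarray(signal, dtype=np.float64)
    # time domain: mean, standard deviation, peak amplitude
    feats = [signal.mean(), signal.std(), np.abs(signal).max()]
    # frequency domain: dominant frequency and its share of total power
    spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    k = int(np.argmax(spectrum))
    total = spectrum.sum()
    feats += [freqs[k], spectrum[k] / total if total > 0 else 0.0]
    return np.array(feats)
```

A rhythmic clonic movement would show up as a strong dominant frequency, whereas random movements spread their power across the spectrum, which is the kind of separation a classifier can exploit.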
Video Salient Object Detection via Fully Convolutional Networks.
Wang, Wenguan; Shen, Jianbing; Shao, Ling
This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: 1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data and 2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image data sets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimates. We advance the state-of-the-art on the densely annotated video segmentation data set (MAE of .06) and the Freiburg-Berkeley Motion Segmentation data set (MAE of .07), and do so with much improved speed (2 fps with all steps).
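The MAE figures quoted for saliency benchmarks are straightforward to compute; a minimal sketch, assuming NumPy and maps scaled to [0, 1]:

```python
import numpy as np

def saliency_mae(pred, gt):
    """Mean absolute error between a predicted saliency map and the
    ground-truth mask, both in [0, 1]; lower is better."""
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    return np.abs(pred - gt).mean()
```

Benchmark MAE is typically this per-frame value averaged over all frames of all test videos.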
Deep residual networks for automatic segmentation of laparoscopic videos of the liver
NASA Astrophysics Data System (ADS)
Gibson, Eli; Robu, Maria R.; Thompson, Stephen; Edwards, P. Eddie; Schneider, Crispin; Gurusamy, Kurinchi; Davidson, Brian; Hawkes, David J.; Barratt, Dean C.; Clarkson, Matthew J.
2017-03-01
Motivation: For primary and metastatic liver cancer patients undergoing liver resection, a laparoscopic approach can reduce recovery times and morbidity while offering equivalent curative results; however, only about 10% of tumours reside in anatomical locations that are currently accessible for laparoscopic resection. Augmenting laparoscopic video with registered vascular anatomical models from pre-procedure imaging could support using laparoscopy in a wider population. Segmentation of liver tissue on laparoscopic video supports the robust registration of anatomical liver models by filtering out false anatomical correspondences between pre-procedure and intra-procedure images. In this paper, we present a convolutional neural network (CNN) approach to liver segmentation in laparoscopic liver procedure videos. Method: We defined a CNN architecture comprising fully-convolutional deep residual networks with multi-resolution loss functions. The CNN was trained in a leave-one-patient-out cross-validation on 2050 video frames from 6 liver resections and 7 laparoscopic staging procedures, and evaluated using the Dice score. Results: The CNN yielded segmentations with Dice scores >=0.95 for the majority of images; however, the inter-patient variability in median Dice score was substantial. Four failure modes were identified from low scoring segmentations: minimal visible liver tissue, inter-patient variability in liver appearance, automatic exposure correction, and pathological liver tissue that mimics non-liver tissue appearance. Conclusion: CNNs offer a feasible approach for accurately segmenting liver from other anatomy on laparoscopic video, but additional data or computational advances are necessary to address challenges due to the high inter-patient variability in liver appearance.
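The Dice score used for evaluation can be sketched as follows (NumPy assumed; returning 1.0 for two empty masks is a common convention, not necessarily the authors'):

```python
import numpy as np

def dice_score(seg, gt):
    """Dice overlap between two binary masks: 2|A & B| / (|A| + |B|)."""
    seg = np.asarray(seg, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = seg.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(seg, gt).sum() / denom
```

A Dice score of 0.95, as reported above, means the predicted liver mask and the reference mask overlap almost completely relative to their combined area.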
Smoke regions extraction based on two steps segmentation and motion detection in early fire
NASA Astrophysics Data System (ADS)
Jian, Wenlin; Wu, Kaizhi; Yu, Zirong; Chen, Lijuan
2018-03-01
Aiming at the problem of video-based smoke detection in early fire, this paper proposes a method to extract suspected smoke regions by combining a two-step segmentation with motion characteristics. Early smoldering smoke appears as gray or gray-white regions. In the first stage, regions of interest (ROIs) containing smoke are obtained using the two-step segmentation. Suspected smoke regions are then detected by combining the segmentation results with motion detection. Finally, morphological processing is used to extract the smoke regions. The Otsu algorithm is used as the segmentation method, and the ViBe algorithm is used to detect the motion of smoke. The proposed method was tested on six videos containing smoke; the experimental results, checked against visual observation, show the effectiveness of the proposed method.
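The Otsu step can be sketched in NumPy as below. This is an illustrative 8-bit grayscale implementation; the paper's two-step pipeline and the ViBe motion model are not reproduced here.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the grayscale histogram."""
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    p = hist.astype(np.float64) / hist.sum()
    omega = np.cumsum(p)                  # probability of class 0 up to k
    mu = np.cumsum(p * np.arange(256))    # cumulative mean up to k
    mu_t = mu[-1]                         # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))
```

Pixels above the returned threshold form one class (e.g., bright gray-white smoke candidates) and the rest form the background class.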
News video story segmentation method using fusion of audio-visual features
NASA Astrophysics Data System (ADS)
Wen, Jun; Wu, Ling-da; Zeng, Pu; Luan, Xi-dao; Xie, Yu-xiang
2007-11-01
News story segmentation is an important aspect of news video analysis. This paper presents a method for news video story segmentation. Different from prior works, which are based on visual features alone, the proposed technique uses audio features as the baseline and fuses visual features with them to refine the results. It first selects silence clips as audio candidate points, and selects shot boundaries and anchor shots as two kinds of visual candidate points. It then uses the audio candidates as cues and develops a fusion method that effectively exploits the diverse visual candidates to refine the audio candidates and obtain story boundaries. Experimental results show that this method has high efficiency and adapts well to different kinds of news video.
Science documentary video slides to enhance education and communication
NASA Astrophysics Data System (ADS)
Byrne, J. M.; Little, L. J.; Dodgson, K.
2010-12-01
Documentary production can convey powerful messages using a combination of authentic science and reinforcing video imagery. Conventional documentary production contains too much information for many viewers to follow; hence many powerful points may be lost. But documentary productions that are re-edited into short video sequences and made available through web based video servers allow the teacher/viewer to access the material as video slides. Each video slide contains one critical discussion segment of the larger documentary. A teacher/viewer can review the documentary one segment at a time in a class room, public forum, or in the comfort of home. The sequential presentation of the video slides allows the viewer to best absorb the documentary message. The website environment provides space for additional questions and discussion to enhance the video message.
NASA Technical Reports Server (NTRS)
Smith, Michael A.; Kanade, Takeo
1997-01-01
Digital video is rapidly becoming important for education, entertainment, and a host of multimedia applications. With the size of the video collections growing to thousands of hours, technology is needed to effectively browse segments in a short time without losing the content of the video. We propose a method to extract the significant audio and video information and create a "skim" video which represents a very short synopsis of the original. The goal of this work is to show the utility of integrating language and image understanding techniques for video skimming by extraction of significant information, such as specific objects, audio keywords and relevant video structure. The resulting skim video is much shorter, where compaction is as high as 20:1, and yet retains the essential content of the original segment.
Automated detection of videotaped neonatal seizures based on motion segmentation methods.
Karayiannis, Nicolaos B; Tao, Guozhi; Frost, James D; Wise, Merrill S; Hrachovy, Richard A; Mizrahi, Eli M
2006-07-01
This study was aimed at the development of a seizure detection system by training neural networks using quantitative motion information extracted by motion segmentation methods from short video recordings of infants monitored for seizures. The motion of the infants' body parts was quantified by temporal motion strength signals extracted from video recordings by motion segmentation methods based on optical flow computation. The area of each frame occupied by the infants' moving body parts was segmented by direct thresholding, by clustering of the pixel velocities, and by clustering the motion parameters obtained by fitting an affine model to the pixel velocities. The computational tools and procedures developed for automated seizure detection were tested and evaluated on 240 short video segments selected and labeled by physicians from a set of video recordings of 54 patients exhibiting myoclonic seizures (80 segments), focal clonic seizures (80 segments), and random infant movements (80 segments). The experimental study described in this paper provided the basis for selecting the most effective strategy for training neural networks to detect neonatal seizures as well as the decision scheme used for interpreting the responses of the trained neural networks. Depending on the decision scheme used for interpreting the responses of the trained neural networks, the best neural networks exhibited sensitivity above 90% or specificity above 90%. The best among the motion segmentation methods developed in this study produced quantitative features that constitute a reliable basis for detecting myoclonic and focal clonic neonatal seizures. The performance targets of this phase of the project may be achieved by combining the quantitative features described in this paper with those obtained by analyzing motion trajectory signals produced by motion tracking methods. A video system based upon automated analysis potentially offers a number of advantages. 
Infants who are at risk for seizures could be monitored continuously using relatively inexpensive and non-invasive video techniques that supplement direct observation by nursery personnel. This would represent a major advance in seizure surveillance and offers the possibility for earlier identification of potential neurological problems and subsequent intervention.
Common and Innovative Visuals: A sparsity modeling framework for video.
Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder
2014-05-02
Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depicts the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework by CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.
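The common/innovative decomposition can be illustrated with a simple approximation. The sketch below uses the pixel-wise temporal median as the common frame, whereas CIV jointly estimates both components via compressed sensing; NumPy is assumed.

```python
import numpy as np

def common_innovative(frames):
    """Split a stack of frames from one scene into a 'common' frame plus
    per-frame 'innovations' (sparse residuals). CIV estimates both jointly
    via compressed sensing; here the common frame is approximated by the
    pixel-wise temporal median, which suppresses transient foreground."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    common = np.median(stack, axis=0)
    innovations = stack - common
    return common, innovations
```

For a mostly static scene the innovations are near zero except where objects move, which is why they are a natural handle for tracking, inpainting, and scene-change detection.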
Video segmentation and camera motion characterization using compressed data
NASA Astrophysics Data System (ADS)
Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain
1997-10-01
We address the problem of automatically extracting visual indexes from videos in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detect scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without the need for video decompression. Experimental results are reported for a database of news video clips.
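The least-squares pan/tilt/zoom fit can be sketched as below. This is a minimal NumPy illustration with a simplified linear camera model (coordinates taken relative to the image center); the paper's exact parameterization may differ.

```python
import numpy as np

def fit_pan_tilt_zoom(xs, ys, vx, vy):
    """Least-squares fit of a pan/tilt/zoom model to a motion-vector field:
    vx = pan + zoom * x, vy = tilt + zoom * y, with (x, y) relative to the
    image center. Returns (pan, tilt, zoom)."""
    n = len(xs)
    A = np.zeros((2 * n, 3))
    b = np.concatenate([vx, vy])
    A[:n, 0] = 1.0   # pan contributes to horizontal components
    A[n:, 1] = 1.0   # tilt contributes to vertical components
    A[:n, 2] = xs    # zoom scales with distance from the center
    A[n:, 2] = ys
    (pan, tilt, zoom), *_ = np.linalg.lstsq(A, b, rcond=None)
    return pan, tilt, zoom
```

The macroblock motion vectors already present in the MPEG-1 stream can be fed in directly, which is what makes this characterization possible without decompression.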
Temporally coherent 4D video segmentation for teleconferencing
NASA Astrophysics Data System (ADS)
Ehmann, Jana; Guleryuz, Onur G.
2013-09-01
We develop an algorithm for 4-D (RGB+Depth) video segmentation targeting immersive teleconferencing applications on emerging mobile devices. Our algorithm extracts users from their environments and places them onto virtual backgrounds similar to green-screening. The virtual backgrounds increase immersion and interactivity, relieving the users of the system from distractions caused by disparate environments. Commodity depth sensors, while providing useful information for segmentation, result in noisy depth maps with a large number of missing depth values. By combining depth and RGB information, our work significantly improves the otherwise very coarse segmentation. Further imposing temporal coherence yields compositions where the foregrounds seamlessly blend with the virtual backgrounds with minimal flicker and other artifacts. We achieve said improvements by correcting the missing information in depth maps before fast RGB-based segmentation, which operates in conjunction with temporal coherence. Simulation results indicate the efficacy of the proposed system in video conferencing scenarios.
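The depth-hole correction step can be sketched with a simple neighborhood-median fill. This is an illustrative NumPy stand-in; the paper's actual correction and its interplay with RGB-based segmentation are more involved.

```python
import numpy as np

def fill_depth_holes(depth, invalid=0, size=3):
    """Fill missing depth values (marked `invalid`) with the median of the
    valid neighbors in a size x size window; sweep repeatedly so larger
    holes are filled from their rim inward."""
    d = depth.astype(np.float64).copy()
    r = size // 2
    changed = True
    while changed:
        changed = False
        for i, j in np.argwhere(d == invalid):
            win = d[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            valid = win[win != invalid]
            if valid.size:
                d[i, j] = np.median(valid)
                changed = True
    return d
```

The median is preferred over the mean here because it does not blur across depth discontinuities at the user's silhouette, which is exactly where commodity sensors drop the most values.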
Causal Video Object Segmentation From Persistence of Occlusions
2015-05-01
Precision, recall, and F-measure are reported on the ground-truth annotations converted to binary masks. Note that "number of …" cannot be evaluated … due to lack of occlusions.
2013-10-03
Following the setup in the literature ([13, 14]), 5 of the videos (birdfall, cheetah, girl, monkeydog, and parachute) are used for evaluation (since the …). S denotes the segmentation labeling results of the method, GT the ground-truth labeling of the video, and F the … (figure panels: (a) Birdfall, (b) Cheetah, (c) Girl, (d) Monkeydog). Per-video labeling error (lower is better):
Video: Ours / [14] / [13] / [20] / [6]
birdfall: 155 / 189 / 288 / 252 / 454
cheetah: 633 / 806 / 905 / 1142 / 1217
girl: 1488 / 1698 / 1785 / 1304 / 1755
monkeydog: 365 / 472 / 521 / 563 / 683
Multilevel wireless capsule endoscopy video segmentation
NASA Astrophysics Data System (ADS)
Hwang, Sae; Celebi, M. Emre
2010-03-01
Wireless Capsule Endoscopy (WCE) is a relatively new technology (FDA approved in 2002) allowing doctors to view most of the small intestine. WCE transmits more than 50,000 video frames per examination, and the visual inspection of the resulting video is a highly time-consuming task even for the experienced gastroenterologist. Typically, a medical clinician spends one to two hours analyzing a WCE video. To reduce the assessment time, it is critical to develop a technique to automatically discriminate digestive organs and group the video into shots, each consisting of the same or similar content. In this paper a multi-level WCE video segmentation methodology is presented to reduce the examination time.
Cooperative Educational Project - The Southern Appalachians: A Changing World
NASA Astrophysics Data System (ADS)
Clark, S.; Back, J.; Tubiolo, A.; Romanaux, E.
2001-12-01
The Southern Appalachian Mountains, a popular recreation area known for its beauty and rich biodiversity, was chosen by the U.S. Geological Survey as the site to produce a video, booklet, and teachers' guide to explain basic geologic principles and how long-term geologic processes affect landscapes, ecosystems, and the quality of human life. The video was produced in cooperation with the National Park Service and has benefited from the advice of the Southern Appalachian Man and Biosphere Cooperative, a group of 11 Federal and three State agencies that works to promote the environmental health, stewardship, and sustainable development of the resources of the region. Much of the information in the video is included in the booklet. A teachers' guide provides supporting activities that teachers may use to reinforce the concepts presented in the video and booklet. Although the Southern Appalachians include some of the most visited recreation areas in the country, few are aware of the geologic underpinnings that have contributed to the beauty, biological diversity, and quality of human life in the region. The video includes several animated segments that show paleogeographic reconstructions of the Earth and movements of the North American continent over time; the formation of the Ocoee sedimentary basin beginning about 750 million years ago; the collision of the North American and African continents about 270 million years ago; the formation of granites and similar rocks, faults, and geologic windows; and the extent of glaciation in North America. The animated segments are tied to familiar public-access localities in the region. They illustrate geologic processes and time periods, making the geologic setting of the region more understandable to tourists and local students. The video reinforces the concept that understanding geologic processes and settings is an important component of informed land management to sustain the quality of life in a region.
The video and a teachers' guide will be distributed by the Southern Appalachian Man and Biosphere to local middle and high schools, libraries, and visitors centers in the region. They will also be distributed by the U.S. Geological Survey and sold in Park Service and Forest Service gift shops in the region.
Rivera, Reynaldo; Santos, David; Brändle, Gaspar; Cárdaba, Miguel Ángel M
2016-04-01
Exposure to media violence might have detrimental effects on psychological adjustment and is associated with aggression-related attitudes and behaviors. As a result, many media literacy programs were implemented to tackle that major public health issue. However, there is little evidence about their effectiveness. Evaluating design effectiveness, particularly regarding targeting process, would prevent adverse effects and improve the evaluation of evidence-based media literacy programs. The present research examined whether or not different relational lifestyles may explain the different effects of an antiviolence intervention program. Based on relational and lifestyles theory, the authors designed a randomized controlled trial and applied an analysis of variance 2 (treatment: experimental vs. control) × 4 (lifestyle classes emerged from data using latent class analysis: communicative vs. autonomous vs. meta-reflexive vs. fractured). Seven hundred and thirty-five Italian students distributed in 47 classes participated anonymously in the research (51.3% females). Participants completed a lifestyle questionnaire as well as their attitudes and behavioral intentions as the dependent measures. The results indicated that the program was effective in changing adolescents' attitudes toward violence. However, behavioral intentions toward consumption of violent video games were moderated by lifestyles. Those with communicative relational lifestyles showed fewer intentions to consume violent video games, while a boomerang effect was found among participants with problematic lifestyles. Adolescents' lifestyles played an important role in influencing the effectiveness of an intervention aimed at changing behavioral intentions toward the consumption of violent video games. For that reason, audience lifestyle segmentation analysis should be considered an essential technique for designing, evaluating, and improving media literacy programs. © The Author(s) 2016.
WCE video segmentation using textons
NASA Astrophysics Data System (ADS)
Gallo, Giovanni; Granata, Eliana
2010-03-01
Wireless Capsule Endoscopy (WCE) integrates wireless transmission with image and video technology. It has been used to examine the small intestine non-invasively. Medical specialists look for significant events in a WCE video by direct visual inspection, manually labelling clinically relevant frames in tiring sessions of up to one hour, which limits WCE usage. Automatically discriminating digestive organs such as the esophagus, stomach, small intestine, and colon is therefore of great advantage. In this paper we propose to use textons for the automatic detection of abrupt changes within a video. In particular, we consider as features, for each frame, hue, saturation, value, high-frequency energy content, and the responses to a bank of Gabor filters. The experiments were conducted on ten video segments extracted from WCE videos in which the significant events had been previously labelled by experts. Results show that the proposed method can eliminate up to 70% of the frames from further investigation, so that the doctors' direct analysis can be concentrated on eventful frames only. A graphical tool showing sudden changes in the texton frequencies of each frame is also proposed as a visual aid for finding clinically relevant segments of the video.
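The Gabor-filter-bank features can be sketched as below. This is a minimal NumPy illustration: the kernel size, orientations, and frequency are arbitrary choices rather than the authors' parameters, and the naive convolution is only meant for small images.

```python
import numpy as np

def gabor_kernel(theta, freq, sigma=3.0, size=15):
    """Real part of a 2-D Gabor filter at orientation theta (radians) and
    spatial frequency freq (cycles per pixel)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    xr = x * np.cos(theta) + y * np.sin(theta)
    gauss = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return gauss * np.cos(2 * np.pi * freq * xr)

def gabor_energies(image, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                   freq=0.25):
    """Mean absolute filter response per orientation (naive valid-mode
    convolution; one scalar feature per orientation)."""
    img = np.asarray(image, dtype=np.float64)
    feats = []
    for th in thetas:
        k = gabor_kernel(th, freq)
        kh, kw = k.shape
        out_h, out_w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
        resp = np.zeros((out_h, out_w))
        for i in range(out_h):
            for j in range(out_w):
                resp[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
        feats.append(np.abs(resp).mean())
    return np.array(feats)
```

Texture that oscillates along one orientation lights up the matching filter and leaves the orthogonal one nearly silent, so per-frame response statistics discriminate mucosal textures across organs.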
Improved segmentation of occluded and adjoining vehicles in traffic surveillance videos
NASA Astrophysics Data System (ADS)
Juneja, Medha; Grover, Priyanka
2013-12-01
Occlusion in image processing refers to the concealment of any part of an object, or the whole object, from the view of an observer. Real-time videos captured by static cameras on roads often contain overlapping vehicles and, hence, occlusion. Occlusion in traffic surveillance videos usually occurs when an object being tracked is hidden by another object, making it difficult for object detection algorithms to distinguish all the vehicles efficiently. Morphological operations also tend to join vehicles in close proximity, resulting in the formation of a single bounding box around more than one vehicle. Such problems lead to errors in further video processing, such as counting the vehicles in a video. The proposed system brings forward an efficient moving-object detection and tracking approach to reduce such errors. This paper uses a successive frame subtraction technique for the detection of moving objects. Further, it implements the watershed algorithm to segment the overlapped and adjoining vehicles. The segmentation results have been improved by the use of noise removal and morphological operations.
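The successive-frame-subtraction step can be sketched as follows (a minimal NumPy illustration with an arbitrary threshold; the watershed stage that splits touching vehicles is not reproduced here):

```python
import numpy as np

def moving_object_mask(prev, curr, nxt, threshold=25):
    """Successive frame subtraction: a pixel is 'moving' if it differs from
    both the previous and the next frame by more than `threshold`."""
    prev, curr, nxt = (np.asarray(f, dtype=np.float64)
                       for f in (prev, curr, nxt))
    d1 = np.abs(curr - prev) > threshold
    d2 = np.abs(nxt - curr) > threshold
    return np.logical_and(d1, d2)
```

Requiring both differences to fire suppresses ghosting: a pixel that changed only relative to the previous frame (an uncovered background region) is not marked as a moving vehicle.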
A content-based news video retrieval system: NVRS
NASA Astrophysics Data System (ADS)
Liu, Huayong; He, Tingting
2009-10-01
This paper focuses on TV news programs and designs a content-based news video browsing and retrieval system, NVRS, which makes it convenient for users to quickly browse and retrieve news video by different categories such as politics, finance, amusement, etc. Combining audiovisual features and caption text information, the system automatically segments a complete news program into separate news stories. NVRS supports keyword-based news story retrieval and category-based news story browsing, and generates a key-frame-based video abstract for each story. Experiments show that the story segmentation method is effective and the retrieval is also efficient.
Integrated approach to multimodal media content analysis
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1999-12-01
In this work, we present a system for the automatic segmentation, indexing, and retrieval of audiovisual data based on the combination of audio, visual, and textual content analysis. The video stream is demultiplexed into audio, image, and caption components. Then, a semantic segmentation of the audio signal based on audio content analysis is conducted, and each segment is indexed as one of the basic audio types. The image sequence is segmented into shots based on visual information analysis, and keyframes are extracted from each shot. Meanwhile, keywords are detected from the closed caption. Index tables are designed for both linear and non-linear access to the video. Experiments show that the proposed methods for multimodal media content analysis are effective and that the integrated framework achieves satisfactory results for video information filtering and retrieval.
Computer aided diagnosis of diabetic peripheral neuropathy
NASA Astrophysics Data System (ADS)
Chekh, Viktor; Soliz, Peter; McGrew, Elizabeth; Barriga, Simon; Burge, Mark; Luan, Shuang
2014-03-01
Diabetic peripheral neuropathy (DPN) refers to the nerve damage that can occur in diabetes patients. It most often affects the extremities, such as the feet, and can lead to peripheral vascular disease, deformity, infection, ulceration, and even amputation. The key to managing the diabetic foot is prevention and early detection. Unfortunately, existing diagnostic techniques are mostly based on patient sensations and exhibit significant inter- and intra-observer differences. We have developed a computer aided diagnostic (CAD) system for diabetic peripheral neuropathy. The thermal response of the feet of diabetic patients following a cold stimulus is captured using an infrared camera. The plantar foot in each frame of the thermal video is segmented and registered so that individual points or regions can be tracked. The temperature recovery of each point on the plantar foot is extracted using our bio-thermal model and analyzed. Regions that exhibit abnormal recovery are automatically identified to help physicians recognize problematic areas. The key to our CAD system is the segmentation of infrared video. Compared to normal digital video, segmenting infrared video poses two main challenges: (1) as the foot warms up, it also warms its surroundings, creating ever-changing contrast; and (2) there may be significant motion during imaging. To overcome this, a hybrid segmentation algorithm was developed based on techniques such as continuous max-flow, model based segmentation, shape preservation, convex hulls, and temperature normalization. Verification of the automatic segmentation and registration against manual segmentation and markers shows good agreement.
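The abstract does not give the form of the bio-thermal model, but the per-point temperature-recovery analysis can be illustrated with a generic exponential recovery law fitted to a point's temperature trace. The model form, parameter names, and thresholds here are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery_model(t, T_eq, dT, tau):
    """Generic temperature recovery after a cold stimulus.

    T(t) = T_eq - dT * exp(-t / tau): the skin returns toward an
    equilibrium temperature T_eq with time constant tau.  This is a
    surrogate for the paper's bio-thermal model, whose exact form is
    not given in the abstract.
    """
    return T_eq - dT * np.exp(-t / tau)

# Synthetic recovery curve for one plantar-foot point (minutes, deg C).
t = np.linspace(0, 10, 60)
rng = np.random.default_rng(0)
temps = recovery_model(t, 32.0, 8.0, 2.5) + rng.normal(0, 0.05, t.size)

(T_eq, dT, tau), _ = curve_fit(recovery_model, t, temps, p0=(30.0, 5.0, 1.0))
# An unusually long time constant tau would flag a region with
# impaired recovery ability.
print(round(tau, 1))
```

Fitting one such curve per tracked point yields a recovery map over the plantar foot, from which abnormal regions can be flagged.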
Video segmentation using keywords
NASA Astrophysics Data System (ADS)
Ton-That, Vinh; Vong, Chi-Tai; Nguyen-Dao, Xuan-Truong; Tran, Minh-Triet
2018-04-01
At the DAVIS-2016 Challenge, many state-of-the-art video segmentation methods achieved promising results, but they still depend heavily on annotated frames to distinguish between background and foreground, and creating such annotations accurately takes considerable time and effort. In this paper, we introduce a method to segment objects in video based on keywords given by the user. First, we use a real-time object detection system, YOLOv2, to identify regions in the first frame containing objects whose labels match the given keywords. Then, for each region identified in the previous step, we use the Pyramid Scene Parsing Network to assign each pixel as foreground or background. These frames can be used as input for the Object Flow algorithm to perform segmentation on the entire video. We conduct experiments on a subset of the DAVIS-2016 dataset at half its original size, which shows that our method can handle many popular classes in the PASCAL VOC 2012 dataset with acceptable accuracy, about 75.03%. We suggest broader testing and combination with other methods to improve this result in the future.
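The keyword-matching step between the detector output and the user's query reduces to a simple label filter. The tuple layout below is a hypothetical stand-in for YOLOv2 output, not the paper's actual data structure.

```python
def regions_for_keywords(detections, keywords):
    """Keep detector outputs whose class label matches a user keyword.

    `detections` is a list of (label, confidence, box) tuples as a
    hypothetical stand-in for YOLOv2 output; the real system would
    feed the surviving boxes to the scene-parsing network.
    """
    wanted = {k.lower() for k in keywords}
    return [d for d in detections if d[0].lower() in wanted]

detections = [
    ("dog", 0.91, (12, 40, 130, 160)),
    ("car", 0.88, (200, 80, 420, 240)),
    ("person", 0.75, (50, 10, 110, 220)),
]
print(regions_for_keywords(detections, ["dog", "person"]))
```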
Exploring the explaining quality of physics online explanatory videos
NASA Astrophysics Data System (ADS)
Kulgemeyer, Christoph; Peters, Cord H.
2016-11-01
Explaining skills are among the most important skills educators possess, and they have been actively researched in recent years. During the same period, another medium has emerged and become a popular source of information for learners: online explanatory videos, chiefly from the video sharing website YouTube. Their content and explaining quality remain to this day mostly unmonitored, as does their educational impact in formal contexts such as schools or universities. In this study, a framework for explaining quality, which emerged from surveying explaining skills in expert-novice face-to-face dialogues, was used to explore the explaining quality of such videos (36 YouTube explanatory videos on Kepler’s laws and 15 videos on Newton’s third law). The framework consists of 45 categories derived from physics education research that deal with explanation techniques. YouTube provides its own ‘quality measures’ based on surface features such as ‘likes’, views, and comments for each video. The question is whether these measures provide valid information for educators and students who must decide which video to use. We compared the explaining quality with those measures. Our results suggest a correlation between explaining quality and only one of these measures: the number of content-related comments.
Echocardiogram video summarization
NASA Astrophysics Data System (ADS)
Ebadollahi, Shahram; Chang, Shih-Fu; Wu, Henry D.; Takoma, Shin
2001-05-01
This work aims at developing innovative algorithms and tools for summarizing echocardiogram videos. Specifically, we summarize digital echocardiogram videos by temporally segmenting them into their constituent views and representing each view by its most informative frame. For the segmentation we take advantage of the well-defined spatio-temporal structure of echocardiogram videos. Two criteria are used: the presence or absence of color, and the shape of the region of interest (ROI) in each frame of the video. The change in the ROI is due to the different echocardiogram modes present in one study. The representative frame is defined to be the frame corresponding to the end-diastole of the heart cycle. To locate the end-diastole, we track the ECG of each frame to find the exact time the time-marker on the ECG crosses the peak of the R-wave; the corresponding frame is chosen as the key-frame. The entire echocardiogram video can be summarized into either a static summary, which is a storyboard-type summary, or a dynamic summary, which is a concatenation of selected segments of the echocardiogram video. To the best of our knowledge, this is the first automated system for summarizing echocardiogram videos based on visual content.
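The R-wave localization step can be sketched with a peak detector over the ECG trace. The system described above reads the ECG time-marker rendered on each echo frame; for simplicity this sketch assumes the ECG is already available as a 1-D signal sampled at the video frame rate, and the height and refractory-period parameters are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def end_diastole_frames(ecg, fps, min_rr_s=0.4):
    """Locate R-wave peaks in an ECG trace and map them to video frames."""
    # R-waves are the tallest deflections; enforce a refractory period
    # so two detected peaks cannot be closer than min_rr_s seconds.
    peaks, _ = find_peaks(ecg, height=0.5 * ecg.max(),
                          distance=int(min_rr_s * fps))
    return peaks  # frame indices to use as key-frames

# Synthetic ECG at 30 fps: one sharp beat per second for 5 seconds.
fps = 30
t = np.arange(0, 5, 1 / fps)
ecg = np.exp(-((t % 1.0 - 0.5) ** 2) / 0.001)  # sharp peak mid-cycle
print(end_diastole_frames(ecg, fps))
```

Each returned index selects one key-frame per heart cycle, which is then used to build the static or dynamic summary.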
The video watermarking container: efficient real-time transaction watermarking
NASA Astrophysics Data System (ADS)
Wolf, Patrick; Hauer, Enrico; Steinebach, Martin
2008-02-01
When transaction watermarking is used to secure sales in online shops by embedding transaction-specific watermarks, the major challenge is embedding efficiency: maximum speed with minimal workload. This is true for all types of media, but video transaction watermarking presents a double challenge. Video files are not only larger than, for example, music files of the same playback time; video watermarking algorithms also have higher complexity than algorithms for other types of media. Online shops that want to protect their videos by transaction watermarking are therefore faced with servers that must work harder and longer for every sold medium than in audio sales. In the past, many algorithms responded to this challenge by reducing their complexity, but this usually costs either robustness or transparency. This paper presents a different approach. The container technology separates watermark embedding into two stages: a preparation stage and a finalization stage. In the preparation stage, the video is divided into embedding segments; for each segment, one copy marked with "0" and another marked with "1" is created. This stage is computationally expensive but only needs to be done once. In the finalization stage, the watermarked video is assembled from the embedding segments according to the watermark message. This stage is very fast and involves no complex computations, allowing efficient creation of individually watermarked video files.
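The cheap finalization stage can be sketched as pure segment selection: given the pre-marked "0" and "1" copies of each segment, a per-transaction video is just a concatenation chosen by the message bits. The data layout below is an illustrative assumption, not the paper's container format.

```python
def assemble_watermarked_video(segments, message_bits):
    """Finalization stage of the container approach (illustrative sketch).

    `segments` is a list of dicts, each holding the "0"-marked and
    "1"-marked copy of one embedding segment prepared in advance; the
    watermarked video is the concatenation selected by the message
    bits, so no expensive embedding happens at sale time.
    """
    assert len(segments) >= len(message_bits)
    out = []
    for seg, bit in zip(segments, message_bits):
        out.append(seg[bit])  # pick the pre-marked copy for this bit
    # Segments beyond the message length carry a fixed default mark.
    out.extend(seg["0"] for seg in segments[len(message_bits):])
    return b"".join(out)

# Hypothetical 3-segment video and the 2-bit transaction ID "10".
segments = [{"0": b"A0", "1": b"A1"},
            {"0": b"B0", "1": b"B1"},
            {"0": b"C0", "1": b"C1"}]
print(assemble_watermarked_video(segments, "10"))
```

Because only byte concatenation happens per sale, the server-side cost per transaction is essentially I/O bound.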
Telesign: a videophone system for sign language distant communication
NASA Astrophysics Data System (ADS)
Mozelle, Gerard; Preteux, Francoise J.; Viallet, Jean-Emmanuel
1998-09-01
This paper presents a low bit rate videophone system for deaf people communicating by means of sign language. Classic video conferencing systems have focused on head-and-shoulders sequences, which are not well suited for sign language transmission since hearing impaired people also use their hands and arms to communicate. To address this requirement, we have developed a two-step content-based video coding system based on: (1) a segmentation step, in which four or five video objects (VOs) are extracted using a cooperative approach combining color-based and morphological segmentation; and (2) a VO coding step, achieved using a standardized MPEG-4 video toolbox. Results on encoded sign language video sequences, presented for three target bit rates (32 kbit/s, 48 kbit/s, and 64 kbit/s), demonstrate the efficiency of the presented approach.
Tracking cells in Life Cell Imaging videos using topological alignments.
Mosig, Axel; Jäger, Stefan; Wang, Chaofeng; Nath, Sumit; Ersoy, Ilker; Palaniappan, Kannappan; Chen, Su-Shing
2009-07-16
With the increasing availability of live cell imaging technology, tracking cells and other moving objects in live cell videos has become a major challenge for bioimage informatics. An inherent problem for most cell tracking algorithms is over- or under-segmentation of cells - many algorithms tend to recognize one cell as several cells or vice versa. We propose to approach this problem through so-called topological alignments, which we apply to address the problem of linking segmentations of two consecutive frames in the video sequence. Starting from the output of a conventional segmentation procedure, we align pairs of consecutive frames through assigning sets of segments in one frame to sets of segments in the next frame. We achieve this through finding maximum weighted solutions to a generalized "bipartite matching" between two hierarchies of segments, where we derive weights from relative overlap scores of convex hulls of sets of segments. For solving the matching task, we rely on an integer linear program. Practical experiments demonstrate that the matching task can be solved efficiently in practice, and that our method is both effective and useful for tracking cells in data sets derived from a so-called Large Scale Digital Cell Analysis System (LSDCAS). The source code of the implementation is available for download from http://www.picb.ac.cn/patterns/Software/topaln.
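The frame-linking idea can be illustrated in its simplest form. The paper solves a generalized matching over *sets* of segments with an integer linear program; the sketch below shows only the one-to-one special case, where maximizing total overlap reduces to the Hungarian algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_segments(overlap):
    """Link segments of two consecutive frames by maximum-weight matching.

    `overlap[i, j]` is a relative overlap score between segment i in
    frame t and segment j in frame t+1.  Negating the scores turns the
    cost-minimizing Hungarian algorithm into a weight maximizer.
    """
    rows, cols = linear_sum_assignment(-overlap)
    return list(zip(rows, cols))

# Three segments per frame; high scores mark likely correspondences.
overlap = np.array([[0.9, 0.1, 0.0],
                    [0.2, 0.1, 0.7],
                    [0.0, 0.8, 0.1]])
print(link_segments(overlap))  # pairs each segment with its best successor
```

Handling over- and under-segmentation, as the paper does, requires extending the assignment to sets of segments drawn from the segment hierarchies, which is why an ILP is used there instead.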
An improvement analysis on video compression using file segmentation
NASA Astrophysics Data System (ADS)
Sharma, Shubhankar; Singh, K. John; Priya, M.
2017-11-01
Over the past two decades, the rapid evolution of the Internet has led to a massive rise in video technology and in video consumption, which now accounts for the bulk of data traffic. Because video occupies so much of the data on the World Wide Web, reducing the bandwidth it consumes would ease the burden on the Internet and let users access video data more easily. To this end, many video codecs have been developed, such as HEVC/H.265 and VP9, which raises the question of which technology is superior in terms of rate distortion and coding standard. This paper addresses the difficulty of achieving low delay in video compression and in video applications such as ad-hoc video conferencing/streaming or surveillance. It also benchmarks the HEVC and VP9 compression techniques through subjective evaluations of high-definition video content played back in web browsers. Moreover, it presents an experimental approach of dividing a video file into several segments for compression and reassembling them afterwards, to improve the efficiency of video compression on the web as well as in offline mode.
STS-114 Flight Day 10 Highlights
NASA Technical Reports Server (NTRS)
2005-01-01
On Flight Day 10 of the STS-114 mission the International Space Station (ISS) is seen in low lighting while the Space Station Remote Manipulator System (SSRMS), also known as Canadarm 2 grapples the Raffaello Multipurpose Logistics Module (MPLM) in preparation for its undocking the following day. Members of the shuttle crew (Commander Eileen Collins, Pilot James Kelly, Mission Specialists Soichi Noguchi, Stephen Robinson, Andrew Thomas, Wendy Lawrence, and Charles Camarda) and the Expedition 11 crew (Commander Sergei Krikalev and NASA ISS Science Officer and Flight Engineer John Phillips) of the ISS read statements in English and Russian in a ceremony for astronauts who gave their lives. Interview segments include one of Collins, Robinson, and Camarda, wearing red shirts to commemorate the STS-107 Columbia crew, and one of Collins and Noguchi on board the ISS, which features voice over from an interpreter translating questions from the Japanese prime minister. The video also features a segment showing gap fillers on board Discovery after being removed from underneath the orbiter, and another segment which explains an experimental plug for future shuttle repairs being tested onboard the mid deck.
Stochastic modeling of soundtrack for efficient segmentation and indexing of video
NASA Astrophysics Data System (ADS)
Naphade, Milind R.; Huang, Thomas S.
1999-12-01
Tools for efficient and intelligent management of digital content are essential for digital video data management. An extremely challenging research area in this context is multimedia analysis and understanding, and the capabilities of audio analysis in particular for video data management are yet to be fully exploited. We present a novel scheme for indexing and segmentation of video by analyzing the audio track, and apply it to the segmentation and indexing of movies. We build models for several events of interest in the motion picture soundtrack, including music, human speech, and silence, and propose the use of hidden Markov models to model the dynamics of the soundtrack and detect audio events. Using these models we segment and index the soundtrack. A practical problem in motion picture soundtracks is that the audio is of a composite nature: sounds from different sources are mixed, with speech in the foreground over background music being a common example. The coexistence of multiple individual audio sources forces us to model such composite events explicitly. Experiments reveal that explicit modeling gives better results than modeling individual audio events separately.
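Once per-class HMMs (or one HMM whose states correspond to audio classes) are trained, segmentation amounts to decoding the most likely state sequence. A minimal Viterbi decoder over a toy two-class soundtrack, with assumed transition and emission probabilities, looks like this:

```python
import numpy as np

def viterbi(log_trans, log_emit):
    """Most likely state path for a soundtrack modelled by an HMM.

    `log_trans[i, j]` is the log-probability of moving from audio
    class i to class j (e.g. silence, speech, music); `log_emit[t, i]`
    is the log-likelihood of frame t under class i.  Uniform initial
    probabilities are assumed for simplicity.
    """
    T, N = log_emit.shape
    score = log_emit[0] - np.log(N)
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans  # N x N candidate scores
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emit[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# 2-class toy: sticky transitions; emissions favour class 0, then 1.
log_trans = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
log_emit = np.log(np.array([[0.8, 0.2]] * 3 + [[0.2, 0.8]] * 3))
print(viterbi(log_trans, log_emit))
```

The sticky transition matrix discourages spurious class switches, so the decoded path yields contiguous audio segments rather than frame-by-frame labels.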
Video Modeling by Experts with Video Feedback to Enhance Gymnastics Skills
ERIC Educational Resources Information Center
Boyer, Eva; Miltenberger, Raymond G.; Batsche, Catherine; Fogel, Victoria
2009-01-01
The effects of combining video modeling by experts with video feedback were analyzed with 4 female competitive gymnasts (7 to 10 years old) in a multiple baseline design across behaviors. During the intervention, after the gymnast performed a specific gymnastics skill, she viewed a video segment showing an expert gymnast performing the same skill…
NASA Astrophysics Data System (ADS)
Hatze, Herbert; Baca, Arnold
1993-01-01
The development of noninvasive techniques for determining biomechanical body segment parameters (volumes, masses, the three principal moments of inertia, the three local coordinates of the segmental mass centers, etc.) is receiving increasing attention from the medical sciences (e.g., orthopaedic gait analysis), bioengineering, sport biomechanics, and the various space programs. In the present paper, a novel method is presented for determining body segment parameters rapidly and accurately. It is based on video-image processing of four different body configurations and a finite mass-element human body model. The four video images of the subject in question are recorded against a black background, permitting the application of shape recognition procedures that incorporate edge detection and calibration algorithms. In this way, a total of 181 object space dimensions of the subject's body segments can be reconstructed and used as anthropometric input data for the mathematical finite mass-element body model. The latter comprises 17 segments (abdomino-thoracic, head-neck, shoulders, upper arms, forearms, hands, abdomino-pelvic, thighs, lower legs, feet) and enables the user to compute all the required segment parameters for each of the 17 segments by means of the associated computer program. The hardware requirements are an IBM-compatible PC (1 MB memory) operating under MS-DOS or PC-DOS (version 3.1 onwards) and incorporating a VGA board with a feature connector for connecting it to a super video windows framegrabber board, for which a 16-bit slot must be available. In addition, a VGA monitor (50-70 Hz, horizontal scan rate at least 31.5 kHz), a common video camera and recorder, and a simple rectangular calibration frame are required. The advantage of the new method lies in its ease of application, its comparatively high accuracy, and the rapid availability of the body segment parameters, which is particularly useful in clinical practice. An example of its practical application illustrates the technique.
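Given a finite mass-element model, the segment parameters listed above follow from standard mechanics. The sketch below computes mass, mass center, and moments of inertia about segment-fixed axes for one segment; the element layout is an assumption, since the paper's 17-segment model is not detailed in the abstract.

```python
import numpy as np

def segment_parameters(positions, masses):
    """Mass, mass centre, and axis moments of inertia of a body segment.

    The segment is represented by finite mass elements at `positions`
    (n x 3 array, metres) with element `masses` (kg), mirroring the
    finite mass-element idea of the paper.
    """
    m = masses.sum()
    com = (positions * masses[:, None]).sum(axis=0) / m
    r = positions - com
    # Moments of inertia about segment-fixed axes through the mass centre.
    Ix = (masses * (r[:, 1] ** 2 + r[:, 2] ** 2)).sum()
    Iy = (masses * (r[:, 0] ** 2 + r[:, 2] ** 2)).sum()
    Iz = (masses * (r[:, 0] ** 2 + r[:, 1] ** 2)).sum()
    return m, com, (Ix, Iy, Iz)

# Toy segment: two equal 1 kg point masses 0.2 m apart along x.
pos = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0]])
mass = np.array([1.0, 1.0])
total, com, inertia = segment_parameters(pos, mass)
print(total, com, inertia)
```

In the actual method, the element positions and volumes come from the 181 reconstructed object-space dimensions, and element masses from assumed tissue densities.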
Creating and Using Video Segments for Rural Teacher Education.
ERIC Educational Resources Information Center
Ludlow, Barbara L.; Duff, Michael C.
This paper provides guidelines for using video presentations in teacher education programs in special education. The simplest use of video is to provide students with illustrations of basic concepts, demonstrations of specific skills, or examples of model programs and practices. Video can also deliver contextually rich case studies to stimulate…
Learning Outcomes Afforded by Self-Assessed, Segmented Video-Print Combinations
ERIC Educational Resources Information Center
Koumi, Jack
2015-01-01
Learning affordances of video and print are examined in order to assess the learning outcomes afforded by hybrid video-print learning packages. The affordances discussed for print are: navigability, surveyability and legibility. Those discussed for video are: design for constructive reflection, provision of realistic experiences, presentational…
Optimizing Educational Video through Comparative Trials in Clinical Environments
ERIC Educational Resources Information Center
Aronson, Ian David; Plass, Jan L.; Bania, Theodore C.
2012-01-01
Although video is increasingly used in public health education, studies generally do not implement randomized trials of multiple video segments in clinical environments. Therefore, the specific configurations of educational videos that will have the greatest impact on outcome measures ranging from increased knowledge of important public health…
Segmentation of Pollen Tube Growth Videos Using Dynamic Bi-Modal Fusion and Seam Carving.
Tambo, Asongu L; Bhanu, Bir
2016-05-01
The growth of pollen tubes is of significant interest in plant cell biology, as it provides an understanding of internal cell dynamics that affect observable structural characteristics such as cell diameter, length, and growth rate. However, these parameters can only be measured in experimental videos if the complete shape of the cell is known. The challenge is to accurately obtain the cell boundary in noisy video images. Usually, these measurements are performed by a scientist who manually draws regions of interest on the images displayed on a computer screen. In this paper, a new automated technique is presented for boundary detection by fusing fluorescence and brightfield images, and a new efficient method of obtaining the final cell boundary through the process of Seam Carving is proposed. This approach takes advantage of the nature of the fusion process and of the shape of the pollen tube to efficiently search for the optimal cell boundary. In video segmentation, the first two frames are used to initialize the segmentation process by creating a search space based on a parametric model of the cell shape. Updates to the search space are performed based on the location of past segmentations and a prediction of the next segmentation. Experimental results show comparable accuracy to a previous method but a significant decrease in processing time, which has potential for real-time applications in pollen tube microscopy.
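The Seam Carving boundary search is a dynamic program over a cost image: it finds the minimum-cost connected path, one pixel per row, with adjacent rows differing by at most one column. A minimal sketch of that recurrence, on an assumed toy cost matrix rather than the paper's fused images:

```python
import numpy as np

def min_vertical_seam(cost):
    """Minimum-cost vertical seam through a cost image (Seam Carving).

    `cost` penalizes pixels unlikely to lie on the cell boundary; the
    returned seam gives one column index per row, with adjacent rows
    differing by at most one column, i.e. a connected boundary.
    """
    rows, cols = cost.shape
    acc = cost.astype(float).copy()
    # Forward pass: accumulate the cheapest way to reach each pixel.
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(c - 1, 0), min(c + 2, cols)
            acc[r, c] += acc[r - 1, lo:hi].min()
    # Backward pass: trace the optimal seam from the cheapest endpoint.
    seam = [int(acc[-1].argmin())]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(c - 1, 0), min(c + 2, cols)
        seam.append(lo + int(acc[r, lo:hi].argmin()))
    return seam[::-1]

cost = np.array([[3.0, 1.0, 4.0],
                 [2.0, 8.0, 1.0],
                 [5.0, 2.0, 9.0]])
print(min_vertical_seam(cost))
```

In the paper, the search space is additionally restricted around the predicted cell shape, which is what makes the per-frame search fast.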
Segmentation of the Speaker's Face Region with Audiovisual Correlation
NASA Astrophysics Data System (ADS)
Liu, Yuyu; Sato, Yoichi
The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows that is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing the quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. Heuristic thresholds in this segmentation are avoided by learning the correlation distributions of speaker and background via expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.
NASA Astrophysics Data System (ADS)
1998-07-01
This is a composite tape showing 10 short segments, primarily about asteroids. The segments have short introductory slides with brief descriptions of the shots. The segments are: (1) radar movie of asteroid 1620 Geographos; (2) animation of the trajectories of Toutatis and Earth; (3) animation of a landing on Toutatis; (4) simulated encounter of an asteroid with Earth, including a simulated impact trajectory; (5) an animated overview of the Manrover vehicle; (6) the Near Earth Asteroid Tracking project, including a photograph of the USAF station in Hawaii and animation of Earth approaching 4179 Toutatis and the asteroid Gaspra; (7) live video of the anchor tests of the Champollion anchoring apparatus; (8) a second live video of the Champollion anchor tests, showing anchoring spikes and collision rings; (9) an animated, narrated segment with sound describing the Stardust mission to fly close to a comet and capture cometary material for return to Earth; (10) live video of the drop test of a Stardust replica from a hot air balloon, which includes sound but is not narrated.
Hierarchical video summarization
NASA Astrophysics Data System (ADS)
Ratakonda, Krishna; Sezan, M. Ibrahim; Crinon, Regis J.
1998-12-01
We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem encountered in home videos. We propose a hierarchical key-frame summarization algorithm in which a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing: the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video in increasingly more detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest-level key-frames are recursively clustered using a novel pairwise K-means clustering approach with a temporal consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream, and propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.
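The temporal consecutiveness constraint means every cluster must be a contiguous run of key-frames. The sketch below approximates the paper's constrained pairwise K-means by greedily merging the most similar *adjacent* clusters, which enforces contiguity by construction; this substitution and the one-dimensional features are assumptions for illustration.

```python
import numpy as np

def temporal_cluster(features, k):
    """Cluster key-frames into k temporally contiguous groups.

    `features` is an (n, d) array of per-frame colour features in
    temporal order.  Only adjacent clusters may merge, so every
    resulting cluster is a contiguous run of frame indices.
    """
    clusters = [[i] for i in range(len(features))]
    while len(clusters) > k:
        cents = [features[c].mean(axis=0) for c in clusters]
        dists = [np.linalg.norm(cents[i] - cents[i + 1])
                 for i in range(len(cents) - 1)]
        j = int(np.argmin(dists))  # closest adjacent pair of clusters
        clusters[j:j + 2] = [clusters[j] + clusters[j + 1]]
    return clusters

# Six key-frames whose colour features jump abruptly after frame 2.
feats = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
print(temporal_cluster(feats, 2))
```

Applying this recursively with decreasing k yields exactly the coarse-to-fine hierarchy used for multi-level browsing.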
IBES: a tool for creating instructions based on event segmentation
Mura, Katharina; Petersen, Nils; Huff, Markus; Ghose, Tandra
2013-01-01
Receiving informative, well-structured, and well-designed instructions supports performance and memory in assembly tasks. We describe IBES, a tool with which users can quickly and easily create multimedia, step-by-step instructions by segmenting a video of a task into segments. In a validation study we demonstrate that the step-by-step structure of the visual instructions created by the tool corresponds to the natural event boundaries, which are assessed by event segmentation and are known to play an important role in memory processes. In one part of the study, 20 participants created instructions based on videos of two different scenarios by using the proposed tool. In the other part of the study, 10 and 12 participants respectively segmented videos of the same scenarios yielding event boundaries for coarse and fine events. We found that the visual steps chosen by the participants for creating the instruction manual had corresponding events in the event segmentation. The number of instructional steps was a compromise between the number of fine and coarse events. Our interpretation of results is that the tool picks up on natural human event perception processes of segmenting an ongoing activity into events and enables the convenient transfer into meaningful multimedia instructions for assembly tasks. We discuss the practical application of IBES, for example, creating manuals for differing expertise levels, and give suggestions for research on user-oriented instructional design based on this tool. PMID:24454296
NASA Astrophysics Data System (ADS)
Ezhova, Kseniia; Fedorenko, Dmitriy; Chuhlamov, Anton
2016-04-01
The article deals with methods of image segmentation based on color space conversion, which allow the most efficient detection of a single color against a complex background and under varying lighting, as well as detection of objects on a homogeneous background. We analyze segmentation algorithms of this type and the possibility of implementing them in software. The implemented algorithm is computationally expensive, which limits its application for video analysis; however, it makes it possible to analyze objects in an image when no image dictionary or knowledge base is available, and to choose optimal frame quantization parameters for video analysis.
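The core idea, detecting a single color after a color space conversion, can be sketched with an RGB to HSV transform: hue separates chromaticity from brightness, so a color can be thresholded largely independently of lighting. The hue window and the saturation/value floors below are assumed parameters, and the per-pixel loop is for clarity rather than speed.

```python
import colorsys
import numpy as np

def detect_color(rgb_image, hue_lo, hue_hi, min_sat=0.4, min_val=0.3):
    """Single-colour detection after RGB -> HSV conversion.

    `rgb_image` is an (h, w, 3) float array in [0, 1].  A pixel is kept
    when its hue falls in [hue_lo, hue_hi] and it is saturated and
    bright enough to carry reliable colour information.
    """
    h, w, _ = rgb_image.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            hue, sat, val = colorsys.rgb_to_hsv(*rgb_image[i, j])
            mask[i, j] = (hue_lo <= hue <= hue_hi
                          and sat >= min_sat and val >= min_val)
    return mask

# A red patch (hue ~ 0) on a dim grey background.
img = np.full((4, 4, 3), 0.3)
img[1:3, 1:3] = [0.9, 0.1, 0.1]
print(detect_color(img, 0.0, 0.05).sum())  # 4 red pixels
```

The grey background fails the saturation floor regardless of its brightness, which is what makes the hue-based test robust to uneven lighting.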
Li, Shuben; Chai, Huiping; Huang, Jun; Zeng, Guangqiao; Shao, Wenlong; He, Jianxing
2014-04-01
The purpose of the current study is to present the clinical and surgical results in patients who underwent hybrid video-assisted thoracic surgery with segmental-main bronchial sleeve resection. Thirty-one patients, 27 men and 4 women, underwent segmental-main bronchial sleeve anastomoses for non-small cell lung cancer between May 2004 and May 2011. Twenty-six (83.9%) patients had squamous cell carcinoma, and 5 patients had adenocarcinoma. Six patients were at stage IIB, 24 patients at stage IIIA, and 1 patient at stage IIIB. Secondary sleeve anastomosis was performed in 18 patients, and Y-shaped multiple sleeve anastomosis was performed in 8 patients. Single segmental bronchiole anastomosis was performed in 5 cases. The average time for chest tube removal was 5.6 days. The average length of hospital stay was 11.8 days. No anastomosis fistula developed in any of the patients. The 1-, 2-, and 3-year survival rates were 83.9%, 71.0%, and 41.9%, respectively. Hybrid video-assisted thoracic surgery with segmental-main bronchial sleeve resection is a complex technique that requires training and experience, but it is an effective and safe operation for selected patients.
Race and Emotion in Computer-Based HIV Prevention Videos for Emergency Department Patients
ERIC Educational Resources Information Center
Aronson, Ian David; Bania, Theodore C.
2011-01-01
Computer-based video provides a valuable tool for HIV prevention in hospital emergency departments. However, the type of video content and protocol that will be most effective remain underexplored and the subject of debate. This study employs a new and highly replicable methodology that enables comparisons of multiple video segments, each based on…
Adventure Racing and Organizational Behavior: Using Eco Challenge Video Clips to Stimulate Learning
ERIC Educational Resources Information Center
Kenworthy-U'Ren, Amy; Erickson, Anthony
2009-01-01
In this article, the Eco Challenge race video is presented as a teaching tool for facilitating theory-based discussion and application in organizational behavior (OB) courses. Before discussing the intricacies of the video series itself, the authors present a pedagogically based rationale for using reality TV-based video segments in a classroom…
A Conceptual Characterization of Online Videos Explaining Natural Selection
ERIC Educational Resources Information Center
Bohlin, Gustav; Göransson, Andreas; Höst, Gunnar E.; Tibell, Lena A. E.
2017-01-01
Educational videos on the Internet comprise a vast and highly diverse source of information. Online search engines facilitate access to numerous videos claiming to explain natural selection, but little is known about the degree to which the video content match key evolutionary content identified as important in evolution education research. In…
Joint Multi-Leaf Segmentation, Alignment, and Tracking for Fluorescence Plant Videos.
Yin, Xi; Liu, Xiaoming; Chen, Jin; Kramer, David M
2018-06-01
This paper proposes a novel framework for fluorescence plant video processing. The plant research community is interested in the leaf-level photosynthetic analysis within a plant. A prerequisite for such analysis is to segment all leaves, estimate their structures, and track them over time. We identify this as a joint multi-leaf segmentation, alignment, and tracking problem. First, leaf segmentation and alignment are applied on the last frame of a plant video to find a number of well-aligned leaf candidates. Second, leaf tracking is applied on the remaining frames with leaf candidate transformation from the previous frame. We form two optimization problems with shared terms in their objective functions for leaf alignment and tracking respectively. A quantitative evaluation framework is formulated to evaluate the performance of our algorithm with four metrics. Two models are learned to predict the alignment accuracy and detect tracking failure respectively in order to provide guidance for subsequent plant biology analysis. The limitation of our algorithm is also studied. Experimental results show the effectiveness, efficiency, and robustness of the proposed method.
Bellaïche, Yohanns; Bosveld, Floris; Graner, François; Mikula, Karol; Remesíková, Mariana; Smísek, Michal
2011-01-01
In this paper, we present a novel algorithm for tracking cells in a time-lapse confocal microscopy movie of a Drosophila epithelial tissue during pupal morphogenesis. We consider a 2D + time video as a 3D static image, where frames are stacked atop each other, and using a spatio-temporal segmentation algorithm we obtain information about spatio-temporal 3D tubes representing evolutions of cells. The main idea for tracking is the use of two distance functions: the first from the cells in the initial frame and the second from the segmented boundaries. We track the cells backwards in time. The first distance function attracts the subsequently constructed cell trajectories to the cells in the initial frame, and the second one forces them to be close to the centerlines of the segmented tubular structures. This makes our tracking algorithm robust against noise and missing spatio-temporal boundaries. This approach can be generalized to a 3D + time video analysis, where spatio-temporal tubes are 4D objects.
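The backward-tracking idea in this abstract can be sketched in miniature. This is an illustrative reconstruction, not the authors' implementation: cells are reduced to candidate centroids per frame, and the weight `alpha` balancing the two distance terms is an assumption.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def track_backward(start, initial_cell, frames, alpha=0.3):
    """Walk from the last frame back to the first. At each frame, pick the
    candidate centroid that stays close to the current position (a stand-in
    for the segmented-tube centerline term) while being attracted toward the
    cell's position in the initial frame (the first distance function)."""
    pos = start
    trajectory = [pos]
    for candidates in reversed(frames[:-1]):
        pos = min(candidates,
                  key=lambda c: dist(c, pos) + alpha * dist(c, initial_cell))
        trajectory.append(pos)
    trajectory.reverse()
    return trajectory
```

With three frames of candidates, the trajectory snaps onto the candidates closest to both the previous position and the initial cell.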
ERIC Educational Resources Information Center
Wang, Judy H.; Liang, Wenchi; Schwartz, Marc D.; Lee, Marion M.; Kreling, Barbara; Mandelblatt, Jeanne S.
2008-01-01
This study developed and evaluated a culturally tailored video guided by the health belief model to improve Chinese women's low rate of mammography use. Focus-group discussions and an advisory board meeting guided the video development. A 17-min video, including a soap opera and physician-recommendation segment, was made in Chinese languages. A…
Automated Music Video Generation Using Multi-level Feature-based Segmentation
NASA Astrophysics Data System (ADS)
Yoon, Jong-Chul; Lee, In-Kwon; Byun, Siwoo
The expansion of the home video market has created a requirement for video editing tools to allow ordinary people to assemble videos from short clips. However, professional skills are still necessary to create a music video, which requires a stream to be synchronized with pre-composed music. Because the music and the video are pre-generated in separate environments, even a professional producer usually requires a number of trials to obtain a satisfactory synchronization, which is something that most amateurs are unable to achieve.
Bilayer segmentation of webcam videos using tree-based classifiers.
Yin, Pei; Criminisi, Antonio; Winn, John; Essa, Irfan
2011-01-01
This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as "motons," inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems.
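The paper fuses motion, color, and other cues with random forests and a CRF solved by min-cut; as a toy stand-in for that pipeline, a per-pixel temporal-difference feature (a crude proxy for the "moton" representation) can be fused with a color prior and thresholded. The weights and threshold below are invented for illustration.

```python
def motion_feature(prev, curr):
    """Per-pixel absolute temporal difference between two grayscale frames
    (lists of rows of 0-255 values)."""
    return [[abs(c - p) for p, c in zip(pr, cr)] for pr, cr in zip(prev, curr)]

def fuse_and_segment(motion, color_prior, w_motion=0.7, w_color=0.3, thresh=0.5):
    """Fuse a motion cue and a color-based foreground prior (0..1) into a
    per-pixel score and threshold it — a toy substitute for the CRF +
    binary min-cut step described in the abstract."""
    h, w = len(motion), len(motion[0])
    return [[1 if w_motion * min(motion[y][x] / 255.0, 1.0)
                  + w_color * color_prior[y][x] > thresh else 0
             for x in range(w)] for y in range(h)]
```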
ERIC Educational Resources Information Center
Ludlow, Barbara L.; Foshay, John B.; Duff, Michael C.
Video presentations of teaching episodes in home, school, and community settings and audio recordings of parents' and professionals' views can be important adjuncts to personnel preparation in special education. This paper describes instructional applications of digital media and outlines steps in producing audio and video segments. Digital audio…
Self Occlusion and Disocclusion in Causal Video Object Segmentation
2015-12-18
computation is parameter-free in contrast to [4, 32, 10]. Taylor et al. [30] perform layer segmentation in longer video sequences leveraging occlusion cues… shows that our method recovers from errors in the first frame (short of failed detection). [Figure 7: sample visual results on FBMS-59, comparing Lee et al. [19], Grundmann et al. [14], Ochs et al. [23], Taylor et al. [30], and the proposed method against image and ground truth.]
Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding
NASA Astrophysics Data System (ADS)
Oh, Kwan-Jung; Oh, Byung Tae
2015-04-01
We present an intracoding method that is applicable to depth map coding in multiview-plus-depth systems. Our approach combines skip prediction and plane segmentation-based prediction. The proposed depth intraskip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane segmentation-based intraprediction divides the current block into two regions and applies a different prediction scheme to each segmented region. This method avoids incorrect estimations across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/Advanced Video Coding intraprediction and has the ability to improve the subjective rendering quality.
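The two-region idea can be illustrated with a toy predictor that splits a depth block at the strongest discontinuity in the reconstructed top reference row and predicts each region from its own side's reference samples. This is an assumption-laden sketch, not the paper's actual plane segmentation scheme.

```python
def biregion_intra_predict(top_refs):
    """Find the largest jump in the top reference row, split the n x n block
    into two regions at that column, and fill each region with the mean of
    its side's reference samples (depth maps are piecewise smooth, so a
    per-region constant is a reasonable toy predictor)."""
    n = len(top_refs)
    jumps = [abs(top_refs[i + 1] - top_refs[i]) for i in range(n - 1)]
    edge = jumps.index(max(jumps)) + 1            # boundary column
    left_val = sum(top_refs[:edge]) / edge
    right_val = sum(top_refs[edge:]) / (n - edge)
    row = [left_val if x < edge else right_val for x in range(n)]
    return [row[:] for _ in range(n)]             # n x n predicted block
```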
ERIC Educational Resources Information Center
King, Keith; Laake, Rebecca A.; Bernard, Amy
2006-01-01
This study examined the sexual messages depicted in music videos aired on MTV, MTV2, BET, and GAC from August 2, 2004 to August 15, 2004. One-hour segments of music videos were taped daily for two weeks. Depictions of sexual attire and sexual behavior were analyzed via a four-page coding sheet (interrater-reliability = 0.93). Results indicated…
Bayesian Modeling of Temporal Coherence in Videos for Entity Discovery and Summarization.
Mitra, Adway; Biswas, Soma; Bhattacharyya, Chiranjib
2017-03-01
A video is understood by users in terms of entities present in it. Entity Discovery is the task of building an appearance model for each entity (e.g., a person) and finding all its occurrences in the video. We represent a video as a sequence of tracklets, each spanning 10-20 frames and associated with one entity. We pose Entity Discovery as tracklet clustering, and approach it by leveraging Temporal Coherence (TC): the property that temporally neighboring tracklets are likely to be associated with the same entity. Our major contributions are the first Bayesian nonparametric models for TC at the tracklet level. We extend the Chinese Restaurant Process (CRP) to TC-CRP, and further to the Temporally Coherent Chinese Restaurant Franchise (TC-CRF), to jointly model entities and temporal segments using mixture components and sparse distributions. For discovering persons in TV serial videos without meta-data like scripts, these methods show considerable improvement over state-of-the-art approaches to tracklet clustering in terms of clustering accuracy, cluster purity, and entity coverage. The proposed methods can perform online tracklet clustering on streaming videos, unlike existing approaches, and can automatically reject false tracklets. Finally, we discuss entity-driven video summarization, where temporal segments of the video are selected based on the discovered entities to create a semantically meaningful summary.
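The temporally coherent CRP idea can be sketched with a greedy MAP assignment: each tracklet joins an existing cluster whose score (size plus a coherence bonus when the previous tracklet is in that cluster, weighted by feature similarity) beats the new-cluster concentration `alpha`. The scoring form, `alpha`, and `tc_bonus` here are assumptions; the paper's models are fully Bayesian, not greedy.

```python
def tc_crp_assign(tracklets, similarity, alpha=1.0, tc_bonus=2.0):
    """Greedy sketch of a temporally coherent Chinese Restaurant Process:
    process tracklets in temporal order; a tracklet joins the cluster
    maximizing (size + temporal-coherence bonus) * similarity, or opens
    a new cluster when alpha is at least as large as every such score."""
    clusters = []   # list of lists of tracklet indices
    labels = []
    for i, feat in enumerate(tracklets):
        scores = []
        for k, members in enumerate(clusters):
            bonus = tc_bonus if labels and labels[-1] == k else 0.0
            sim = max(similarity(feat, tracklets[j]) for j in members)
            scores.append((len(members) + bonus) * sim)
        if not scores or alpha >= max(scores):
            labels.append(len(clusters))
            clusters.append([i])
        else:
            k = scores.index(max(scores))
            labels.append(k)
            clusters[k].append(i)
    return labels
```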
VIDEO MODELING BY EXPERTS WITH VIDEO FEEDBACK TO ENHANCE GYMNASTICS SKILLS
Boyer, Eva; Miltenberger, Raymond G; Batsche, Catherine; Fogel, Victoria
2009-01-01
The effects of combining video modeling by experts with video feedback were analyzed with 4 female competitive gymnasts (7 to 10 years old) in a multiple baseline design across behaviors. During the intervention, after the gymnast performed a specific gymnastics skill, she viewed a video segment showing an expert gymnast performing the same skill and then viewed a video replay of her own performance of the skill. The results showed that all gymnasts demonstrated improved performance across three gymnastics skills following exposure to the intervention. PMID:20514194
Extraction of Blebs in Human Embryonic Stem Cell Videos.
Guan, Benjamin X; Bhanu, Bir; Talbot, Prue; Weng, Nikki Jo-Hao
2016-01-01
Blebbing is an important biological indicator in determining the health of human embryonic stem cells (hESC). Especially, areas of a bleb sequence in a video are often used to distinguish two cell blebbing behaviors in hESC: dynamic and apoptotic blebbings. This paper analyzes various segmentation methods for bleb extraction in hESC videos and introduces a bio-inspired score function to improve the performance in bleb extraction. Full bleb formation consists of bleb expansion and retraction. Blebs change their size and image properties dynamically in both processes and between frames. Therefore, adaptive parameters are needed for each segmentation method. A score function derived from the change of bleb area and orientation between consecutive frames is proposed which provides adaptive parameters for bleb extraction in videos. In comparison to manual analysis, the proposed method provides an automated fast and accurate approach for bleb sequence extraction.
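A score rewarding smooth frame-to-frame change in bleb area and orientation, of the kind the abstract alludes to, might look like the following. The functional form and weights are invented for illustration; the paper's bio-inspired score function may differ.

```python
def bleb_score(area_prev, area_curr, angle_prev, angle_curr,
               w_area=0.5, w_angle=0.5):
    """Toy score in [0, 1]: 1.0 when a candidate bleb's area and
    orientation are unchanged between consecutive frames, decreasing as
    either changes (orientation compared modulo 180 degrees)."""
    area_term = 1.0 - abs(area_curr - area_prev) / max(area_curr, area_prev)
    d = abs(angle_curr - angle_prev) % 180.0
    angle_term = 1.0 - min(d, 180.0 - d) / 90.0
    return w_area * area_term + w_angle * angle_term
```

Such a score could rank candidate segmentations per frame, providing the adaptive parameter selection the abstract describes.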
2006-01-01
segments video game interaction into domain-independent components which together form a framework that can be used to characterize real-time interactive… multimedia applications in general and HRI in particular. We provide examples of using the components in both the video game and the Unmanned Aerial…
ERIC Educational Resources Information Center
Ayala, Sandra M.
2010-01-01
Ten first grade students, participating in a Tier II response to intervention (RTI) reading program received an intervention of video self modeling to improve decoding skills and sight word recognition. The students were video recorded blending and segmenting decodable words, and reading sight words taken directly from their curriculum…
Brandes, Susanne; Mokhtari, Zeinab; Essig, Fabian; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo
2015-02-01
Time-lapse microscopy is an important technique to study the dynamics of various biological processes. The labor-intensive manual analysis of microscopy videos is increasingly replaced by automated segmentation and tracking methods. These methods are often limited to certain cell morphologies and/or cell stainings. In this paper, we present an automated segmentation and tracking framework that does not have these restrictions. In particular, our framework handles highly variable cell shapes and does not rely on any cell stainings. Our segmentation approach is based on a combination of spatial and temporal image variations to detect moving cells in microscopy videos. This method yields a sensitivity of 99% and a precision of 95% in object detection. The tracking of cells consists of different steps, starting from single-cell tracking based on a nearest-neighbor-approach, detection of cell-cell interactions and splitting of cell clusters, and finally combining tracklets using methods from graph theory. The segmentation and tracking framework was applied to synthetic as well as experimental datasets with varying cell densities implying different numbers of cell-cell interactions. We established a validation framework to measure the performance of our tracking technique. The cell tracking accuracy was found to be >99% for all datasets indicating a high accuracy for connecting the detected cells between different time points. Copyright © 2014 Elsevier B.V. All rights reserved.
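The first tracking stage described above (single-cell tracking with a nearest-neighbor approach) can be sketched as one per-frame association step. This is a simplified illustration; the paper's later stages (cell-cluster splitting, graph-based tracklet linking) are omitted, and `max_dist` is an assumed gating radius.

```python
import math

def nearest_neighbor_step(tracks, detections, max_dist=20.0):
    """One frame of greedy nearest-neighbor tracking: extend each track
    with the closest unclaimed detection within max_dist, then open new
    tracks for leftover detections."""
    free = set(range(len(detections)))
    for track in tracks:
        last = track[-1]
        best, best_d = None, max_dist
        for j in free:
            d = math.dist(last, detections[j])
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            track.append(detections[best])
            free.discard(best)
    for j in sorted(free):
        tracks.append([detections[j]])
    return tracks
```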
Video rate color region segmentation for mobile robotic applications
NASA Astrophysics Data System (ADS)
de Cabrol, Aymeric; Bonnin, Patrick J.; Hugel, Vincent; Blazevic, Pierre; Chetto, Maryline
2005-08-01
Color regions can be an interesting image feature to extract for visual tasks in robotics, such as navigation and obstacle avoidance. But whereas numerous methods are used for vision systems embedded on robots, only a few use this segmentation, mainly because of the processing duration. In this paper, we propose a new real-time (i.e., video-rate) color region segmentation, followed by a robust color classification and a merging of regions, dedicated to various applications such as the RoboCup four-legged league or an industrial conveyor wheeled robot. We compare the performance of this algorithm with other methods in terms of result quality and processing time. For better-quality results, the obtained speed-up is between 2 and 4; for equal quality, it is up to 10. We also present the outlines of the Dynamic Vision System of the CLEOPATRE project, for which this segmentation has been developed, and the Clear Box methodology, which allowed us to create the new color region segmentation from the evaluation and knowledge of other well-known segmentations.
Audio-guided audiovisual data segmentation, indexing, and retrieval
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1998-12-01
While current approaches for video segmentation and indexing are mostly focused on visual information, audio signals may actually play a primary role in video content parsing. In this paper, we present an approach for automatic segmentation, indexing, and retrieval of audiovisual data, based on audio content analysis. The accompanying audio signal of audiovisual data is first segmented and classified into basic types, i.e., speech, music, environmental sound, and silence. This coarse-level segmentation and indexing step is based upon morphological and statistical analysis of several short-term features of the audio signals. Then, environmental sounds are classified into finer classes, such as applause, explosions, bird sounds, etc. This fine-level classification and indexing step is based upon time-frequency analysis of audio signals and the use of the hidden Markov model as the classifier. On top of this archiving scheme, an audiovisual data retrieval system is proposed. Experimental results show that the proposed approach has an accuracy rate higher than 90 percent for the coarse-level classification, and higher than 85 percent for the fine-level classification. Examples of audiovisual data segmentation and retrieval are also provided.
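The coarse-level step above can be caricatured with two classic short-term features, energy and zero-crossing rate, and simple thresholds. The paper uses richer morphological and statistical analysis; the features below are standard but the thresholds are illustrative assumptions, and the environmental-sound class is omitted.

```python
def short_term_features(frame):
    """Short-term energy and zero-crossing rate of one audio frame
    (a list of float samples)."""
    energy = sum(s * s for s in frame) / len(frame)
    zcr = sum(1 for a, b in zip(frame, frame[1:])
              if (a < 0) != (b < 0)) / (len(frame) - 1)
    return energy, zcr

def classify_frame(frame, e_silence=1e-4, zcr_speech=0.15):
    """Threshold-based coarse labeling: near-zero energy -> silence;
    high zero-crossing rate -> speech-like; otherwise music-like."""
    energy, zcr = short_term_features(frame)
    if energy < e_silence:
        return "silence"
    return "speech" if zcr >= zcr_speech else "music"
```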
Segmented cold cathode display panel
NASA Technical Reports Server (NTRS)
Payne, Leslie (Inventor)
1998-01-01
The present invention is a video display device that utilizes the novel concept of generating an electronically controlled pattern of electron emission at the output of a segmented photocathode. This pattern of electron emission is amplified via a channel plate. The result is that an intense electronic image can be accelerated toward a phosphor, thus creating a bright video image. This novel arrangement allows one to provide a full-color flat video display capable of implementation in large formats. In an alternate arrangement, the present invention is provided without the channel plate and a porous conducting surface is provided instead. In this alternate arrangement, the brightness of the image is reduced but the cost of the overall device is significantly lowered because fabrication complexity is significantly decreased.
Traffic Video Image Segmentation Model Based on Bayesian and Spatio-Temporal Markov Random Field
NASA Astrophysics Data System (ADS)
Zhou, Jun; Bao, Xu; Li, Dawei; Yin, Yongwen
2017-10-01
Traffic video is a kind of dynamic image whose background and foreground change over time, which results in occlusion. In this case, general methods have difficulty obtaining accurate image segmentation. A segmentation algorithm based on Bayesian inference and a spatio-temporal Markov random field (ST-MRF) is put forward. It builds energy-function models of the observation field and the label field for motion sequence images with the Markov property; then, following Bayes' rule, it uses the interaction between the label field and the observation field (i.e., the relationship between the label field's prior probability and the observation field's likelihood) to obtain the maximum a posteriori estimate of the label field, and applies the ICM algorithm to extract the moving objects, completing the segmentation. Finally, the ST-MRF method and the Bayesian method combined with ST-MRF were compared. Experimental results show that the segmentation time of the Bayesian method combined with ST-MRF is shorter than that of ST-MRF alone and the computational workload is small; in heavy-traffic dynamic scenes in particular, the method also achieves a better segmentation effect.
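The ICM step referenced in this abstract can be sketched on a 2-D grid: each pixel repeatedly takes the label minimizing a data term (likelihood of the observation under that label) plus a Potts smoothness term over its 4-neighbors (the MRF prior). The Gaussian-like squared-error data term and the per-label means below are assumptions for illustration.

```python
def icm_segment(obs, labels, beta=1.0, n_iter=5):
    """Iterated conditional modes for binary MAP-MRF segmentation.
    obs: 2-D grid of observed intensities in [0, 1];
    labels: initial 2-D grid of labels (0=background, 1=foreground),
    updated in place and returned."""
    means = {0: 0.0, 1: 1.0}   # assumed per-label intensity means
    h, w = len(obs), len(obs[0])
    for _ in range(n_iter):
        for y in range(h):
            for x in range(w):
                best, best_e = labels[y][x], float("inf")
                for lab in (0, 1):
                    data = (obs[y][x] - means[lab]) ** 2
                    smooth = sum(
                        1 for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= y + dy < h and 0 <= x + dx < w
                        and labels[y + dy][x + dx] != lab)
                    e = data + beta * smooth
                    if e < best_e:
                        best, best_e = lab, e
                labels[y][x] = best
    return labels
```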
Hirano, Yutaka; Ikuta, Shin-Ichiro; Nakano, Manabu; Akiyama, Seita; Nakamura, Hajime; Nasu, Masataka; Saito, Futoshi; Nakagawa, Junichi; Matsuzaki, Masashi; Miyazaki, Shunichi
2007-02-01
Assessment of deterioration of regional wall motion by echocardiography is not only subjective but also features difficulties with interobserver agreement. Progress in digital communication technology has made it possible to send video images from a distant location via the Internet. The possibility of evaluating left ventricular wall motion using video images sent via the Internet to distant institutions was evaluated. Twenty-two subjects were randomly selected. Four sets of video images (parasternal long-axis view, parasternal short-axis view, apical four-chamber view, and apical two-chamber view) were taken for one cardiac cycle. The images were sent via the Internet to two institutions (observer C in facility A and observers D and E in facility B) for evaluation. Great care was taken to prevent disclosure of patient information to these observers. Parasternal long-axis images were divided into four segments, and the parasternal short-axis view, apical four-chamber view, and apical two-chamber view were divided into six segments. One of the following assessments, normokinesis, hypokinesis, akinesis, or dyskinesis, was assigned to each segment. The interobserver rates of agreement in judgments between pairs of observers, and the intraobserver agreement rate (for observer D), were calculated. The rate of interobserver agreement was 85.7% (394/460 segments; Kappa = 0.65) between observers C and D, 76.7% (353/460 segments; Kappa = 0.39) between observers D and E, and 76.3% (351/460 segments; Kappa = 0.36) between observers C and E, and intraobserver agreement was 94.3% (434/460; Kappa = 0.86). Disagreements between observers C and D were normokinesis vs. hypokinesis in 62.1%, hypokinesis vs. akinesis in 33.3%, akinesis vs. dyskinesis in 3.0%, and normokinesis vs. akinesis in 1.5% of segments. Wall motion can be evaluated at remote institutions via the Internet.
Activity Detection and Retrieval for Image and Video Data with Limited Training
2015-06-10
applications. Here we propose two techniques for image segmentation. The first involves an automata-based multiple threshold selection scheme, where a mixture of Gaussians is fitted to the… For our second approach to segmentation, we employ a region-based segmentation technique that is capable of handling intensity inhomogeneity…
ERIC Educational Resources Information Center
Eick, Charles Joseph; King, David T., Jr.
2012-01-01
The instructor of an integrated science course for nonscience majors embedded content-related video segments from YouTube and other similar internet sources into lecture. Through this study, the instructor wanted to know students' perceptions of how video use engaged them and increased their interest and understanding of science. Written survey…
Testing with feedback improves recall of information in informed consent: A proof of concept study.
Roberts, Katherine J; Revenson, Tracey A; Urken, Mark L; Fleszar, Sara; Cipollina, Rebecca; Rowe, Meghan E; Reis, Laura L Dos; Lepore, Stephen J
2016-08-01
This study investigates whether applying educational testing approaches to an informed consent video for a medical procedure can lead to greater recall of the information presented. Undergraduate students (n=120) were randomly assigned to watch a 20-min video on informed consent under one of three conditions: 1) tested using multiple-choice knowledge questions and provided with feedback on their answers after each 5-min segment; 2) tested with multiple choice knowledge questions but not provided feedback after each segment; or 3) watched the video without knowledge testing. Participants who were tested and provided feedback had significantly greater information recall compared to those who were tested but not provided feedback and to those not tested. The effect of condition was stronger for moderately difficult questions versus easy questions. Inserting knowledge tests and providing feedback about the responses at timed intervals in videos can be effective in improving recall of information. Providing informed consent information through a video not only standardizes the material, but using testing with feedback inserted within the video has the potential to increase recall and retention of this material. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Validity and reliability of naturalistic driving scene categorization Judgments from crowdsourcing.
Cabrall, Christopher D D; Lu, Zhenji; Kyriakidis, Miltos; Manca, Laura; Dijksterhuis, Chris; Happee, Riender; de Winter, Joost
2018-05-01
A common challenge with processing naturalistic driving data is that humans may need to categorize great volumes of recorded visual information. By means of the online platform CrowdFlower, we investigated the potential of crowdsourcing to categorize driving scene features (i.e., presence of other road users, straight road segments, etc.) at greater scale than a single person or a small team of researchers would be capable of. In total, 200 workers from 46 different countries participated over 1.5 days. Validity and reliability were examined, both with and without embedding researcher-generated control questions via the CrowdFlower mechanism known as Gold Test Questions (GTQs). By employing GTQs, we found significantly more valid (accurate) and reliable (consistent) identification of driving scene items from external workers. Specifically, at a small-scale CrowdFlower Job of 48 three-second video segments, an accuracy (i.e., relative to the ratings of a confederate researcher) of 91% on items was found with GTQs compared to 78% without. A difference in bias was found: without GTQs, external workers returned more false positives than with GTQs. At a larger-scale CrowdFlower Job making exclusive use of GTQs, 12,862 three-second video segments were released for annotation. As it was infeasible (and self-defeating) to check the accuracy of each at this scale, a random subset of 1012 categorizations was validated and returned similar levels of accuracy (95%). In the small-scale Job, where full video segments were repeated in triplicate, the percentage of unanimous agreement on the items was found significantly more consistent when using GTQs (90%) than without them (65%). Additionally, in the larger-scale Job (where a single second of a video segment was overlapped by ratings of three sequentially neighboring segments), a mean unanimity of 94% was obtained with validated-as-correct ratings and 91% with non-validated ratings.
Because the video segments overlapped in full for the small scale Job, and in part for the larger scale Job, it should be noted that such reliability reported here may not be directly comparable. Nonetheless, such results are both indicative of high levels of obtained rating reliability. Overall, our results provide compelling evidence for CrowdFlower, via use of GTQs, being able to yield more accurate and consistent crowdsourced categorizations of naturalistic driving scene contents than when used without such a control mechanism. Such annotations in such short periods of time present a potentially powerful resource in driving research and driving automation development. Copyright © 2017 Elsevier Ltd. All rights reserved.
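The two quality measures discussed above (a worker's agreement with researcher-keyed gold questions, and unanimity among triplicate ratings) reduce to simple proportions; a minimal sketch, with hypothetical data structures:

```python
def gtq_pass_rate(worker_answers, gold):
    """Fraction of embedded Gold Test Questions a worker answered the same
    way as the researcher key. Both arguments map question id -> answer."""
    hits = sum(1 for q, a in worker_answers.items() if gold.get(q) == a)
    return hits / len(gold)

def unanimity(ratings):
    """Share of items on which all raters agree; ratings is a list of
    per-item tuples, one rating per rater."""
    return sum(1 for item in ratings if len(set(item)) == 1) / len(ratings)
```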
Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S
2003-01-01
In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost-effective weight updating algorithm, providing a minimum degradation of the previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).
A motion compensation technique using sliced blocks and its application to hybrid video coding
NASA Astrophysics Data System (ADS)
Kondo, Satoshi; Sasai, Hisao
2005-07-01
This paper proposes a new motion compensation method using "sliced blocks" in DCT-based hybrid video coding. In H.264/MPEG-4 Advanced Video Coding, a brand-new international video coding standard, motion compensation can be performed by splitting macroblocks into multiple square or rectangular regions. In the proposed method, on the other hand, macroblocks or sub-macroblocks are divided into two regions (sliced blocks) by an arbitrary line segment. The result is that the shapes of the segmented regions are not limited to squares or rectangles, allowing the shapes of the segmented regions to better match the boundaries between moving objects. Thus, the proposed method can improve the performance of the motion compensation. In addition, adaptive prediction of the shape according to the region shape of the surrounding macroblocks can reduce the overhead of describing shape information in the bitstream. The proposed method also has the advantage that conventional coding techniques such as mode decision using rate-distortion optimization can be utilized, since coding processes such as frequency transform and quantization are performed on a macroblock basis, similar to conventional coding methods. The proposed method is implemented in an H.264-based P-picture codec, and an improvement in bit rate of 5% is confirmed in comparison with H.264.
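The core geometric step, partitioning a block into two regions by an arbitrary line segment, can be sketched with a side-of-line test on pixel centers. How the codec signals the line is not modeled here; this only shows the mask construction.

```python
def sliced_block_mask(n, p0, p1):
    """Partition an n x n block into two 'sliced blocks' by the line through
    points p0 and p1: each pixel is labeled 1 or 0 by the sign of the cross
    product, i.e. which side of the line its center (x+0.5, y+0.5) falls on.
    Motion compensation would then use a separate motion vector per label."""
    (x0, y0), (x1, y1) = p0, p1
    mask = []
    for y in range(n):
        row = []
        for x in range(n):
            cx, cy = x + 0.5, y + 0.5
            side = (x1 - x0) * (cy - y0) - (y1 - y0) * (cx - x0)
            row.append(1 if side > 0 else 0)
        mask.append(row)
    return mask
```

A vertical split through the middle of a 2x2 block, for instance, puts the left column in region 1 and the right column in region 0.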
Highlight summarization in golf videos using audio signals
NASA Astrophysics Data System (ADS)
Kim, Hyoung-Gook; Kim, Jin Young
2008-01-01
In this paper, we present an automatic summarization of highlights in golf videos based on audio information alone, without video information. The proposed highlight summarization system is carried out based on semantic audio segmentation and detection of action units from audio signals. Studio speech, field speech, music, and applause are segmented by means of sound classification. Swings are detected by impulse onset detection. Sounds of a swing followed by applause form a complete action unit, while studio speech and music parts are used to anchor the program structure. With the advantage of highly precise detection of applause, highlights are extracted effectively. Our experiments achieve high classification precision on 18 golf games, showing that the proposed system is effective and computationally efficient enough to apply the technology to embedded consumer electronic devices.
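Impulse onset detection of the kind used for swing sounds can be caricatured as flagging frames whose short-term energy jumps sharply relative to the previous frame. The ratio and floor below are invented thresholds, not the paper's.

```python
def detect_impulse_onsets(energies, ratio=4.0, floor=1e-3):
    """Return indices of frames whose short-term energy exceeds `floor`
    and jumps by more than `ratio` over the previous frame — a toy
    stand-in for an impulse-onset swing detector."""
    return [i for i in range(1, len(energies))
            if energies[i] > floor
            and energies[i] > ratio * max(energies[i - 1], floor)]
```

Pairing each detected onset with a subsequent applause segment would then yield the "action units" the abstract describes.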
Wang, Lizhu; Brenden, Travis; Cao, Yong; Seelbach, Paul
2012-11-01
Identifying appropriate spatial scales is critically important for assessing health, attributing data, and guiding management actions for rivers. We describe a process for identifying a three-level hierarchy of spatial scales for Michigan rivers. Additionally, we conduct a variance decomposition of fish occurrence, abundance, and assemblage metric data to evaluate how much observed variability can be explained by the three spatial scales as a gauge of their utility for water resources and fisheries management. The process involved the development of geographic information system programs, statistical models, modification by experienced biologists, and simplification to meet the needs of policy makers. Altogether, 28,889 reaches, 6,198 multiple-reach segments, and 11 segment classes were identified from Michigan river networks. The segment scale explained the greatest amount of variation in fish abundance and occurrence, followed by segment class and reach. The segment scale also explained the greatest amount of variation in 13 of the 19 analyzed fish assemblage metrics, with segment class explaining the greatest amount of variation in the other six fish metrics. Segments appear to be a useful spatial scale/unit for measuring and synthesizing information for managing rivers and streams. Additionally, segment classes provide a useful typology for summarizing the numerous segments into a few categories. Reaches are the foundation for the identification of segments and segment classes and thus are integral elements of the overall spatial scale hierarchy, despite reaches not explaining significant variation in fish assemblage data.
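A variance decomposition across a nested class → segment → reach hierarchy can be sketched as a simple sum-of-squares partition. This is a bare nested-ANOVA illustration, not the authors' statistical models; the input layout is a hypothetical simplification.

```python
def variance_components(data):
    """data maps (segment_class, segment_id) -> list of reach-level values.
    Returns the share of total sum of squares attributable to the class,
    segment-within-class, and reach-within-segment levels."""
    obs = [(c, s, v) for (c, s), vals in data.items() for v in vals]
    grand = sum(v for _, _, v in obs) / len(obs)
    def mean(vals):
        return sum(vals) / len(vals)
    cmeans = {c: mean([v for cc, _, v in obs if cc == c])
              for c in {c for c, _, _ in obs}}
    smeans = {(c, s): mean(vals) for (c, s), vals in data.items()}
    ss_class = sum((cmeans[c] - grand) ** 2 for c, _, _ in obs)
    ss_seg = sum((smeans[(c, s)] - cmeans[c]) ** 2 for c, s, _ in obs)
    ss_reach = sum((v - smeans[(c, s)]) ** 2 for c, s, v in obs)
    total = (ss_class + ss_seg + ss_reach) or 1.0
    return {"class": ss_class / total,
            "segment": ss_seg / total,
            "reach": ss_reach / total}
```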
Depth Extraction from Videos Using Geometric Context and Occlusion Boundaries (Open Access)
2014-09-05
Raza et al., "Depth Extraction from Videos Using Geometric Context and Occlusion Boundaries," S. Hussain Raza (arXiv:1510.07317v1 [cs.CV], 25 Oct 2015). …temporal segmentation using the method proposed by Grundmann et al. [4]. …estimation and triangulation to estimate depth maps [17, 27] (see Figure 1, showing frames with ground truth and estimated depth).
Schittek Janda, M; Tani Botticelli, A; Mattheos, N; Nebel, D; Wagner, A; Nattestad, A; Attström, R
2005-05-01
Video-based instructions for clinical procedures have been used frequently during the preceding decades. The aim was to investigate, in a randomised controlled trial, the learning effectiveness of fragmented videos vs. a complete sequential video, and to analyse users' attitudes towards video as a learning aid. An instructional video on surgical hand wash was produced. The video was available in two different forms on two separate web pages: one as a sequential video and one fragmented into eight short clips. Twenty-eight dental students in the second semester were randomised into an experimental (n = 15) and a control group (n = 13). The experimental group used the fragmented form of the video and the control group watched the complete one. The use of the videos was logged, and the students were videotaped whilst undertaking a test hand wash. The videos were analysed systematically and blindly by two independent clinicians. The students also performed a written test concerning learning outcome from the videos and answered an attitude questionnaire. The students in the experimental group watched the video significantly longer than the control group. There were no significant differences between the groups with regard to the ratings and scores when performing the hand wash. The experimental group had significantly better results in the written test compared with those of the control group. There was no significant difference between the groups with regard to attitudes towards the use of video for learning, as measured by the Visual Analogue Scales. Most students in both groups expressed satisfaction with the use of video for learning. The students demonstrated positive attitudes and acceptable learning outcome from viewing CAL videos as a part of their pre-clinical training.
Videos that are part of computer-based learning settings would ideally be presented to the students both as a segmented and as a whole video to give the students the option to choose the form of video which suits the individual student's learning style.
NASA Astrophysics Data System (ADS)
Hidalgo-Aguirre, Maribel; Gitelman, Julian; Lesk, Mark Richard; Costantino, Santiago
2015-11-01
Optical coherence tomography (OCT) imaging has become a standard diagnostic tool in ophthalmology, providing essential information associated with various eye diseases. In order to investigate the dynamics of the ocular fundus, we present a simple and accurate automated algorithm to segment the inner limiting membrane in video-rate optic nerve head spectral domain (SD) OCT images. The method is based on morphological operations including a two-step contrast enhancement technique, proving to be very robust when dealing with low signal-to-noise ratio images and pathological eyes. An analysis algorithm was also developed to measure neuroretinal tissue deformation from the segmented retinal profiles. The performance of the algorithm is demonstrated, and deformation results are presented for healthy and glaucomatous eyes.
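The column-wise surface extraction behind such ILM segmentation can be sketched in simplified form. This is not the paper's morphological pipeline; the threshold, smoothing width, and synthetic B-scan below are illustrative assumptions:

```python
import numpy as np

def segment_ilm(bscan, threshold=0.5, smooth=3):
    """For each A-scan column, return the first row whose smoothed
    intensity exceeds `threshold` -- a crude stand-in for the
    inner limiting membrane surface (-1 if no pixel qualifies)."""
    kernel = np.ones(smooth) / smooth
    surface = np.empty(bscan.shape[1], dtype=int)
    for col in range(bscan.shape[1]):
        profile = np.convolve(bscan[:, col], kernel, mode="same")
        hits = np.flatnonzero(profile > threshold)
        surface[col] = hits[0] if hits.size else -1
    return surface

# Synthetic B-scan: dark vitreous above a bright retina starting at row 20.
bscan = np.zeros((64, 32))
bscan[20:, :] = 1.0
print(segment_ilm(bscan)[:5])
```

A real pipeline would add the two-step contrast enhancement and morphological cleanup the abstract describes before this thresholding step.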
Extraction of composite visual objects from audiovisual materials
NASA Astrophysics Data System (ADS)
Durand, Gwenael; Thienot, Cedric; Faudemay, Pascal
1999-08-01
An effective analysis of Visual Objects appearing in still images and video frames is required in order to offer fine grain access to multimedia and audiovisual contents. In previous papers, we showed how our method for segmenting still images into visual objects could improve content-based image retrieval and video analysis methods. Visual Objects are used in particular for extracting semantic knowledge about the contents. However, low-level segmentation methods for still images are not likely to extract a complex object as a whole but instead as a set of several sub-objects. For example, a person would be segmented into three visual objects: a face, hair, and a body. In this paper, we introduce the concept of Composite Visual Object. Such an object is hierarchically composed of sub-objects called Component Objects.
Use of videos for Distribution Construction and Maintenance (DC&M) training
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, G.M.
This paper presents the results of a survey taken among members of the American Gas Association's (AGA) Distribution Construction and Maintenance (DC&M) committee to gauge the extent, sources, mode of use, and degree of satisfaction with videos as a training aid in distribution construction and maintenance skills. Also cites AGA Engineering Technical Note DCM-88-3-1 as a catalog of the videos listed by respondents to the survey. Comments on the various sources of training videos and the characteristics of videos from each. The conference presentation included a showing of a sampling of video segments from these various sources. 1 fig.
Classification and Weakly Supervised Pain Localization using Multiple Segment Representation.
Sikka, Karan; Dhall, Abhinav; Bartlett, Marian Stewart
2014-10-01
Automatic pain recognition from videos is a vital clinical application and, owing to its spontaneous nature, poses interesting challenges to automatic facial expression recognition (AFER) research. Previous pain vs no-pain systems have highlighted two major challenges: (1) ground truth is provided for the sequence, but the presence or absence of the target expression for a given frame is unknown, and (2) the time point and the duration of the pain expression event(s) in each video are unknown. To address these issues we propose a novel framework (referred to as MS-MIL) where each sequence is represented as a bag containing multiple segments, and multiple instance learning (MIL) is employed to handle this weakly labeled data in the form of sequence level ground-truth. These segments are generated via multiple clustering of a sequence or running a multi-scale temporal scanning window, and are represented using a state-of-the-art Bag of Words (BoW) representation. This work extends the idea of detecting facial expressions through 'concept frames' to 'concept segments' and argues through extensive experiments that algorithms such as MIL are needed to reap the benefits of such representation. The key advantages of our approach are: (1) joint detection and localization of painful frames using only sequence-level ground-truth, (2) incorporation of temporal dynamics by representing the data not as individual frames but as segments, and (3) extraction of multiple segments, which is well suited to signals with uncertain temporal location and duration in the video. Extensive experiments on UNBC-McMaster Shoulder Pain dataset highlight the effectiveness of the approach by achieving competitive results on both tasks of pain classification and localization in videos. We also empirically evaluate the contributions of different components of MS-MIL. 
The paper also includes the visualization of discriminative facial patches, important for pain detection, as discovered by our algorithm and relates them to Action Units that have been associated with pain expression. We conclude the paper by demonstrating that MS-MIL yields a significant improvement on another spontaneous facial expression dataset, the FEEDTUM dataset.
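The max-pooled multiple-instance step at the core of MS-MIL can be illustrated with a toy sketch. The bag-of-words histograms and classifier weights below are hypothetical, and a simple linear scorer stands in for the learned MIL classifier:

```python
import numpy as np

def mil_bag_score(segments, w, b=0.0):
    """Score each segment (instance) with a linear classifier and
    max-pool: the bag (video) is labelled by its best segment.
    Returns (bag_score, index_of_highest-scoring_segment), which
    gives joint classification and temporal localization."""
    scores = segments @ w + b
    k = int(np.argmax(scores))
    return float(scores[k]), k

# Toy bag: three BoW segment histograms over a hypothetical 4-word vocabulary.
bag = np.array([[0.1, 0.2, 0.6, 0.1],
                [0.7, 0.1, 0.1, 0.1],   # the "painful" segment
                [0.2, 0.3, 0.3, 0.2]])
w = np.array([1.0, -0.5, -0.5, -0.5])   # hypothetical learned weights
score, where = mil_bag_score(bag, w)
```

The max over instances is what lets sequence-level labels localize the expression: only the most responsible segment drives the bag prediction.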
Small Moving Vehicle Detection in a Satellite Video of an Urban Area
Yang, Tao; Wang, Xiwen; Yao, Bowei; Li, Jing; Zhang, Yanning; He, Zhannan; Duan, Wencheng
2016-01-01
Vehicle surveillance of a wide area allows us to learn much about daily activities and traffic information. With the rapid development of remote sensing, satellite video has become an important data source for vehicle detection, as it provides a broader field of surveillance. Existing work generally focuses on aerial video with moderately sized objects and relies on feature extraction. However, the moving vehicles in satellite video imagery range from just a few pixels to dozens of pixels and exhibit low contrast with respect to the background, which makes it hard to obtain usable appearance or shape information. In this paper, we look into the problem of moving vehicle detection in satellite imagery. To the best of our knowledge, this is the first work to address moving vehicle detection in satellite videos. Our approach consists of two stages: first, through foreground motion segmentation and trajectory accumulation, a scene motion heat map is dynamically built. Following this, a novel saliency-based background model that intensifies moving objects is presented to segment the vehicles in the hot regions. Qualitative and quantitative experiments on a sequence from a recent Skybox satellite video dataset demonstrate that our approach achieves a high detection rate and a low false alarm rate simultaneously. PMID:27657091
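The first stage, accumulating foreground motion into a scene heat map, can be sketched as follows; the frame-differencing rule and threshold are simplifying assumptions, not the authors' exact motion segmentation:

```python
import numpy as np

def motion_heat_map(frames, diff_thresh=0.2):
    """Accumulate per-pixel foreground hits from frame differencing
    into a heat map; 'hot' pixels mark frequently traversed regions
    (e.g., roads), where vehicle segmentation is then focused."""
    heat = np.zeros(frames[0].shape)
    for prev, cur in zip(frames, frames[1:]):
        heat += (np.abs(cur - prev) > diff_thresh)
    return heat

# A tiny 'vehicle': a bright pixel sweeping left to right across 5 frames.
frames = []
for t in range(5):
    f = np.zeros((4, 8))
    f[2, t] = 1.0
    frames.append(f)
heat = motion_heat_map(frames)
```

The heat concentrates on row 2, the simulated road, so a downstream saliency model would only need to examine that hot region.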
ETHOWATCHER: validation of a tool for behavioral and video-tracking analysis in laboratory animals.
Crispim Junior, Carlos Fernando; Pederiva, Cesar Nonato; Bose, Ricardo Chessini; Garcia, Vitor Augusto; Lino-de-Oliveira, Cilene; Marino-Neto, José
2012-02-01
We present software (ETHOWATCHER®) developed to support ethography, object tracking and extraction of kinematic variables from digital video files of laboratory animals. The tracking module allows controlled segmentation of the target from the background, extracting image attributes used to calculate the distance traveled, orientation, length, area and a path graph of the experimental animal. The ethography module allows recording of catalog-based behaviors from the environment or from video files, continuously or frame-by-frame. The output reports the duration, frequency and latency of each behavior and the sequence of events in a time-segmented format set by the user. Validation tests were conducted on kinematic measurements and on the detection of known behavioral effects of drugs. This software is freely available at www.ethowatcher.ufsc.br. Copyright © 2011 Elsevier Ltd. All rights reserved.
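Kinematic measurements of the kind such a tracking module reports (distance traveled, orientation) reduce to simple geometry on the tracked centroids. A minimal sketch with hypothetical centroid coordinates:

```python
import numpy as np

def path_kinematics(centroids):
    """Distance traveled and per-step heading (degrees) from a
    sequence of tracked centroid positions."""
    pts = np.asarray(centroids, dtype=float)
    steps = np.diff(pts, axis=0)
    dist = float(np.linalg.norm(steps, axis=1).sum())
    heading = np.degrees(np.arctan2(steps[:, 1], steps[:, 0]))
    return dist, heading

# Hypothetical centroids (pixels): right 3, then up 4 -- total distance 7.
dist, heading = path_kinematics([(0, 0), (3, 0), (3, 4)])
```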
Object class segmentation of RGB-D video using recurrent convolutional neural networks.
Pavel, Mircea Serban; Schulz, Hannes; Behnke, Sven
2017-04-01
Object class segmentation is a computer vision task which requires labeling each pixel of an image with the class of the object it belongs to. Deep convolutional neural networks (DNN) are able to learn and take advantage of local spatial correlations required for this task. They are, however, restricted by their small, fixed-sized filters, which limits their ability to learn long-range dependencies. Recurrent Neural Networks (RNN), on the other hand, do not suffer from this restriction. Their iterative interpretation allows them to model long-range dependencies by propagating activity. This property is especially useful when labeling video sequences, where both spatial and temporal long-range dependencies occur. In this work, a novel RNN architecture for object class segmentation is presented. We investigate several ways to train such a network. We evaluate our models on the challenging NYU Depth v2 dataset for object class segmentation and obtain competitive results. Copyright © 2017 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Locatis, Craig; And Others
1990-01-01
Discusses methods for incorporating video into hypermedia programs. Knowledge representation in hypermedia is explained; video production techniques are discussed; comparisons between linear video, interactive video, and hypervideo are presented; appropriate conditions for hypervideo use are examined; and a need for new media research is…
From image captioning to video summary using deep recurrent networks and unsupervised segmentation
NASA Astrophysics Data System (ADS)
Morosanu, Bogdan-Andrei; Lemnaru, Camelia
2018-04-01
Automatic captioning systems based on recurrent neural networks have been tremendously successful at providing realistic natural language captions for complex and varied image data. We explore methods for adapting existing models trained on large image caption data sets to a similar problem, that of summarising videos using natural language descriptions and frame selection. These architectures create internal high level representations of the input image that can be used to define probability distributions and distance metrics on these distributions. Specifically, we interpret each hidden unit inside a layer of the caption model as representing the un-normalised log probability of some unknown image feature of interest for the caption generation process. We can then apply well understood statistical divergence measures to express the difference between images and create an unsupervised segmentation of video frames, classifying consecutive images of low divergence as belonging to the same context, and those of high divergence as belonging to different contexts. To provide a final summary of the video, we provide a group of selected frames and a text description accompanying them, allowing a user to perform a quick exploration of large unlabeled video databases.
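The divergence-based context segmentation described here can be sketched directly: treat hidden units as unnormalised log-probabilities, softmax them, and place a boundary wherever a symmetric KL divergence between consecutive frames spikes. The threshold and toy activations below are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def context_boundaries(hidden, thresh=0.5):
    """Mark a segment boundary at frame i when the symmetric KL
    divergence between frames i-1 and i exceeds `thresh`."""
    probs = [softmax(h) for h in hidden]
    bounds = []
    for i in range(1, len(probs)):
        p, q = probs[i - 1], probs[i]
        kl = np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))
        if kl > thresh:
            bounds.append(i)
    return bounds

# Four frames: two similar contexts, then an abrupt change at frame 2.
h = np.array([[2.0, 0.0, 0.0],
              [2.1, 0.0, 0.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.1, 3.0]])
print(context_boundaries(h))
```

Consecutive frames of low divergence fall into the same context; the single detected boundary splits the toy sequence into its two contexts.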
Automatic multiple zebrafish larvae tracking in unconstrained microscopic video conditions.
Wang, Xiaoying; Cheng, Eva; Burnett, Ian S; Huang, Yushi; Wlodkowic, Donald
2017-12-14
The accurate tracking of zebrafish larvae movement is fundamental to research in many biomedical, pharmaceutical, and behavioral science applications. However, the locomotive characteristics of zebrafish larvae are significantly different from those of adult zebrafish, so existing adult zebrafish tracking systems cannot reliably track larvae. Further, the far smaller size of larvae relative to the container makes the detection of water impurities inevitable, which either degrades the tracking of zebrafish larvae or requires very strict video imaging conditions that typically yield unreliable tracking results under realistic experimental conditions. This paper investigates the adaptation of advanced computer vision segmentation techniques and multiple object tracking algorithms to develop an accurate, efficient and reliable multiple zebrafish larvae tracking system. The proposed system has been tested on a set of single and multiple adult and larval zebrafish videos in a wide variety of (complex) video conditions, including shadowing, labels, water bubbles and background artifacts. Compared with existing state-of-the-art and commercial multiple organism tracking systems, the proposed system improves tracking accuracy by up to 31.57% in unconstrained video imaging conditions. To facilitate evaluation of zebrafish segmentation and tracking research, a dataset with annotated ground truth is also presented. The software is also publicly accessible.
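One building block of multi-larva tracking, frame-to-frame data association, can be sketched with a greedy nearest-neighbour matcher. The gating distance and coordinates are illustrative, and real systems use more robust assignment:

```python
def associate_detections(tracks, detections, max_dist=20.0):
    """Greedy nearest-neighbour data association: match each track's
    last position to the closest unclaimed detection within
    `max_dist` pixels; unmatched tracks get None. Distant impurity
    blobs fall outside the gate and are ignored."""
    taken, matches = set(), []
    for tx, ty in tracks:
        best, best_d = None, max_dist
        for j, (dx, dy) in enumerate(detections):
            d = ((tx - dx) ** 2 + (ty - dy) ** 2) ** 0.5
            if j not in taken and d < best_d:
                best, best_d = j, d
        if best is not None:
            taken.add(best)
        matches.append(best)
    return matches

# Two larvae plus one impurity blob far from both tracks.
matches = associate_detections([(10, 10), (50, 50)],
                               [(52, 49), (11, 9), (200, 200)])
print(matches)
```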
Linguistic Characteristics of Individuals with High Functioning Autism and Asperger Syndrome
ERIC Educational Resources Information Center
Seung, Hye Kyeung
2007-01-01
This study examined the linguistic characteristics of high functioning individuals with autism and Asperger syndrome. Each group consisted of 10 participants who were matched on sex, chronological age, and intelligence scores. Participants generated a narrative after watching a brief video segment of the Social Attribution Task video. Each…
Subjective evaluation of H.265/HEVC based dynamic adaptive video streaming over HTTP (HEVC-DASH)
NASA Astrophysics Data System (ADS)
Irondi, Iheanyi; Wang, Qi; Grecos, Christos
2015-02-01
The Dynamic Adaptive Streaming over HTTP (DASH) standard is becoming increasingly popular for real-time adaptive HTTP streaming of internet video in response to unstable network conditions. Integration of DASH streaming techniques with the new H.265/HEVC video coding standard is a promising area of research. The performance of HEVC-DASH systems has been previously evaluated by a few researchers using objective metrics; however, subjective evaluation would provide a better measure of the user's Quality of Experience (QoE) and overall performance of the system. This paper presents a subjective evaluation of an HEVC-DASH system implemented in a hardware testbed. Previous studies in this area have focused on the current H.264/AVC (Advanced Video Coding) or H.264/SVC (Scalable Video Coding) codecs; moreover, there has been no established standard test procedure for the subjective evaluation of DASH adaptive streaming. In this paper, we define a test plan for HEVC-DASH with a carefully justified data set employing longer video sequences that are sufficient to demonstrate the bitrate switching operations in response to various network condition patterns. We evaluate the end user's real-time QoE online by investigating the perceived impact of delay, different packet loss rates, fluctuating bandwidth, and the perceived quality of using different DASH video stream segment sizes on a video streaming session using different video sequences. The Mean Opinion Score (MOS) results give an insight into the performance of the system and the expectations of the users. The results from this study show the impact of different network impairments and different video segments on users' QoE, and further analysis may help in optimizing system performance.
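Although the paper evaluates QoE rather than adaptation logic, the bitrate switching that DASH clients perform is easy to sketch. A minimal throughput-based rate selector, with a hypothetical bitrate ladder and safety margin:

```python
def select_bitrate(ladder_kbps, throughput_kbps, safety=0.8):
    """Throughput-based rate selection: pick the highest rung of the
    bitrate ladder that fits within a safety margin of the measured
    throughput; fall back to the lowest rung when nothing fits."""
    budget = throughput_kbps * safety
    feasible = [b for b in sorted(ladder_kbps) if b <= budget]
    return feasible[-1] if feasible else min(ladder_kbps)

ladder = [350, 700, 1500, 3000, 6000]   # hypothetical HEVC ladder (kbps)
print(select_bitrate(ladder, 2200))     # 2200 * 0.8 = 1760 -> picks 1500
```

Each fetched segment's download time updates the throughput estimate, so fluctuating bandwidth produces exactly the switching events the subjects in such a study perceive.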
Adding Feminist Therapy to Videotape Demonstrations.
ERIC Educational Resources Information Center
Konrad, Jennifer L.; Yoder, Janice D.
2000-01-01
Provides directions for presenting a 32-minute series of four videotape segments that highlights the fundamental features of four approaches to psychotherapy, extending its reach to include a feminist perspective. Describes the approaches and included segments. Reports that students' comments demonstrate that the video sequence provided a helpful…
What Makes a Message Stick? The Role of Content and Context in Social Media Epidemics
2013-09-23
First, we propose visual memes, or frequently re-posted short video segments, for detecting and monitoring latent video interactions at scale. Content...interactions (such as quoting, or remixing, parts of a video). Visual memes are extracted by scalable detection algorithms that we develop, with...high accuracy. We further augment visual memes with text, via a statistical model of latent topics. We model content interactions on YouTube with
Lederman Science Center: Physicists Explain Exhibits
Robust and efficient fiducial tracking for augmented reality in HD-laparoscopic video streams
NASA Astrophysics Data System (ADS)
Mueller, M.; Groch, A.; Baumhauer, M.; Maier-Hein, L.; Teber, D.; Rassweiler, J.; Meinzer, H.-P.; Wegner, In.
2012-02-01
Augmented Reality (AR) is a convenient way of porting information from medical images into the surgical field of view and can deliver valuable assistance to the surgeon, especially in laparoscopic procedures. In addition, high definition (HD) laparoscopic video devices are a great improvement over the previously used low resolution equipment. However, in AR applications that rely on real-time detection of fiducials from video streams, the demand for efficient image processing has increased due to the introduction of HD devices. We present an algorithm based on the well-known Conditional Density Propagation (CONDENSATION) algorithm which can satisfy these new demands. By incorporating a prediction around an already existing and robust segmentation algorithm, we can speed up the whole procedure while leaving the robustness of the fiducial segmentation untouched. For evaluation purposes we tested the algorithm on recordings from real interventions, allowing for a meaningful interpretation of the results. Our results show that we can accelerate the segmentation by a factor of 3.5 on average. Moreover, the prediction information can be used to compensate for fiducials that are temporarily occluded or out of scope, providing greater stability.
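The prediction step that narrows the segmentation's search region can be sketched with a constant-velocity model standing in for full CONDENSATION sampling; the window size and coordinates below are illustrative:

```python
def predict_search_window(track, frame_shape, half_size=40):
    """Constant-velocity prediction of the next fiducial position,
    clamped to the frame, yielding a small ROI so the (unchanged)
    segmentation only scans part of the HD image."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    px, py = 2 * x1 - x0, 2 * y1 - y0          # last position + velocity
    h, w = frame_shape
    px = min(max(px, 0), w - 1)
    py = min(max(py, 0), h - 1)
    return (max(px - half_size, 0), max(py - half_size, 0),
            min(px + half_size, w - 1), min(py + half_size, h - 1))

# Fiducial moving right by 10 px/frame in a 1080p frame.
roi = predict_search_window([(100, 200), (110, 200)], (1080, 1920))
```

Restricting segmentation to such an ROI is what yields the reported speedup, while the unchanged detector inside the window preserves robustness.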
Video Comprehensibility and Attention in Very Young Children
Pempek, Tiffany A.; Kirkorian, Heather L.; Richards, John E.; Anderson, Daniel R.; Lund, Anne F.; Stevens, Michael
2010-01-01
Earlier research established that preschool children pay less attention to television that is sequentially or linguistically incomprehensible. This study determines the youngest age for which this effect can be found. One-hundred and three 6-, 12-, 18-, and 24-month-olds’ looking and heart rate were recorded while they watched Teletubbies, a television program designed for very young children. Comprehensibility was manipulated by either randomly ordering shots or reversing dialogue to become backward speech. Infants watched one normal segment and one distorted version of the same segment. Only 24-month-olds, and to some extent 18-month-olds, distinguished between normal and distorted video by looking for longer durations towards the normal stimuli. The results suggest that it may not be until the middle of the second year that children demonstrate the earliest beginnings of comprehension of video as it is currently produced. PMID:20822238
A clinical pilot study of a modular video-CT augmentation system for image-guided skull base surgery
NASA Astrophysics Data System (ADS)
Liu, Wen P.; Mirota, Daniel J.; Uneri, Ali; Otake, Yoshito; Hager, Gregory; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Siewerdsen, Jeffrey H.
2012-02-01
Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR)] can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data in real-time, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates mean re-projection accuracy of (0.7+/-0.3) pixels and mean target registration error of (2.3+/-1.5) mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.
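The reported mean re-projection accuracy is computed by comparing observed detections with 3-D points projected through the calibrated camera. A minimal pinhole sketch, with hypothetical intrinsics, no lens distortion, and points already in camera coordinates:

```python
import numpy as np

def reprojection_error(K, points_3d, observed_px):
    """Mean pixel distance between observed image detections and
    3-D points projected through intrinsic matrix K."""
    proj = (K @ points_3d.T).T          # homogeneous image coordinates
    proj = proj[:, :2] / proj[:, 2:3]   # perspective divide
    return float(np.linalg.norm(proj - observed_px, axis=1).mean())

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])         # hypothetical intrinsics
pts = np.array([[0.0, 0.0, 2.0], [0.1, -0.05, 2.5]])
obs = np.array([[320.0, 240.0], [352.0, 224.0]])
err = reprojection_error(K, pts, obs)
print(err)
```

A full calibration additionally estimates distortion coefficients and the camera pose, but this residual is the quantity the sub-pixel accuracy figure refers to.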
Hierarchical video summarization based on context clustering
NASA Astrophysics Data System (ADS)
Tseng, Belle L.; Smith, John R.
2003-11-01
A personalized video summary is dynamically generated in our video personalization and summarization system based on user preference and usage environment. The three-tier personalization system adopts the server-middleware-client architecture in order to maintain, select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. In this paper, the metadata includes visual semantic annotations and automatic speech transcriptions. Our personalization and summarization engine in the middleware selects the optimal set of desired video segments by matching shot annotations and sentence transcripts with user preferences. Besides finding the desired contents, the objective is to present a coherent summary. There are diverse methods for creating summaries, and we focus on the challenges of generating a hierarchical video summary based on context information. In our summarization algorithm, three inputs are used to generate the hierarchical video summary output. These inputs are (1) MPEG-7 metadata descriptions of the contents in the server, (2) user preference and usage environment declarations from the user client, and (3) context information including MPEG-7 controlled term list and classification scheme. In a video sequence, descriptions and relevance scores are assigned to each shot. Based on these shot descriptions, context clustering is performed to collect consecutively similar shots to correspond to hierarchical scene representations. The context clustering is based on the available context information, and may be derived from domain knowledge or rules engines. Finally, the selection of structured video segments to generate the hierarchical summary efficiently balances between scene representation and shot selection.
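The context-clustering step, collecting consecutively similar shots into scene-level groups, can be sketched with cosine similarity on shot annotation vectors; the threshold and feature vectors are illustrative, not the MPEG-7 pipeline:

```python
import math

def cluster_consecutive_shots(shot_features, sim_thresh=0.8):
    """Group consecutive shots into scenes: a shot joins the current
    cluster while its cosine similarity to the cluster's first shot
    stays above `sim_thresh`."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    clusters, current = [], [0]
    for i in range(1, len(shot_features)):
        if cosine(shot_features[current[0]], shot_features[i]) >= sim_thresh:
            current.append(i)
        else:
            clusters.append(current)
            current = [i]
    clusters.append(current)
    return clusters

# Hypothetical shot annotation vectors: two beach shots, then two city shots.
shots = [(1, 0, 0), (0.9, 0.1, 0), (0, 1, 0), (0, 0.95, 0.05)]
print(cluster_consecutive_shots(shots))
```

The resulting shot groups form the scene level of the hierarchy, from which the summarizer selects segments to balance scene representation against shot selection.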
The Great War. [Teaching Materials].
ERIC Educational Resources Information Center
Public Broadcasting Service, Washington, DC.
This package of teaching materials is intended to accompany an eight-part film series entitled "The Great War" (i.e., World War I), produced for public television. The package consists of a "teacher's guide,""video segment index,""student resource" materials, and approximately 40 large photographs. The video series is not a war story of battles,…
Optimizing Instructional Video for Preservice Teachers in an Online Technology Integration Course
ERIC Educational Resources Information Center
Ibrahim, Mohamed; Callaway, Rebecca; Bell, David
2014-01-01
This study assessed the effect of design instructional video based on the Cognitive Theory of Multimedia Learning by applying segmentation and signaling on the learning outcome of students in an online technology integration course. The study assessed the correlation between students' personal preferences (preferred learning styles and area…
ERIC Educational Resources Information Center
di Giura, Marcella Beacco
1994-01-01
The problems and value of television as instructional material for the second-language classroom are discussed, and a new videocassette series produced by the journal "Francais dans le Monde" is described. Criteria for topic and segment selection are outlined, and suggestions are made for classroom use. (MSE)
Evolving discriminators for querying video sequences
NASA Astrophysics Data System (ADS)
Iyengar, Giridharan; Lippman, Andrew B.
1997-01-01
In this paper we present a framework for content-based query and retrieval of information from large video databases. This framework enables content-based retrieval of video sequences by characterizing the sequences using motion, texture and colorimetry cues. This characterization is biologically inspired and results in a compact parameter space where every segment of video is represented by an 8-dimensional vector. Searching and retrieval are done accurately and in real time in this parameter space. Using this characterization, we then evolve a set of discriminators using Genetic Programming. Experiments indicate that these discriminators are capable of analyzing and characterizing video. The VideoBook is able to search and retrieve video sequences with 92% accuracy in real time. Experiments thus demonstrate that the characterization is capable of extracting higher-level structure from raw pixel values.
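Retrieval in such a compact 8-dimensional parameter space amounts to nearest-neighbour search. A sketch with randomly generated stand-in descriptors:

```python
import numpy as np

def query_video_segments(database, query, k=2):
    """Rank stored 8-D segment descriptors by Euclidean distance to
    the query vector; return the indices of the k best matches."""
    d = np.linalg.norm(database - query, axis=1)
    return [int(i) for i in np.argsort(d)[:k]]

rng = np.random.default_rng(0)
db = rng.normal(size=(100, 8))   # stand-in descriptors for 100 segments
db[42] = 0.0                     # plant a known match for the demo query
print(query_video_segments(db, np.zeros(8), k=1))
```

The compactness of the descriptor is what makes exhaustive comparison fast enough for real-time search even over large databases.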
Content-based management service for medical videos.
Mendi, Engin; Bayrak, Coskun; Cecen, Songul; Ermisoglu, Emre
2013-01-01
Development of health information technology has had a dramatic impact on improving the efficiency and quality of medical care. Developing interoperable health information systems for healthcare providers has the potential to improve the quality and equitability of patient-centered healthcare. In this article, we describe an automated content-based medical video analysis and management service that provides convenient and easy access to relevant medical video content without sequential scanning. The system facilitates effective temporal video segmentation and content-based visual information retrieval that enable a more reliable understanding of medical video content. The system is implemented as a Web- and mobile-based service and has the potential to offer a knowledge-sharing platform for efficient medical video content access.
The Webb Telescope's Actuators: Curving Mirrors in Space
2017-12-08
NASA image release December 9, 2010 Caption: The James Webb Space Telescope's Engineering Design Unit (EDU) primary mirror segment, coated with gold by Quantum Coating Incorporated. The actuator is located behind the mirror. Credit: Photo by Drew Noel NASA's James Webb Space Telescope is a wonder of modern engineering. As the planned successor to the Hubble Space Telescope, even the smallest parts of this giant observatory will play a critical role in its performance. A new video takes viewers behind the Webb's mirrors to investigate "actuators," one component that will help Webb focus on some of the earliest objects in the universe. The video, called "Got Your Back," is part of an ongoing video series about the Webb telescope called "Behind the Webb." It was produced at the Space Telescope Science Institute (STScI) in Baltimore, Md., and takes viewers behind the scenes with scientists and engineers who are creating the Webb telescope's components. During the 3-minute, 12-second video, STScI host Mary Estacion interviewed people involved in the project at Ball Aerospace in Boulder, Colo., and showed the actuators in action. The Webb telescope will study every phase in the history of our universe, ranging from the first luminous glows after the big bang, to the formation of solar systems capable of supporting life on planets like Earth, to the evolution of our own solar system. Measuring light this distant requires a primary mirror 6.5 meters (21 feet 4 inches) across, six times larger than the Hubble Space Telescope's mirror! Launching a mirror this large into space isn't feasible. Instead, Webb engineers and scientists devised a unique solution: building 18 mirrors that will act in unison as one large mirror. These mirrors are packaged together into three sections that fold up, much easier to fit inside a rocket. Each mirror is made from beryllium and weighs approximately 20 kilograms (46 pounds).
Once in space, getting these mirrors to focus correctly on faraway galaxies is another challenge entirely. Actuators, or tiny mechanical motors, provide the answer to achieving a single perfect focus. The primary and secondary mirror segments are both moved by six actuators that are attached to the back of the mirrors. The primary segment has an additional actuator at the center of the mirror that adjusts its curvature. The third mirror segment remains stationary. Lee Feinberg, Webb Optical Telescope Element Manager at NASA's Goddard Space Flight Center in Greenbelt, Md. explained "Aligning the primary mirror segments as though they are a single large mirror means each mirror is aligned to 1/10,000th the thickness of a human hair. This alignment has to be done at 50 degrees above absolute zero! What's even more amazing is that the engineers and scientists working on the Webb telescope literally had to invent how to do this." With the actuators in place, Brad Shogrin, Webb Telescope Manager at Ball Aerospace, Boulder, Colo, details the next step: attaching the hexapod (meaning six-footed) assembly and radius of curvature subsystem (ROC). "Radius of curvature" refers to the distance to the center point of the curvature of the mirror. Feinberg added "To understand the concept in a more basic sense, if you change that radius of curvature, you change the mirror's focus." The "Behind the Webb" video series is available in HQ, large and small Quicktime formats, HD, Large and Small WMV formats, and HD, Large and Small Xvid formats. To see the actuators being attached to the back of a telescope mirror in this new "Behind the Webb" video, visit: webbtelescope.org/webb_telescope/behind_the_webb/7 For more information about Webb's mirrors, visit: www.jwst.nasa.gov/mirrors.html For more information on the James Webb Space Telescope, visit: jwst.nasa.gov Rob Gutro NASA's Goddard Space Flight Center, Greenbelt, Md. 
Fully Automatic Segmentation of Fluorescein Leakage in Subjects With Diabetic Macular Edema
Rabbani, Hossein; Allingham, Michael J.; Mettu, Priyatham S.; Cousins, Scott W.; Farsiu, Sina
2015-01-01
Purpose. To create and validate software to automatically segment leakage area in real-world clinical fluorescein angiography (FA) images of subjects with diabetic macular edema (DME). Methods. Fluorescein angiography images obtained from 24 eyes of 24 subjects with DME were retrospectively analyzed. Both video and still-frame images were obtained using a Heidelberg Spectralis 6-mode HRA/OCT unit. We aligned early and late FA frames in the video by a two-step nonrigid registration method. To remove background artifacts, we subtracted early and late FA frames. Finally, after postprocessing steps, including detection and inpainting of the vessels, a robust active contour method was utilized to obtain leakage area in a 1500-μm-radius circular region centered at the fovea. Images were captured at different fields of view (FOVs) and were often contaminated with outliers, as is the case in real-world clinical imaging. Our algorithm was applied to these images with no manual input. Separately, all images were manually segmented by two retina specialists. The sensitivity, specificity, and accuracy of manual interobserver, manual intraobserver, and automatic methods were calculated. Results. The mean accuracy was 0.86 ± 0.08 for automatic versus manual, 0.83 ± 0.16 for manual interobserver, and 0.90 ± 0.08 for manual intraobserver segmentation methods. Conclusions. Our fully automated algorithm can reproducibly and accurately quantify the area of leakage of clinical-grade FA video and is congruent with expert manual segmentation. The performance was reliable for different DME subtypes. This approach has the potential to reduce time and labor costs and may yield objective and reproducible quantitative measurements of DME imaging biomarkers. PMID:25634978
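The core frame-subtraction and region-of-interest restriction can be sketched as below; simple thresholding stands in for the robust active contour, and all parameters are illustrative:

```python
import numpy as np

def leakage_mask(early, late, fovea, radius, thresh=0.3):
    """Late-minus-early FA frame subtraction (frames assumed already
    registered), thresholded and restricted to a circular region of
    interest centred on the fovea."""
    diff = late.astype(float) - early.astype(float)
    yy, xx = np.indices(early.shape)
    roi = (yy - fovea[0]) ** 2 + (xx - fovea[1]) ** 2 <= radius ** 2
    return (diff > thresh) & roi

early = np.zeros((9, 9))
late = np.zeros((9, 9))
late[4, 4] = 1.0        # hyperfluorescent leak at the centre
late[0, 0] = 1.0        # bright artifact outside the ROI
mask = leakage_mask(early, late, fovea=(4, 4), radius=2)
```

The ROI restriction mirrors the paper's 1500-um-radius circle around the fovea: the peripheral artifact is rejected while the central leak is kept.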
Visual Analytics and Storytelling through Video
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Pak C.; Perrine, Kenneth A.; Mackey, Patrick S.
2005-10-31
This paper supplements a video clip submitted to the Video Track of IEEE Symposium on Information Visualization 2005. The original video submission applies a two-way storytelling approach to demonstrate the visual analytics capabilities of a new visualization technique. The paper presents our video production philosophy, describes the plot of the video, explains the rationale behind the plot, and finally, shares our production experiences with our readers.
Video indexing based on image and sound
NASA Astrophysics Data System (ADS)
Faudemay, Pascal; Montacie, Claude; Caraty, Marie-Jose
1997-10-01
Video indexing is a major challenge for both scientific and economic reasons. Information extraction can sometimes be easier from the sound channel than from the image channel. We first present a multi-channel and multi-modal query interface to query sound, image and script through 'pull' and 'push' queries. We then summarize the segmentation phase, which needs information from the image channel. Detection of critical segments is proposed; it should speed up both automatic and manual indexing. We then present an overview of the information extraction phase. Information can be extracted from the sound channel through speaker recognition, vocal dictation with unconstrained vocabularies, and script alignment with speech. We present experimental results for these various techniques. Speaker recognition methods were tested on the TIMIT and NTIMIT databases. Vocal dictation was tested on newspaper sentences spoken by several speakers. Script alignment was tested on part of a cartoon movie, 'Ivanhoe'. For good quality sound segments, error rates are low enough for use in indexing applications. Major issues are the processing of sound segments with noise or music, and performance improvement through the use of appropriate, low-cost architectures or networks of workstations.
Texture-adaptive hyperspectral video acquisition system with a spatial light modulator
NASA Astrophysics Data System (ADS)
Fang, Xiaojing; Feng, Jiao; Wang, Yongjin
2014-10-01
We present a new hybrid camera system based on a spatial light modulator (SLM) to capture texture-adaptive high-resolution hyperspectral video. The hybrid camera system records a hyperspectral video with low spatial resolution using a gray camera and a high-spatial-resolution video using an RGB camera. The hyperspectral video is subsampled by the SLM. The subsampled points can be adaptively selected according to the texture characteristics of the scene by combining digital image analysis and computational processing. In this paper, we propose an adaptive sampling method utilizing texture segmentation and the wavelet transform (WT). We also demonstrate the effectiveness of the sampling pattern on the SLM with the proposed method.
2016-06-01
and material developers use an online game to crowdsource ideas from online players in order to increase viable synthetic prototypes. In entertainment... games, players often create videos of their game play to share with other players to demonstrate how to complete a segment of a game. This thesis... explores similar self-recorded videos of ESP game play and determines if they provide useful data to capability and material developers that can
Vodcasts and Captures: Using Multimedia to Improve Student Learning in Introductory Biology
ERIC Educational Resources Information Center
Walker, J. D.; Cotner, Sehoya; Beermann, Nicholas
2011-01-01
This study investigated the use of multimedia materials to enhance student learning in a large, introductory biology course. Two sections of this course were taught by the same instructor in the same semester. In one section, video podcasts or "vodcasts" were created which combined custom animation and video segments with music and…
Making History: An Indiana Teacher Uses Technology to Feel the History
ERIC Educational Resources Information Center
Technology & Learning, 2008
2008-01-01
Jon Carl's vision is simple: get students passionate about history by turning them into historians. To accomplish this, he created a class centered on documentary film-making. Students choose a topic, conduct research at local libraries, write a script, film video interviews, and create video segments of four to 15 minutes. District technology…
Selective Set Effects Produced by Television Adjunct in Learning from Text.
ERIC Educational Resources Information Center
Yi, Julie C.
This study used television segments to investigate the impact of multimedia in establishing context for text learning. Adult participants (n=128) were shown a video either before or after reading a story. The video shown before reading was intended to create a "set" for either a burglar or buyer perspective contained in the story. The…
Gradual cut detection using low-level vision for digital video
NASA Astrophysics Data System (ADS)
Lee, Jae-Hyun; Choi, Yeun-Sung; Jang, Ok-bae
1996-09-01
Digital video computing and organization is one of the important issues in multimedia systems, signal compression, and databases. Video should be segmented into shots to be used for identification and indexing. This approach requires a suitable method to automatically locate cut points in order to separate shots in a video. Automatic cut detection to isolate shots in a video has received considerable attention due to many practical applications: video databases, browsing, authoring systems, retrieval, and movies. Previous studies are based on a set of difference mechanisms that measure the content changes between video frames, but they could not detect gradual special effects such as dissolve, wipe, fade-in, fade-out, and structured flashing. In this paper, a new cut detection method for gradual transitions based on computer vision techniques is proposed, and experimental results applied to commercial video are presented and evaluated.
Automated fall detection on privacy-enhanced video.
Edgcomb, Alex; Vahid, Frank
2012-01-01
A privacy-enhanced video obscures the appearance of a person in the video. We consider four privacy enhancements: blurring of the person, silhouetting of the person, covering the person with a graphical box, and covering the person with a graphical oval. We demonstrate that an automated video-based fall detection algorithm can be as accurate on privacy-enhanced video as on raw video. The algorithm operated on video from a stationary in-home camera, using a foreground-background segmentation algorithm to extract a minimum bounding rectangle (MBR) around the motion in the video, and using time series shapelet analysis on the height and width of the rectangle to detect falls. We report accuracy applying fall detection on 23 scenarios depicted as raw video and privacy-enhanced videos involving a sole actor portraying normal activities and various falls. We found that fall detection on privacy-enhanced video, except for the common approach of blurring of the person, was competitive with raw video, and in particular that the graphical oval privacy enhancement yielded the same accuracy as raw video, namely 0.91 sensitivity and 0.92 specificity.
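The detection pipeline summarized above, foreground segmentation, a minimum bounding rectangle (MBR) around the motion, and time-series analysis of the rectangle's height and width, can be sketched in simplified form. The masks and the ratio-drop rule below are illustrative stand-ins: the actual algorithm uses time-series shapelet analysis on the MBR dimensions, not a plain threshold.

```python
# Illustrative sketch: compute the MBR of foreground pixels per frame, then
# flag a fall when the height/width aspect ratio drops sharply (a standing
# person is tall and narrow; a fallen person is short and wide).

def mbr(mask):
    """Minimum bounding rectangle (height, width) of True cells in a
    2-D boolean mask; (0, 0) if the mask is empty."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    if not rows:
        return 0, 0
    return rows[-1] - rows[0] + 1, cols[-1] - cols[0] + 1

def detect_fall(masks, ratio_drop=0.5):
    """Return the first frame index where the h/w ratio falls to at most
    `ratio_drop` times the previous frame's ratio, else None."""
    ratios = []
    for m in masks:
        h, w = mbr(m)
        ratios.append(h / w if w else 0.0)
    for i in range(1, len(ratios)):
        if ratios[i - 1] > 0 and ratios[i] <= ratios[i - 1] * ratio_drop:
            return i
    return None
```

A shapelet-based detector would replace the ratio test with learned discriminative subsequences of the height and width series, which is what makes the published method robust to the privacy enhancements.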
A unified framework for gesture recognition and spatiotemporal gesture segmentation.
Alon, Jonathan; Athitsos, Vassilis; Yuan, Quan; Sclaroff, Stan
2009-09-01
Within the context of hand gesture recognition, spatiotemporal gesture segmentation is the task of determining, in a video sequence, where the gesturing hand is located and when the gesture starts and ends. Existing gesture recognition methods typically assume either known spatial segmentation or known temporal segmentation, or both. This paper introduces a unified framework for simultaneously performing spatial segmentation, temporal segmentation, and recognition. In the proposed framework, information flows both bottom-up and top-down. A gesture can be recognized even when the hand location is highly ambiguous and when information about when the gesture begins and ends is unavailable. Thus, the method can be applied to continuous image streams where gestures are performed in front of moving, cluttered backgrounds. The proposed method consists of three novel contributions: a spatiotemporal matching algorithm that can accommodate multiple candidate hand detections in every frame, a classifier-based pruning framework that enables accurate and early rejection of poor matches to gesture models, and a subgesture reasoning algorithm that learns which gesture models can falsely match parts of other longer gestures. The performance of the approach is evaluated on two challenging applications: recognition of hand-signed digits gestured by users wearing short-sleeved shirts, in front of a cluttered background, and retrieval of occurrences of signs of interest in a video database containing continuous, unsegmented signing in American Sign Language (ASL).
Video Games and Children. ERIC Digest.
ERIC Educational Resources Information Center
Cesarone, Bernard
This digest examines data on video game use by children, explains ratings of video game violence, and reviews research on the effects of video games on children and adolescents. A recent study of seventh and eighth graders found that 65% of males and 57% of females played 1 to 6 hours of video games at home per week, and 38% of males and 16% of…
DIY Video Abstracts: Lessons from an ultimately successful experience
NASA Astrophysics Data System (ADS)
Brauman, K. A.
2013-12-01
A great video abstract can come together in as little as two days with only a laptop and a sense of adventure. From script to setup, here are tips to make the process practically pain-free. The content of every abstract is unique, but some pointers for writing a video script are universal. Keeping it short and clarifying the message into 4 or 5 single-issue segments make any video better. Making the video itself can be intimidating, but it doesn't have to be! Practical ideas to be discussed include setting up the script as a narrow column to avoid the appearance of reading and hunting for a colored backdrop. A lot goes into just two minutes of video, but for not too much effort the payoff is tremendous.
NASA Astrophysics Data System (ADS)
Sa, Qila; Wang, Zhihui
2018-03-01
At present, content-based video retrieval (CBVR) is the most mainstream video retrieval method, using the video's own features to perform automatic identification and retrieval. This method involves a key technology, i.e. shot segmentation. In this paper, a method of automatic video shot boundary detection with K-means clustering and improved adaptive dual-threshold comparison is proposed. First, the visual features of every frame are extracted and divided into two categories using the K-means clustering algorithm, namely, one with significant change and one with no significant change. Then, based on the classification results, the improved adaptive dual-threshold comparison method is used to determine the abrupt as well as gradual shot boundaries. Finally, an automatic video shot boundary detection system is achieved.
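The two steps in this abstract, K-means classification of inter-frame changes followed by dual-threshold comparison, can be sketched as follows. This is a hedged illustration: the difference values, the way the two thresholds are derived from the cluster centers, and the twin-comparison accumulation used for gradual transitions are plausible stand-ins, not the authors' exact formulation.

```python
# Sketch of adaptive dual-threshold shot boundary detection. A 1-D k-means
# (k=2) splits per-frame difference magnitudes into "quiet" and "changed"
# clusters; thresholds derived from the cluster centers then label single
# large differences as cuts and accumulated runs of moderate differences
# as gradual transitions (twin-comparison style).

def kmeans_1d(values, iters=20):
    """Two-cluster 1-D k-means; returns (low_center, high_center)."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        lo_vals = [v for v in values if abs(v - lo) <= abs(v - hi)]
        hi_vals = [v for v in values if abs(v - lo) > abs(v - hi)]
        if lo_vals:
            lo = sum(lo_vals) / len(lo_vals)
        if hi_vals:
            hi = sum(hi_vals) / len(hi_vals)
    return lo, hi

def detect_boundaries(diffs):
    """Return (cut_indices, gradual_(start, end)_pairs) for a list of
    inter-frame difference magnitudes."""
    lo, hi = kmeans_1d(diffs)
    t_high = (lo + hi) / 2.0   # adaptive high threshold: midpoint of centers
    t_low = lo * 2.0           # adaptive low threshold: twice the quiet center
    cuts, graduals = [], []
    i = 0
    while i < len(diffs):
        if diffs[i] >= t_high:
            cuts.append(i)
            i += 1
        elif diffs[i] >= t_low:
            start, acc = i, 0.0
            while i < len(diffs) and diffs[i] >= t_low:
                acc += diffs[i]
                i += 1
            if acc >= t_high:  # accumulated change looks like a transition
                graduals.append((start, i - 1))
        else:
            i += 1
    return cuts, graduals

# One sharp jump (a cut) and two consecutive moderate changes (a gradual
# transition) among otherwise quiet frames:
print(detect_boundaries([0.02, 0.03, 0.95, 0.02, 0.40, 0.45, 0.03]))
```

The real system classifies multi-dimensional visual features rather than a single scalar per frame, but the cluster-then-threshold structure is the same.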
Indexed Captioned Searchable Videos: A Learning Companion for STEM Coursework
NASA Astrophysics Data System (ADS)
Tuna, Tayfun; Subhlok, Jaspal; Barker, Lecia; Shah, Shishir; Johnson, Olin; Hovey, Christopher
2017-02-01
Videos of classroom lectures have proven to be a popular and versatile learning resource. A key shortcoming of the lecture video format is accessing the content of interest hidden in a video. This work meets this challenge with an advanced video framework featuring topical indexing, search, and captioning (ICS videos). Standard optical character recognition (OCR) technology was enhanced with image transformations for extraction of text from video frames to support indexing and search. The images and text on video frames are analyzed to divide lecture videos into topical segments. The ICS video player integrates indexing, search, and captioning in video playback, providing instant access to the content of interest. This video framework has been used by more than 70 courses in a variety of STEM disciplines and assessed by more than 4000 students. Results presented from the surveys demonstrate the value of the videos as a learning resource and the role played by videos in a student's learning process. Survey results also establish the value of indexing and search features in a video platform for education. This paper reports on the development and evaluation of the ICS videos framework and over 5 years of usage experience in several STEM courses.
ESPN2 Sports Figures Makes Math and Physics a Ball! 1996-97 Educator's Curriculum.
ERIC Educational Resources Information Center
Rusczyk, Richard; Lehoczky, Sandor
This guide is designed to accompany ESPN's SportsFigures video segments which were created to enhance the interest and learning progress of high school students in mathematics, physics, and physical science. Using actual, re-enacted, or staged events, the problems presented in each of the 16 Sports Figures segments illustrate the relationship…
Leveraging Automatic Speech Recognition Errors to Detect Challenging Speech Segments in TED Talks
ERIC Educational Resources Information Center
Mirzaei, Maryam Sadat; Meshgi, Kourosh; Kawahara, Tatsuya
2016-01-01
This study investigates the use of Automatic Speech Recognition (ASR) systems to epitomize second language (L2) listeners' problems in perception of TED talks. ASR-generated transcripts of videos often involve recognition errors, which may indicate difficult segments for L2 listeners. This paper aims to discover the root-causes of the ASR errors…
Hey! What's Space Station Freedom?
NASA Technical Reports Server (NTRS)
Vonehrenfried, Dutch
1992-01-01
This video, 'Hey! What's Space Station Freedom?', has been produced as a classroom tool geared toward middle school children. There are three segments to this video. Segment One is a message to teachers presented by Dr. Jeannine Duane, New Jersey, 'Teacher in Space'. Segment Two is a brief Social Studies section and features a series of Presidential Announcements by President John F. Kennedy (May 1961), President Ronald Reagan (July 1982), and President George Bush (July 1989). These historical announcements are speeches concerning the present and future objectives of the United States' space programs. In the last segment, Charlie Walker, former Space Shuttle astronaut, teaches a group of middle school children, through models, computer animation, and actual footage, what Space Station Freedom is, who is involved in its construction, how it is to be built, what each of the modules on the station is for, and how long and in what sequence this construction will occur. There is a brief animation segment where, through the use of cartoons, the children fly up to Space Station Freedom as astronauts, perform several experiments and are given a tour of the station, and fly back to Earth. Space Station Freedom will take four years to build and will have three lab modules, one from ESA and another from Japan, and one habitation module for the astronauts to live in.
Unsupervised motion-based object segmentation refined by color
NASA Astrophysics Data System (ADS)
Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris
2003-06-01
For various applications, such as data compression, structure from motion, medical imaging and video enhancement, there is a need for an algorithm that divides video sequences into independently moving objects. Because our focus is on video enhancement and structure from motion for consumer electronics, we strive for a low-complexity solution. For still images, several approaches exist based on colour, but these lack in both speed and segmentation quality. For instance, colour-based watershed algorithms produce a so-called oversegmentation with many segments covering each single physical object. Other colour segmentation approaches limit the number of segments to reduce this oversegmentation problem, but this often results in inaccurate edges or even missed objects. Most likely, colour is an inherently insufficient cue for real-world object segmentation, because real-world objects can display complex combinations of colours. For video sequences, however, an additional cue is available, namely the motion of objects. When different objects in a scene have different motion, the motion cue alone is often enough to reliably distinguish objects from one another and the background. However, because efficient motion estimators such as the 3DRS block matcher lack sufficient resolution, the resulting segmentation is not at pixel resolution but at block resolution. Existing pixel-resolution motion estimators are more sensitive to noise, suffer more from aperture problems, correspond less well to the true motion of objects than block-based approaches, or are too computationally expensive. From its tendency to oversegmentation it is apparent that colour segmentation is particularly effective near edges of homogeneously coloured areas.
On the other hand, block-based true motion estimation is particularly effective in heterogeneous areas, because heterogeneity improves the chance that a block is unique and thus decreases the chance of a wrong position producing a good match. Consequently, a number of methods exist which combine motion and colour segmentation. These methods use colour segmentation as a base for the motion segmentation and estimation, or perform an independent colour segmentation in parallel which is in some way combined with the motion segmentation. The presented method uses both techniques to complement each other by first segmenting on motion cues and then refining the segmentation with colour. To our knowledge, few methods adopt this approach. One example is [meshrefine]. That method uses an irregular mesh, which hinders its efficient implementation in consumer electronics devices, and it produces a foreground/background segmentation, while our applications call for the segmentation of multiple objects.
NEW METHOD. As mentioned above, we start with motion segmentation and afterwards refine the edges of this segmentation with a pixel-resolution colour segmentation method. There are several reasons for this approach:
+ Motion segmentation does not produce the oversegmentation which colour segmentation methods normally produce, because objects are more likely to have colour discontinuities than motion discontinuities. In this way, the colour segmentation only has to be done at the edges of segments, confining it to a smaller part of the image. In such a part, it is more likely that the colour of an object is homogeneous.
+ This approach restricts the computationally expensive pixel-resolution colour segmentation to a subset of the image. Together with the very efficient 3DRS motion estimation algorithm, this helps to reduce the computational complexity.
+ The motion cue alone is often enough to reliably distinguish objects from one another and the background.
To obtain the motion vector fields, a variant of the 3DRS block-based motion estimator which analyses three frames of input was used. The 3DRS motion estimator is known for its ability to estimate motion vectors which closely resemble the true motion.
BLOCK-BASED MOTION SEGMENTATION. As mentioned above, we start with a block-resolution segmentation based on motion vectors. The presented method is inspired by the well-known K-means segmentation method [K-means]. Several other methods (e.g. [kmeansc]) adapt K-means for connectedness by adding a weighted shape error; this adds the difficulty of finding the correct weights for the shape parameters, and such methods often bias one particular pre-defined shape. The presented method, which we call K-regions, encourages connectedness because only blocks at the edges of segments may be assigned to another segment. This constrains the segmentation method to such a degree that it allows the use of least squares for the robust fitting of affine motion models for each segment. Contrary to [parmkm], the segmentation step still operates on vectors instead of model parameters. To make the segmentation temporally consistent, the segmentation of the previous frame is used as initialisation for every new frame. We also present a scheme which makes the algorithm independent of the initially chosen number of segments.
COLOUR-BASED INTRA-BLOCK SEGMENTATION. The block-resolution motion-based segmentation forms the starting point for the pixel-resolution segmentation, which is obtained by reclassifying pixels only at the edges of clusters. We assume that an edge between two objects can be found in either one of two neighbouring blocks that belong to different clusters. This assumption allows us to do the pixel-resolution segmentation on each pair of such neighbouring blocks separately. Because of the local nature of the segmentation, it largely avoids problems with heterogeneously coloured areas. Because no new segments are introduced in this step, it also does not suffer from oversegmentation problems, and the method has no problems with bifurcations. For the pixel-resolution segmentation itself, we reclassify pixels so as to optimise an error norm which favours similarly coloured regions and straight edges.
SEGMENTATION MEASURE. To assist in the evaluation of the proposed algorithm we developed a quality metric. Because the problem does not have an exact specification, we define a ground-truth output which we find desirable for a given input, and we define the measure of segmentation quality as how different the segmentation is from this ground truth. Our measure enables us to evaluate oversegmentation and undersegmentation separately, and to evaluate which parts of a frame suffer from either. The proposed algorithm has been tested on several typical sequences.
CONCLUSIONS. In this abstract we presented a new video segmentation method which performs well in segmenting multiple independently moving foreground objects from each other and the background. It combines the strong points of both colour and motion segmentation in the way we expected. One weak point is that the method suffers from undersegmentation when adjacent objects display similar motion; in sequences with detailed backgrounds the segmentation will sometimes display noisy edges. Apart from these results, we think that some of the techniques, and in particular the K-regions technique, may be useful for other two-dimensional data segmentation problems.
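The K-regions constraint described above, that only blocks on segment edges may switch segments, can be sketched roughly as follows. The grids are invented, a per-segment mean motion vector stands in for the least-squares affine motion model, and a 4-neighbour rule defines "edge" blocks; none of this is the authors' exact algorithm.

```python
# Sketch of edge-constrained motion segmentation in the spirit of K-regions:
# summarize each segment by its mean motion vector (a stand-in for an affine
# model), then re-assign only blocks whose neighbourhood spans more than one
# segment to the best-fitting neighbouring segment.

def neighbours(r, c, rows, cols):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < rows and 0 <= c + dc < cols:
            yield r + dr, c + dc

def k_regions(vectors, labels, iters=10):
    """vectors/labels: 2-D grids of (vx, vy) tuples and segment ids.
    Mutates and returns `labels`."""
    rows, cols = len(vectors), len(vectors[0])
    for _ in range(iters):
        # Mean motion vector per segment.
        sums, counts = {}, {}
        for r in range(rows):
            for c in range(cols):
                l, (vx, vy) = labels[r][c], vectors[r][c]
                sx, sy = sums.get(l, (0.0, 0.0))
                sums[l] = (sx + vx, sy + vy)
                counts[l] = counts.get(l, 0) + 1
        means = {l: (sx / counts[l], sy / counts[l])
                 for l, (sx, sy) in sums.items()}
        changed = False
        for r in range(rows):
            for c in range(cols):
                cand = {labels[r][c]}
                for nr, nc in neighbours(r, c, rows, cols):
                    cand.add(labels[nr][nc])
                if len(cand) == 1:
                    continue  # interior block: may not switch segments
                vx, vy = vectors[r][c]
                best = min(cand, key=lambda l: (vx - means[l][0]) ** 2
                                             + (vy - means[l][1]) ** 2)
                if best != labels[r][c]:
                    labels[r][c] = best
                    changed = True
        if not changed:
            break
    return labels
```

In the abstract's method the edge constraint is what makes robust affine fitting per segment tractable; here the mean vector plays that role to keep the sketch short.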
Creating Micro-Videos to Demonstrate Technology Learning
ERIC Educational Resources Information Center
Frydenberg, Mark; Andone, Diana
2016-01-01
Short videos, also known as micro-videos, have emerged as a platform for sharing ideas, experiences, and life events on online social networks. This paper shares preliminary results of a study involving students from two universities who created six-second videos using the Vine mobile app to explain or illustrate technology concepts. An analysis…
Videos and Animations for Vocabulary Learning: A Study on Difficult Words
ERIC Educational Resources Information Center
Lin, Chih-cheng; Tseng, Yi-fang
2012-01-01
Studies on using still images and dynamic videos in multimedia annotations produced inconclusive results. A further examination, however, showed that the principle of using videos to explain complex concepts was not observed in the previous studies. This study was intended to investigate whether videos, compared with pictures, better assist…
Digital Video (DV): A Primer for Developing an Enterprise Video Strategy
NASA Astrophysics Data System (ADS)
Talovich, Thomas L.
2002-09-01
The purpose of this thesis is to provide an overview of digital video production and delivery. The thesis presents independent research demonstrating the educational value of incorporating video and multimedia content in training and education programs. The thesis explains the fundamental concepts associated with the process of planning, preparing, and publishing video content and assists in the development of follow-on strategies for incorporation of video content into distance training and education programs. The thesis provides an overview of the following technologies: Digital Video, Digital Video Editors, Video Compression, Streaming Video, and Optical Storage Media.
Astrometric and Photometric Analysis of the September 2008 ATV-1 Re-Entry Event
NASA Technical Reports Server (NTRS)
Mulrooney, Mark K.; Barker, Edwin S.; Maley, Paul D.; Beaulieu, Kevin R.; Stokely, Christopher L.
2008-01-01
NASA utilized image-intensified video cameras for ATV data acquisition from a jet flying at 12.8 km. Afterwards the video was digitized and then analyzed with a modified commercial software package, Image Systems Trackeye. Astrometric results were limited by saturation, plate scale, and an imposed linear plate solution based on field reference stars. Time-dependent fragment angular trajectories, velocities, accelerations, and luminosities were derived in each video segment. It was evident that individual fragments behave differently. Photometric accuracy was insufficient to confidently assess correlations between luminosity and fragment spatial behavior (velocity, deceleration). Use of high-resolution digital video cameras in the future should remedy this shortcoming.
Intelligent video storage of visual evidences on site in fast deployment
NASA Astrophysics Data System (ADS)
Desurmont, Xavier; Bastide, Arnaud; Delaigle, Jean-Francois
2004-07-01
In this article we present a generic, flexible, scalable and robust approach for an intelligent real-time forensic visual system. The proposed implementation can be rapidly deployed and requires minimal logistic support, as it embeds low-complexity devices (PCs and cameras) that communicate through a wireless network. The goal of these advanced tools is to provide intelligent video storage of potential video evidence for fast intervention during deployment around a hazardous sector after a terrorist attack, a disaster, an air crash, or before an attempted one. Advanced video analysis tools, such as segmentation and tracking, are provided to support intelligent storage and annotation.
MPEG-7 audio-visual indexing test-bed for video retrieval
NASA Astrophysics Data System (ADS)
Gagnon, Langis; Foucher, Samuel; Gouaillier, Valerie; Brun, Christelle; Brousseau, Julie; Boulianne, Gilles; Osterrath, Frederic; Chapdelaine, Claude; Dutrisac, Julie; St-Onge, Francis; Champagne, Benoit; Lu, Xiaojian
2003-12-01
This paper reports on the development status of a Multimedia Asset Management (MAM) test-bed for content-based indexing and retrieval of audio-visual documents within the MPEG-7 standard. The project, called "MPEG-7 Audio-Visual Document Indexing System" (MADIS), specifically targets the indexing and retrieval of video shots and key frames from documentary film archives, based on audio-visual content like face recognition, motion activity, speech recognition and semantic clustering. The MPEG-7/XML encoding of the film database is done off-line. The description decomposition is based on a temporal decomposition into visual segments (shots), key frames and audio/speech sub-segments. The visible outcome will be a web site that allows video retrieval using a proprietary XQuery-based search engine, accessible to members at the Canadian National Film Board (NFB) Cineroute site. For example, end-users will be able to ask for movie shots in the database that were produced in a specific year, that contain the face of a specific actor speaking a specific word, and in which there is no motion activity. Video streaming is performed over the high-bandwidth CA*net network deployed by CANARIE, a public Canadian Internet development organization.
Grayscale image segmentation for real-time traffic sign recognition: the hardware point of view
NASA Astrophysics Data System (ADS)
Cao, Tam P.; Deng, Guang; Elton, Darrell
2009-02-01
In this paper, we study several grayscale-based image segmentation methods for real-time road sign recognition applications on an FPGA hardware platform. The performance of different image segmentation algorithms in different lighting conditions are initially compared using PC simulation. Based on these results and analysis, suitable algorithms are implemented and tested on a real-time FPGA speed sign detection system. Experimental results show that the system using segmented images uses significantly less hardware resources on an FPGA while maintaining comparable system's performance. The system is capable of processing 60 live video frames per second.
Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard
2013-01-01
Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.
PMID:24391704
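The mean-average-precision metric reported above can be computed with a small average-precision routine; the toy segment scores and violence labels below are invented for illustration:

```python
import numpy as np

def average_precision(scores, labels):
    """AP = mean of the precision values at each true-positive rank."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    rel = np.asarray(labels)[order]
    hits = np.cumsum(rel)
    prec_at_hits = hits[rel == 1] / (np.flatnonzero(rel == 1) + 1)
    return prec_at_hits.mean()

# Toy ranking of video segments by "violence" score, 1 = violent.
scores = [0.9, 0.8, 0.7, 0.6, 0.5]
labels = [1, 0, 1, 0, 0]
ap = average_precision(scores, labels)   # precisions 1/1 and 2/3, averaged
```

MAP is then the mean of such AP values over all queries (here, movies).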
Hierarchical vs non-hierarchical audio indexation and classification for video genres
NASA Astrophysics Data System (ADS)
Dammak, Nouha; BenAyed, Yassine
2018-04-01
In this paper, Support Vector Machines (SVMs) are used for segmenting and indexing video genres based only on audio features extracted at the block level, which has the prominent advantage of capturing local temporal information. The main contribution of our study is to show the marked effect on classification accuracy of using a hierarchical categorization structure based on the Mel Frequency Cepstral Coefficients (MFCC) audio descriptor. The classification covers three common video genres: sports videos, music clips and news scenes. The sub-classification may divide each genre into several multi-speaker and multi-dialect sub-genres. The validation of this approach was carried out on over 360 minutes of video, yielding a classification accuracy of over 99%.
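A minimal sketch of the hierarchical idea, with a nearest-centroid stand-in for the SVM stage and random 13-dimensional vectors standing in for block-level MFCC features (the class centers, dimensions and data are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for block-level MFCC vectors, one cloud per genre.
centers = {"sports": 0.0, "music": 5.0, "news": -5.0}
train = {g: rng.normal(c, 1.0, size=(50, 13)) for g, c in centers.items()}

# Level 1: nearest-centroid genre classifier (placeholder for the SVM stage).
centroids = {g: X.mean(axis=0) for g, X in train.items()}

def classify_genre(x):
    return min(centroids, key=lambda g: np.linalg.norm(x - centroids[g]))

# A held-out "music" block lands in the music branch, after which a second,
# genre-specific classifier would pick the multi-speaker/dialect sub-genre.
probe = rng.normal(5.0, 1.0, size=13)
label = classify_genre(probe)
```

The hierarchy matters because the level-2 classifiers only ever see blocks routed to their genre, simplifying each decision boundary.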
Automatic colonic lesion detection and tracking in endoscopic videos
NASA Astrophysics Data System (ADS)
Li, Wenjing; Gustafsson, Ulf; A-Rahim, Yoursif
2011-03-01
The biology of colorectal cancer offers an opportunity for both early detection and prevention. Compared with other imaging modalities, optical colonoscopy is the procedure of choice for simultaneous detection and removal of colonic polyps. Computer assisted screening makes it possible to assist physicians and potentially improve the accuracy of the diagnostic decision during the exam. This paper presents an unsupervised method to detect and track colonic lesions in endoscopic videos. The aim of the lesion screening and tracking is to facilitate detection of polyps and abnormal mucosa in real time as the physician is performing the procedure. For colonic lesion detection, the conventional marker controlled watershed based segmentation is used to segment the colonic lesions, followed by an adaptive ellipse fitting strategy to further validate the shape. For colonic lesion tracking, a mean shift tracker with background modeling is used to track the target region from the detection phase. The approach has been tested on colonoscopy videos acquired during regular colonoscopic procedures and demonstrated promising results.
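The ellipse-fitting validation step can be sketched via region moments; the blob and the 3:1 aspect-ratio cutoff below are invented for illustration, not taken from the paper:

```python
import numpy as np

def ellipse_axes(mask):
    """Full axis lengths of the moment-equivalent ellipse of a binary region."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs - xs.mean(), ys - ys.mean()])
    evals = np.sort(np.linalg.eigvalsh(np.cov(pts)))[::-1]
    return 4.0 * np.sqrt(evals)   # major, minor

# A roughly elliptical candidate region (semi-axes 20 and 10 pixels).
yy, xx = np.mgrid[:64, :64]
blob = ((xx - 32) / 20.0) ** 2 + ((yy - 32) / 10.0) ** 2 <= 1.0
major, minor = ellipse_axes(blob)
is_plausible = major / minor < 3.0   # made-up polyp-shape cutoff
```

Candidate regions from the watershed stage that fail such a shape check would be discarded before tracking.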
An Effective Profile Based Video Browsing System for e-Learning
ERIC Educational Resources Information Center
Premaratne, S. C.; Karunaratna, D. D.; Hewagamage, K. P.
2007-01-01
E-learning has acquired a prime place in many discussions recently. A number of research efforts around the world are trying to enhance education and training through improving e-learning facilities. This paper briefly explains one such attempt aimed at designing a system to support video clips in e-learning and explains how profiles of the…
NASA Astrophysics Data System (ADS)
Kaur, Berinderjeet; Tay, Eng Guan; Toh, Tin Lam; Leong, Yew Hoong; Lee, Ngan Hoe
2018-03-01
A study of school mathematics curriculum enacted by competent teachers in Singapore secondary schools is a programmatic research project at the National Institute of Education (NIE) funded by the Ministry of Education (MOE) in Singapore through the Office of Education Research (OER) at NIE. The main goal of the project is to collect a set of data that would be used by two studies to research the enacted secondary school mathematics curriculum. The project aims to examine how competent experienced secondary school teachers implement the designated curriculum prescribed by the MOE in the 2013 revision of curriculum. It does this firstly by examining the video recordings of the classroom instruction and interactions between secondary school mathematics teachers and their students, as it is these interactions that fundamentally determine the nature of the actual mathematics learning and teaching that take place in the classroom. It also examines content through the instructional materials used—their preparation, use in classroom and as homework. The project comprises a video segment and a survey segment. Approximately 630 secondary mathematics teachers and 600 students are participating in the project. The data collection for the video segment of the project is guided by the renowned complementary accounts methodology while the survey segment adopts a self-report questionnaire approach. The findings of the project will serve several purposes. They will provide timely feedback to mathematics specialists in the MOE, inform pre-service and professional development programmes for mathematics teachers at the NIE and contribute towards articulation of "Mathematics pedagogy in Singapore secondary schools" that is evidence based.
Japanese migration in contemporary Japan: economic segmentation and interprefectural migration.
Fukurai, H
1991-01-01
This paper examines the economic segmentation model in explaining 1985-86 Japanese interregional migration. The analysis takes advantage of statistical graphic techniques to illustrate the following substantive issues of interregional migration: (1) to examine whether economic segmentation significantly influences Japanese regional migration and (2) to explain socioeconomic characteristics of prefectures for both in- and out-migration. Analytic techniques include a latent structural equation (LISREL) methodology and statistical residual mapping. The residual dispersion patterns, for instance, suggest the extent to which socioeconomic and geopolitical variables explain migration differences by showing unique clusters of unexplained residuals. The analysis further points out that extraneous factors such as high residential land values, significant commuting populations, and regional-specific cultures and traditions need to be incorporated in the economic segmentation model in order to assess the extent of the model's reliability in explaining the pattern of interprefectural migration.
Two novel motion-based algorithms for surveillance video analysis on embedded platforms
NASA Astrophysics Data System (ADS)
Vijverberg, Julien A.; Loomans, Marijn J. H.; Koeleman, Cornelis J.; de With, Peter H. N.
2010-05-01
This paper proposes two novel motion-vector based techniques for target detection and target tracking in surveillance videos. The algorithms are designed to operate on a resource-constrained device, such as a surveillance camera, and to reuse the motion vectors generated by the video encoder. The first novel algorithm for target detection uses motion vectors to construct a consistent motion mask, which is combined with a simple background segmentation technique to obtain a segmentation mask. The second proposed algorithm aims at multi-target tracking and uses motion vectors to assign blocks to targets employing five features. The weights of these features are adapted based on the interaction between targets. These algorithms are combined in one complete analysis application. The performance of this application for target detection has been evaluated for the i-LIDS sterile zone dataset and achieves an F1-score of 0.40-0.69. The performance of the analysis algorithm for multi-target tracking has been evaluated using the CAVIAR dataset and achieves an MOTP of around 9.7 and MOTA of 0.17-0.25. On a selection of targets in videos from other datasets, the achieved MOTP and MOTA are 8.8-10.5 and 0.32-0.49 respectively. The execution time on a PC-based platform is 36 ms. This includes the 20 ms for generating motion vectors, which are also required by the video encoder.
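A minimal sketch of the first algorithm's idea, with a hand-made 8x8 grid of motion vectors standing in for the encoder's output (the magnitude threshold and the background model are assumptions):

```python
import numpy as np

# Synthetic 8x8 grid of encoder motion vectors (dy, dx per 16x16 block);
# a moving target occupies blocks [2:5, 2:5].
mv = np.zeros((8, 8, 2))
mv[2:5, 2:5] = [4.0, -3.0]

# Consistent-motion mask: blocks whose vector magnitude exceeds a threshold.
motion_mask = np.linalg.norm(mv, axis=2) > 1.0

# Simple background model: blocks currently flagged as static background.
background_static = np.ones((8, 8), dtype=bool)
background_static[2:5, 2:5] = False

# Combine both cues into the segmentation mask.
segmentation_mask = motion_mask & ~background_static
```

Reusing the encoder's motion vectors is what keeps the cost low enough for an embedded camera: no additional motion estimation is performed.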
Diavideos: a diabetes health video portal.
Sánchez-Bocanegra, C L; Rivero-Rodriguez, A; Fernández-Luque, L; Sevillano, J L
2013-01-01
Diavideos is a web platform that collects trustworthy diabetes health videos from YouTube and offers them in an easy way. YouTube is a big repository of health videos, but good content is sometimes mixed with misleading and harmful videos, such as videos promoting anorexia [1]. Diavideos is a web portal that provides easy access to a repository of trustworthy diabetes videos. This poster describes Diavideos and explains the crawling method used to retrieve these videos from trusted channels.
Video repairing under variable illumination using cyclic motions.
Jia, Jiaya; Tai, Yu-Wing; Wu, Tai-Pang; Tang, Chi-Keung
2006-05-01
This paper presents a complete system capable of synthesizing a large number of pixels that are missing due to occlusion or damage in an uncalibrated input video. These missing pixels may correspond to the static background or cyclic motions of the captured scene. Our system employs user-assisted video layer segmentation, while the main processing in video repair is fully automatic. The input video is first decomposed into the color and illumination videos. The necessary temporal consistency is maintained by tensor voting in the spatio-temporal domain. Missing colors and illumination of the background are synthesized by applying image repairing. Finally, the occluded motions are inferred by spatio-temporal alignment of collected samples at multiple scales. We experimented on our system with some difficult examples with variable illumination, where the capturing camera can be stationary or in motion.
ERIC Educational Resources Information Center
Stevens, Reed; Hall, Rogers
1997-01-01
Reports on an exploratory study of how people see and explain a prominent exhibit (Tornado) at an interactive science museum (the Exploratorium). Data was assembled using a novel, technically mediated activity system (Video Traces). Argues that Video Traces is an effective tool and discusses an expanded Video Traces system. (Author/DKM)
Creating Micro-Videos to Demonstrate Technology Learning and Digital Literacy
ERIC Educational Resources Information Center
Frydenberg, Mark; Andone, Diana
2016-01-01
Purpose: Short videos, also known as micro-videos, have emerged as a platform for sharing ideas, experiences and life events via online social networks. This paper aims to share preliminary results of a study, involving students from two universities who created six-second videos using the Vine mobile app to explain or illustrate technological…
Geoscience Videos and Their Role in Supporting Student Learning
ERIC Educational Resources Information Center
Wiggen, Jennifer; McDonnell, David
2017-01-01
A series of short (5 to 7 minutes long) geoscience videos were created to support student learning in a flipped class setting for an introductory geology class at North Carolina State University. Videos were made using a stylus, tablet, microphone, and video editing software. Essentially, we narrate a slide, sketch a diagram, or explain a figure…
Activity-based exploitation of Full Motion Video (FMV)
NASA Astrophysics Data System (ADS)
Kant, Shashi
2012-06-01
Video has been a game-changer in how US forces are able to find, track and defeat their adversaries. With millions of minutes of video being generated from an increasing number of sensor platforms, the DOD has stated that the rapid increase in video is overwhelming its analysts. The manpower required to view and garner useable information from the flood of video is unaffordable, especially in light of current fiscal restraints. "Search" within full-motion video has traditionally relied on human tagging of content, and on video metadata, to provide filtering and locate segments of interest in the context of an analyst query. Our approach utilizes a novel machine-vision based approach to index FMV, using object recognition and tracking, and events and activities detection. This approach enables FMV exploitation in real-time, as well as a forensic look-back within archives. It can help get the most information out of video sensor collection, help overburdened analysts focus their attention and form connections in activity over time, and conserve national fiscal resources in exploiting FMV.
Extraction and analysis of neuron firing signals from deep cortical video microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerekes, Ryan A; Blundon, Jay
We introduce a method for extracting and analyzing neuronal activity time signals from video of the cortex of a live animal. The signals correspond to the firing activity of individual cortical neurons. Activity signals are based on the changing fluorescence of calcium indicators in the cells over time. We propose a cell segmentation method that relies on a user-specified center point, from which the signal extraction method proceeds. A stabilization approach is used to reduce tissue motion in the video. The extracted signal is then processed to flatten the baseline and detect action potentials. We show results from applying the method to a cortical video of a live mouse.
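The baseline-flattening and event-detection steps can be sketched on a synthetic fluorescence trace (the running-percentile baseline, the thresholds and the trace itself are assumptions, not the authors' exact processing):

```python
import numpy as np

# Synthetic fluorescence trace: slow baseline drift + three calcium transients.
t = np.arange(600)
trace = 100.0 + 0.02 * t
for onset in (100, 300, 500):
    trace[onset:onset + 40] += 30.0 * np.exp(-np.arange(40) / 10.0)

# Flatten the baseline with a running 20th-percentile estimate, then detect
# events as local maxima of dF/F above a threshold (a minimal sketch).
win = 101
pad = np.pad(trace, win // 2, mode="edge")
rolling_baseline = np.array(
    [np.percentile(pad[i:i + win], 20) for i in range(len(trace))]
)
dff = (trace - rolling_baseline) / rolling_baseline
events = np.flatnonzero(
    (dff[1:-1] > 0.1) & (dff[1:-1] >= dff[:-2]) & (dff[1:-1] > dff[2:])
) + 1
```

Each detected local maximum corresponds to the onset of one simulated action-potential-driven transient.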
Beyond a Fad: Why Video Games Should Be Part of 21st Century Libraries
ERIC Educational Resources Information Center
Buchanan, Kym; Elzen, Angela M. Vanden
2012-01-01
We believe video games have a place in libraries. We start by describing two provocative video games. Next, we offer a framework for the general mission of libraries, including access, motivation, and guidance. As a medium, video games have some distinguishing traits: they are visual, interactive, and based on simulations. We explain how these…
Automatic and quantitative measurement of laryngeal video stroboscopic images.
Kuo, Chung-Feng Jeffrey; Kuo, Joseph; Hsiao, Shang-Wun; Lee, Chi-Lung; Lee, Jih-Chin; Ke, Bo-Han
2017-01-01
The laryngeal video stroboscope is an important instrument for physicians to analyze abnormalities and diseases in the glottal area, and it has been widely used around the world. However, without quantized indices, physicians can only make subjective judgments on glottal images. We designed a new laser projection marking module and applied it to the laryngeal video stroboscope to provide scale conversion reference parameters for glottal imaging and to convert the physiological parameters of the glottis. Image processing technology was used to segment the important image regions of interest. Information on the glottis was quantified, and the vocal fold image segmentation system was completed to assist clinical diagnosis and increase accuracy. Regarding image processing, histogram equalization was used to enhance glottal image contrast. A center-weighted median filter removes image noise while retaining the texture of the glottal image. Statistical threshold determination was used for automatic segmentation of the glottal image. As the glottis image contains saliva and light spots, which are classified as noise, this noise was eliminated by morphological erosion, dilation, opening, and closing to highlight the vocal area. We also used image processing to automatically identify the vocal fold region in order to quantify information from the glottal image, such as glottal area, vocal fold perimeter, vocal fold length, glottal width, and vocal fold angle. The quantized glottis image database was created to assist physicians in diagnosing glottal diseases more objectively.
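A center-weighted median filter of the kind mentioned above can be sketched as follows (3x3 window with a made-up center weight of 3; not the paper's implementation):

```python
import numpy as np

def center_weighted_median(img, center_weight=3):
    """3x3 center-weighted median: the center pixel is replicated
    `center_weight` times before taking the median, which removes impulse
    noise while preserving edges and texture better than a plain median."""
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3].ravel().tolist()
            win += [padded[i + 1, j + 1]] * (center_weight - 1)
            out[i, j] = np.median(win)
    return out

# Impulse noise on a flat region is removed; a step edge survives.
img = np.full((8, 8), 50.0)
img[:, 4:] = 200.0          # vertical edge
img[2, 1] = 255.0           # salt noise
den = center_weighted_median(img)
```

Raising the center weight biases the output toward the original pixel, trading noise removal for texture preservation.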
Seeing and Doing Science--With Video.
ERIC Educational Resources Information Center
Berger, Michelle Abel
1994-01-01
The article presents a video-based unit on camouflage for students in grades K-5, explaining how to make the classroom VCR a dynamic teaching tool. Information is offered on introducing the unit, active viewing strategies, and follow-up activities. Tips for teaching with video are included. (SM)
MPEG-4 ASP SoC receiver with novel image enhancement techniques for DAB networks
NASA Astrophysics Data System (ADS)
Barreto, D.; Quintana, A.; García, L.; Callicó, G. M.; Núñez, A.
2007-05-01
This paper presents a system for real-time video reception in low-power mobile devices using Digital Audio Broadcast (DAB) technology for transmission. A demo receiver terminal is designed on an FPGA platform using the Advanced Simple Profile (ASP) MPEG-4 standard for video decoding. In order to meet the demanding DAB requirements, the bandwidth of the encoded sequence must be drastically reduced. To this end, prior to the MPEG-4 coding stage, a pre-processing stage is performed. It is composed first of a segmentation phase according to motion and texture, based on Principal Component Analysis (PCA) of the input video sequence, and second of a down-sampling phase, which depends on the segmentation results. As a result of the segmentation task, a set of texture and motion maps is obtained. These motion and texture maps are also included in the bit-stream as user-data side information and are therefore known to the receiver. For all bit-rates, the whole encoder/decoder system proposed in this paper exhibits higher visual image quality than the alternative encoding/decoding method, assuming equal image sizes. A complete analysis of both techniques has also been performed to provide the optimum motion and texture maps for the global system, which has been finally validated on a variety of video sequences. Additionally, an optimal HW/SW partition for the MPEG-4 decoder has been studied and implemented on a Programmable Logic Device with an embedded ARM9 processor. Simulation results show that a throughput of 15 QCIF frames per second can be achieved with a low-area and low-power implementation.
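The PCA-based texture part of the segmentation phase can be sketched on synthetic 4x4 blocks (the block size, the top-4 component count and the data are all assumptions, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
# Flattened 4x4 blocks of a synthetic frame: 20 smooth blocks, 20 textured.
smooth = np.full((20, 16), 100.0) + rng.normal(0, 0.5, (20, 16))
textured = np.full((20, 16), 100.0) + rng.normal(0, 20.0, (20, 16))
blocks = np.vstack([smooth, textured])

# Project each block onto the top principal components of the block
# population; projection energy serves as a crude per-block texture measure.
X = blocks - blocks.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
energy = np.linalg.norm(X @ Vt[:4].T, axis=1)
```

Blocks with high projection energy would be marked in the texture map and protected from aggressive down-sampling.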
Jersey number detection in sports video for athlete identification
NASA Astrophysics Data System (ADS)
Ye, Qixiang; Huang, Qingming; Jiang, Shuqiang; Liu, Yang; Gao, Wen
2005-07-01
Athlete identification is important for sports video content analysis since users often care about video clips featuring their preferred athletes. In this paper, we propose a method for athlete identification that combines segmentation, tracking and recognition procedures into a coarse-to-fine scheme for jersey number (digit characters on sports shirts) detection. Firstly, image segmentation is employed to separate the jersey number regions from the background, and size/pipe-like attributes of digit characters are used to filter out candidates. Then, a K-NN (K nearest neighbor) classifier is employed to classify a candidate as a digit in "0-9" or as negative. In the recognition procedure, we use Zernike moment features, which are invariant to rotation and scale, for digit shape recognition. Synthetic training samples with different fonts are used to represent the pattern of digit characters with non-rigid deformation. Once a character candidate is detected, an SSD (smallest square distance)-based tracking procedure is started. The recognition procedure is performed every several frames in the tracking process. After tracking tens of frames, the overall recognition results are combined by a voting procedure to determine whether a candidate is a true jersey number. Experiments on several types of sports video show encouraging results.
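The final voting step over tracked frames can be sketched as follows (the 50% support threshold and the simulated per-frame labels are assumptions):

```python
from collections import Counter

def vote_jersey_number(per_frame_labels, min_support=0.5):
    """Majority vote over per-frame recognitions.

    A candidate is accepted as a true jersey number only if its most
    frequent non-negative label wins at least `min_support` of the
    tracked frames; otherwise the track is rejected as a false detection.
    """
    votes = Counter(l for l in per_frame_labels if l != "negative")
    if not votes:
        return None
    label, count = votes.most_common(1)[0]
    return label if count / len(per_frame_labels) >= min_support else None

# Simulated recognitions over 20 tracked frames: mostly "7", some noise.
frames = ["7"] * 14 + ["1"] * 3 + ["negative"] * 3
result = vote_jersey_number(frames)
```

Aggregating over the whole track is what makes single-frame misclassifications largely harmless.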
Motion-seeded object-based attention for dynamic visual imagery
NASA Astrophysics Data System (ADS)
Huber, David J.; Khosla, Deepak; Kim, Kyungnam
2017-05-01
This paper describes a novel system that finds and segments "objects of interest" from dynamic imagery (video) that (1) processes each frame using an advanced motion algorithm that pulls out regions that exhibit anomalous motion, and (2) extracts the boundary of each object of interest using a biologically-inspired segmentation algorithm based on feature contours. The system uses a series of modular, parallel algorithms, which allows many complicated operations to be carried out by the system in a very short time, and can be used as a front-end to a larger system that includes object recognition and scene understanding modules. Using this method, we show 90% accuracy with fewer than 0.1 false positives per frame of video, which represents a significant improvement over detection using a baseline attention algorithm.
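A crude stand-in for the anomalous-motion seeding step (deviation from the frame's median motion; the data and the threshold are invented, not the paper's algorithm):

```python
import numpy as np

# Dense per-pixel motion magnitudes for one frame: global camera drift of
# magnitude 1 everywhere, plus an independently moving object.
motion = np.ones((32, 32))
motion[10:16, 10:16] = 6.0

# "Anomalous" motion = deviation from the dominant (median) motion. The
# region this pulls out would seed the contour-based segmentation stage.
anomaly = np.abs(motion - np.median(motion))
seed_mask = anomaly > 3.0
```

Subtracting the dominant motion is what keeps a panning camera from flooding the attention stage with false seeds.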
Video Measurements: Quantity or Quality
ERIC Educational Resources Information Center
Zajkov, Oliver; Mitrevski, Boce
2012-01-01
Students have problems with understanding, using and interpreting graphs. In order to improve the students' skills for working with graphs, we propose Manual Video Measurement (MVM). In this paper, the MVM method is explained and its accuracy is tested. The comparison with the standardized video data software shows that its accuracy is comparable…
Learning from Video Modeling Examples: Does Gender Matter?
ERIC Educational Resources Information Center
Hoogerheide, Vincent; Loyens, Sofie M. M.; van Gog, Tamara
2016-01-01
Online learning from video modeling examples, in which a human model demonstrates and explains how to perform a learning task, is an effective instructional method that is increasingly used nowadays. However, model characteristics such as gender tend to differ across videos, and the model-observer similarity hypothesis suggests that such…
ERIC Educational Resources Information Center
Chen, Ching-chih
1991-01-01
Describes compact disc interactive (CD-I) as a multimedia home entertainment system that combines audio, visual, text, graphic, and interactive capabilities. Full-screen video and full-motion video (FMV) are explained, hardware for FMV decoding is described, software is briefly discussed, and CD-I titles planned for future production are listed.…
Developing assessment system for wireless capsule endoscopy videos based on event detection
NASA Astrophysics Data System (ADS)
Chen, Ying-ju; Yasen, Wisam; Lee, Jeongkyu; Lee, Dongha; Kim, Yongho
2009-02-01
Along with advances in wireless technology and miniature cameras, Wireless Capsule Endoscopy (WCE), the combination of both, enables a physician to examine a patient's digestive system without performing a surgical procedure. Although WCE is a technical breakthrough that allows physicians to visualize the entire small bowel noninvasively, viewing the video takes 1-2 hours. This is very time consuming for the gastroenterologist: not only does it limit the wide application of this technology, but it also incurs a considerable cost. Therefore, it is important to automate the process so that medical clinicians can focus only on events of interest. As an extension of our previous work characterizing the motility of the digestive tract in WCE videos, we propose a new assessment system for energy-based event detection (EG-EBD) to classify the events in WCE videos. In the system, we first extract general features of a WCE video that can characterize the intestinal contractions in digestive organs. Then, event boundaries are identified using a High Frequency Content (HFC) function. The segments are classified into WCE events by special features. In this system, we focus on entering the duodenum, entering the cecum, and active bleeding. The assessment system can be easily extended to discover more WCE events, such as detailed organ segmentation and more diseases, by introducing new special features. In addition, the system provides a score for every WCE image for each event. Using the event scores, the system helps a specialist speed up the diagnosis process.
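A High Frequency Content function is commonly defined as a bin-index-weighted magnitude sum; a sketch of boundary detection on synthetic feature spectra (the signals and the boundary rule are made up for illustration):

```python
import numpy as np

def hfc(frame_spectrum):
    """High Frequency Content: spectral magnitude weighted by bin index."""
    mags = np.abs(frame_spectrum)
    return np.sum(np.arange(len(mags)) * mags)

# Synthetic per-frame feature spectra: a low-frequency-dominated segment
# followed by a high-frequency burst, as at an event boundary.
n = 64
low = np.zeros(n); low[2] = 1.0
high = np.zeros(n); high[30] = 1.0
series = [low] * 10 + [high] * 10

scores = np.array([hfc(f) for f in series])
# Boundary = largest jump in HFC between consecutive frames.
boundary = int(np.argmax(np.diff(scores))) + 1
```

Because HFC weights high bins more, a sudden burst of fine-scale activity produces a sharp jump that marks the segment boundary.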
Human visual system-based smoking event detection
NASA Astrophysics Data System (ADS)
Odetallah, Amjad D.; Agaian, Sos S.
2012-06-01
Human action (e.g. smoking, eating, and phoning) analysis is an important task in various application domains such as video surveillance, video retrieval, human-computer interaction systems, and so on. Smoke detection is a crucial task in many video surveillance applications and could have a great impact on raising the level of safety of urban areas, public parks, airplanes, hospitals, schools and others. The detection task is challenging since there is no prior knowledge about the object's shape, texture and color. In addition, its visual features will change under different lighting and weather conditions. This paper presents a new scheme for a system that detects human smoking events, or small smoke, in a sequence of images. In the developed system, motion detection and background subtraction are combined with motion-region saving, skin-based image segmentation, and smoke-based image segmentation to capture potential smoke regions, which are further analyzed to decide on the occurrence of smoking events. Experimental results show the effectiveness of the proposed approach. The developed method is also capable of detecting small smoking events and uncertain actions with various cigarette sizes, colors, and shapes.
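The skin-based segmentation component can be sketched with a classic rule-based RGB skin test (one common heuristic, not necessarily the paper's model; the frame is synthetic):

```python
import numpy as np

def skin_mask(rgb):
    """Classic rule-based RGB skin segmentation (a widely used heuristic)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    spread = (np.maximum(np.maximum(r, g), b)
              - np.minimum(np.minimum(r, g), b))
    return ((r > 95) & (g > 40) & (b > 20) & (spread > 15)
            & (np.abs(r - g) > 15) & (r > g) & (r > b))

frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:2, :2] = (200, 120, 90)   # skin-like patch
frame[2:, 2:] = (90, 90, 90)     # grey background
mask = skin_mask(frame)
```

In the full pipeline, such a mask localizes the face/hand region near which small smoke plumes are then searched for.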
Did You Know? Video Series - SEER Cancer Statistics
Videos that explain cancer statistics. Choose from topics including survival, statistics overview, survivorship, disparities, and specific cancer types including breast, lung, colorectal, prostate, melanoma of the skin, and others.
Query by example video based on fuzzy c-means initialized by fixed clustering center
NASA Astrophysics Data System (ADS)
Hou, Sujuan; Zhou, Shangbo; Siddique, Muhammad Abubakar
2012-04-01
Currently, the high complexity of video content has posed the following major challenges for fast retrieval: (1) efficient similarity measurement, and (2) efficient indexing of compact representations. A video-retrieval strategy based on fuzzy c-means (FCM) is presented for querying by example. Initially, the query video is segmented and represented by a set of shots, each of which is represented by a key frame; video processing techniques are then used to find visual cues that represent the key frame. Because the FCM algorithm is sensitive to its initialization, we initialize the cluster centers with the shots of the query video so that appropriate convergence is achieved. After the FCM clusters are initialized by the query video, each shot of the query video is treated as a benchmark point in its cluster, and each shot in the database is assigned a class label. The similarity between a database shot and the benchmark point with the same class label can be transformed into the distance between them. Finally, the similarity between the query video and a video in the database is transformed into the number of similar shots. Our experimental results demonstrate the performance of the proposed approach.
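A compact NumPy sketch of fuzzy c-means with the centers initialized from the query video's shots, following the initialization idea above (the 2-D features, cluster count and data are assumptions):

```python
import numpy as np

def fcm(X, centers, m=2.0, iters=50):
    """Fuzzy c-means with user-supplied initial centers (here standing in
    for the query video's shot features)."""
    C = centers.astype(float).copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
        # u[i, k] = 1 / sum_j (d_ik / d_ij)^(2 / (m - 1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)),
                         axis=2)
        um = u ** m
        C = (um.T @ X) / um.sum(axis=0)[:, None]   # weighted center update
    return u, C

rng = np.random.default_rng(1)
# Database shot features: two clouds; query shots give the initial centers.
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
query_centers = np.array([[0.2, 0.1], [2.9, 3.1]])
u, C = fcm(X, query_centers)
labels = u.argmax(axis=1)   # class label per database shot
```

With the centers seeded by the query shots, each resulting cluster directly corresponds to one query shot, so matching reduces to counting database shots per label.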
Huntsville Area Students Appear in Episode of NASA CONNECT
NASA Technical Reports Server (NTRS)
2003-01-01
Students at Williams Technology Middle School in Huntsville were featured in a new segment of NASA CONNECT, a video series aimed to enhance the teaching of math, science, and technology to middle school students. The segment premiered nationwide May 15, 2003, and helped viewers understand Sir Isaac Newton's first, second, and third laws of gravity and how they relate to NASA's efforts in developing the next generation of space transportation.
Popova, I I; Orlov, O I; Matsnev, E I; Revyakin, Yu G
2016-01-01
The paper reports the results of testing diagnostic video systems enabling digital imaging of the ENT organs, teeth and jaws. The authors substantiate the criteria for choosing and integrating imaging systems into the kit LOR being developed for the Russian segment of the International Space Station, for examination and downloading of high-quality images of cosmonauts' ENT organs, parodontium and teeth.
Arbelle, Assaf; Reyes, Jose; Chen, Jia-Yun; Lahav, Galit; Riklin Raviv, Tammy
2018-04-22
We present a novel computational framework for the analysis of high-throughput microscopy videos of living cells. The proposed framework is generally useful and can be applied to different datasets acquired in a variety of laboratory settings. This is accomplished by tying together two fundamental aspects of cell lineage construction, namely cell segmentation and tracking, via a Bayesian inference of dynamic models. In contrast to most existing approaches, which aim to be general, no assumption of cell shape is made. Spatial, temporal, and cross-sectional variation of the analysed data are accommodated by two key contributions. First, time series analysis is exploited to estimate the temporal cell shape uncertainty in addition to cell trajectory. Second, a fast marching (FM) algorithm is used to integrate the inferred cell properties with the observed image measurements in order to obtain image likelihood for cell segmentation, and association. The proposed approach has been tested on eight different time-lapse microscopy data sets, some of which are high-throughput, demonstrating promising results for the detection, segmentation and association of planar cells. Our results surpass the state of the art for the Fluo-C2DL-MSC data set of the Cell Tracking Challenge (Maška et al., 2014).
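The time-series estimation of a cell trajectory together with its uncertainty can be illustrated with a minimal 1-D Kalman filter (a simple stand-in for the paper's Bayesian dynamic model; all parameters and data are made up):

```python
import numpy as np

def kalman_track(measurements, q=0.1, r=1.0):
    """Minimal 1-D Kalman filter: returns position estimates and their
    posterior variances frame by frame."""
    x, p = measurements[0], 1.0
    estimates, variances = [x], [p]
    for z in measurements[1:]:
        p += q                    # predict: uncertainty grows between frames
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update with the new detection
        p *= (1 - k)              # posterior uncertainty shrinks
        estimates.append(x)
        variances.append(p)
    return np.array(estimates), np.array(variances)

rng = np.random.default_rng(3)
true_pos = np.linspace(0, 10, 50)          # cell drifting across the frame
z = true_pos + rng.normal(0, 1.0, 50)      # noisy per-frame detections
est, var = kalman_track(z)
```

The per-frame variance plays the role of the shape/position uncertainty that the paper feeds into the image-likelihood stage.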
Quantum Electrodynamics: Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lincoln, Don
The Standard Model of particle physics is composed of several theories that are added together. The most precise component theory is the theory of quantum electrodynamics or QED. In this video, Fermilab’s Dr. Don Lincoln explains how theoretical QED calculations can be done. This video links to other videos, giving the viewer a deep understanding of the process.
The Short Life and Ignominious Death of ALA Video and Special Projects.
ERIC Educational Resources Information Center
Handman, Gary
1991-01-01
Discussion of videocassettes in our culture and the function of video collections in libraries focuses on the creation and demise of a unit sponsored by the American Library Association, the ALA Video and Special Projects. The unit's role is discussed and funding decisions that led to its demise are explained. (LRW)
Efficient Lane Boundary Detection with Spatial-Temporal Knowledge Filtering
Nan, Zhixiong; Wei, Ping; Xu, Linhai; Zheng, Nanning
2016-01-01
Lane boundary detection technology has progressed rapidly over the past few decades. However, many challenges that often make lane detection unavailable remain unsolved. In this paper, we propose a spatial-temporal knowledge filtering model to detect lane boundaries in videos. To address the challenges of structure variation, large noise and complex illumination, this model incorporates prior spatial-temporal knowledge with lane appearance features to jointly identify lane boundaries. The model first extracts line segments in video frames. Two novel filters, the Crossing Point Filter (CPF) and the Structure Triangle Filter (STF), are proposed to filter out the noisy line segments. The two filters introduce spatial structure constraints and temporal location constraints into lane detection, which represent the spatial-temporal knowledge about lanes. A straight line or curve model determined by a state machine is used to fit the line segments and finally output the lane boundaries. We collected a challenging realistic traffic scene dataset. The experimental results on this dataset and on another standard dataset demonstrate the strength of our method. The proposed method has been successfully applied to our autonomous experimental vehicle. PMID:27529248
Intuitive color-based visualization of multimedia content as large graphs
NASA Astrophysics Data System (ADS)
Delest, Maylis; Don, Anthony; Benois-Pineau, Jenny
2004-06-01
Data visualization techniques are penetrating various technological areas. In the field of multimedia, such as information search and retrieval in multimedia archives or digital media production and post-production, data visualization methodologies based on large graphs offer an exciting alternative to conventional storyboard visualization. In this paper we develop a new approach to the visualization of multimedia (video) documents based on both large-graph clustering and preliminary video segmentation and indexing.
NASA Astrophysics Data System (ADS)
Al Hadhrami, Tawfik; Wang, Qi; Grecos, Christos
2012-06-01
When natural disasters or other large-scale incidents occur, obtaining accurate and timely information on the developing situation is vital to effective disaster recovery operations. High-quality video streams and high-resolution images, if available in real time, would provide an invaluable source of current situation reports to the incident management team. Meanwhile, a disaster often causes significant damage to the communications infrastructure. Therefore, another essential requirement for disaster management is the ability to rapidly deploy a flexible incident area communication network. Such a network would facilitate the transmission of real-time video streams and still images from the disrupted area to remote command and control locations. In this paper, a comprehensive end-to-end video/image transmission system between an incident area and a remote control centre is proposed and implemented, and its performance is experimentally investigated. In this study, a hybrid multi-segment communication network is designed that seamlessly integrates terrestrial wireless mesh networks (WMNs), distributed wireless visual sensor networks (WSNs), an airborne platform with video camera balloons, and a Digital Video Broadcasting-Satellite (DVB-S) system. By carefully integrating all of these rapidly deployable, interworking and collaborative networking technologies, we can fully exploit the joint benefits provided by WMNs, WSNs, balloon camera networks and DVB-S for real-time video streaming and image delivery in emergency situations among the disaster-hit area, the remote control centre and the rescue teams in the field. The whole proposed system is implemented in a proven simulator. Through extensive simulations, the real-time visual communication performance of this integrated system has been numerically evaluated, providing a more in-depth understanding of how to support high-quality visual communications in such a demanding context.
Changes of cerebral current source by audiovisual erotic stimuli in premature ejaculation patients.
Hyun, Jae-Seog; Kam, Sung-Chul; Kwon, Oh-Young
2008-06-01
Premature ejaculation (PE) is one of the most common forms of male sexual dysfunction. The mechanisms of PE remain poorly understood, despite its high prevalence. To investigate the pathophysiology and causes of PE in the central nervous system, we tried to observe the changes in brain current source distribution by audiovisual induction of sexual arousal. Electroencephalograms were recorded in patients with PE (45.0 +/- 10.3 years old, N = 18) and in controls (45.6 +/- 9.8 years old, N = 18) during four 10-minute segments of resting, watching a music video excerpt, resting, and watching an erotic video excerpt. Five artifact-free 5-second segments were used to obtain cross-spectral low-resolution brain electromagnetic tomography (LORETA) images. Statistical nonparametric maps (SnPM) were obtained to detect the current density changes of six frequency bands between the erotic video session and the music video session in each group. Comparisons were also made between the two groups in the erotic video session. In the SnPM of each spectrum in patients with PE, the current source density of the alpha band was significantly reduced in the right precentral gyrus, the right insula, and both superior parietal lobules (P < 0.01). Comparing the two groups in the erotic video session, the current densities of the beta-2 and -3 bands in the PE group were significantly decreased in the right parahippocampal gyrus and left middle temporal gyrus (P < 0.01). Neuronal activity in the right precentral gyrus, the right insula, both superior parietal lobules, the right parahippocampal gyrus, and the left middle temporal gyrus may be decreased in PE patients upon sexual arousal. Further studies are needed to evaluate the meaning of decreased neuronal activities in PE patients.
Schroeder, Carsten; Chung, Jane M; Mackall, Judith A; Cakulev, Ivan T; Patel, Aaron; Patel, Sunny J; Hoit, Brian D; Sahadevan, Jayakumar
2018-06-14
The aim of the study was to assess the feasibility, safety, and efficacy of transesophageal echocardiography-guided intraoperative left ventricular lead placement via a video-assisted thoracoscopic surgery approach in patients with failed conventional biventricular pacing. Twelve patients who could not have the left ventricular lead placed conventionally underwent epicardial left ventricular lead placement by video-assisted thoracoscopic surgery. Eight patients had previous chest surgery (66%). Operative positioning was a modified far lateral supine exposure with 30-degree bed tilt, allowing for groin and sternal access. To determine the optimal left ventricular location for lead placement, the left ventricular surface was divided arbitrarily into nine segments. These segments were transpericardially paced using a hand-held malleable pacing probe identifying the optimal site verified by transesophageal echocardiography. The pacing leads were screwed into position via a limited pericardiotomy. The video-assisted thoracoscopic surgery approach was successful in all patients. Biventricular pacing was achieved in all patients and all reported symptomatic benefit with reduction in New York Heart Association class from III to I-II (P = 0.016). Baseline ejection fraction was 23 ± 3%; within 1-year follow-up, the ejection fraction increased to 32 ± 10% (P = 0.05). The mean follow-up was 566 days. The median length of hospital stay was 7 days with chest tube removal between postoperative days 2 and 5. In patients who are nonresponders to conventional biventricular pacing, intraoperative left ventricular lead placement using anatomical and functional characteristics via a video-assisted thoracoscopic surgery approach is effective in improving heart failure symptoms. This optimized left ventricular lead placement is feasible and safe. Previous chest surgery is no longer an exclusion criterion for a video-assisted thoracoscopic surgery approach.
Brain activity and desire for internet video game play
Han, Doug Hyun; Bolo, Nicolas; Daniels, Melissa A.; Arenella, Lynn; Lyoo, In Kyoon; Renshaw, Perry F.
2010-01-01
Objective Recent studies have suggested that the brain circuitry mediating cue-induced desire for video games is similar to that elicited by cues related to drugs and alcohol. We hypothesized that desire for internet video games during cue presentation would activate brain regions similar to those that have been linked with craving for drugs or pathological gambling. Methods This study involved the acquisition of diagnostic MRI and fMRI data from 19 healthy male adults (ages 18–23 years) following training and a standardized 10-day period of game play with a specified novel internet video game, “War Rock” (K-network®). Using a videotape consisting of five contiguous 90-second segments of alternating resting, matched control, and video game-related scenes, desire to play the game was assessed using a seven-point visual analogue scale before and after presentation of the videotape. Results In response to internet video game stimuli, compared to neutral control stimuli, significantly greater activity was identified in the left inferior frontal gyrus, left parahippocampal gyrus, right and left parietal lobes, right and left thalamus, and right cerebellum (FDR < 0.05, p < 0.009243). Self-reported desire was positively correlated with the beta values of the left inferior frontal gyrus, left parahippocampal gyrus, and right and left thalamus. Compared to general players, members of the more internet video game play (MIGP) cohort showed significantly greater activity in the right medial frontal lobe, right and left frontal precentral gyri, right parietal postcentral gyrus, right parahippocampal gyrus, and left parietal precuneus. Controlling for total game time, reported desire for the internet video game in the MIGP cohort was positively correlated with activation in the right medial frontal lobe and right parahippocampal gyrus.
Discussion The present findings suggest that cue-induced activation to internet video game stimuli may be similar to that observed during cue presentation in persons with substance dependence or pathological gambling. In particular, cues appear to commonly elicit activity in the dorsolateral prefrontal cortex, orbitofrontal cortex, parahippocampal gyrus, and thalamus. PMID:21220070
Aquatic Toxic Analysis by Monitoring Fish Behavior Using Computer Vision: A Recent Progress
Fu, Longwen; Liu, Zuoyi
2018-01-01
Video-tracking-based biological early warning systems have achieved great progress with advanced computer vision and machine learning methods. The ability to video-track multiple biological organisms has been largely improved in recent years. Video-based behavioral monitoring has become a common tool for acquiring quantified behavioral data for aquatic risk assessment. Investigation of behavioral responses under chemical and environmental stress has been boosted by rapidly developing machine learning and artificial intelligence. In this paper, we introduce the fundamentals of video tracking and present the pioneering work in precise tracking of a group of individuals in 2D and 3D space. Technical and practical issues encountered in video tracking are explained. Subsequently, toxic analysis based on fish behavioral data is summarized. Frequently used computational methods and machine learning are explained with their applications in aquatic toxicity detection and abnormal pattern analysis. Finally, the advantages of the recently developed deep learning approach in toxicity prediction are presented. PMID:29849612
International Space Station (ISS)
2000-12-04
This video still depicts the recently deployed starboard and port solar arrays towering over the International Space Station (ISS). The video was recorded on STS-97's 65th orbit. Delivery, assembly, and activation of the solar arrays were the main mission objective of STS-97. The electrical power system, which is built into a 73-meter (240-foot) long solar array structure, consists of solar arrays, radiators, batteries, and electronics, and will provide the power necessary for the first ISS crews to live and work in the U.S. segment. The entire 15.4-metric ton (17-ton) package is called the P6 Integrated Truss Segment, and it is the heaviest and largest element yet delivered to the station aboard a space shuttle. The STS-97 crew of five launched aboard the Space Shuttle Orbiter Endeavour on November 30, 2000 for an 11-day mission.
Video shot boundary detection using region-growing-based watershed method
NASA Astrophysics Data System (ADS)
Wang, Jinsong; Patel, Nilesh; Grosky, William
2004-10-01
In this paper, a novel shot boundary detection approach is presented, based on the popular region-growing segmentation method, watershed segmentation. In image processing, gray-scale pictures can be considered as topographic reliefs, in which the numerical value of each pixel of a given image represents the elevation at that point. The watershed method segments images by filling up basins with water starting at local minima; at points where water coming from different basins meets, dams are built. In our method, each frame in the video sequence is first transformed from the feature space into the topographic space based on a density function. Low-level features are extracted from frame to frame. Each frame is then treated as a point in the feature space. The density of each point is defined as the sum of the influence functions of all neighboring data points. The height function originally used in watershed segmentation is then replaced by the inverted density at each point. Thus, all the highest density values are transformed into local minima. Subsequently, watershed segmentation is performed in the topographic space. The intuitive idea underlying our method is that frames within a shot are highly agglomerative in the feature space and have a higher possibility of being merged together, while the frames between shots that represent shot changes are not; hence they have lower density values and are less likely to be clustered when the markers are carefully extracted and the stopping criterion carefully chosen.
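The density-to-height construction in this abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the low-level features, influence function, and full watershed flooding are reduced to a toy 1-D setting with made-up frame features, where shot boundaries appear as local minima of the density curve, i.e. local maxima of the inverted height function.

```python
import numpy as np

def frame_density(features, sigma=1.0):
    """Density of each frame = sum of Gaussian influence functions
    of all frames in feature space (the abstract's density function)."""
    # pairwise squared distances between frame feature vectors
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)).sum(axis=1)

def shot_boundaries(features, sigma=1.0):
    """Transition frames are isolated in feature space, so their density
    is low; boundaries are local maxima of the inverted density (height)."""
    height = -frame_density(features, sigma)
    return [i for i in range(1, len(height) - 1)
            if height[i] > height[i - 1] and height[i] > height[i + 1]]

# toy example: two tight "shots" with one transition frame between them
feats = np.array([[0.0], [0.05], [0.1], [2.5], [5.0], [5.05], [5.1]])
print(shot_boundaries(feats, sigma=0.5))  # → [3]
```

In the toy data, frame 3 sits alone between the two clusters, so its density is far lower than its neighbors' and it is flagged as the cut; the paper instead floods the full topographic space with markers rather than scanning for 1-D minima.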
Retinal slit lamp video mosaicking.
De Zanet, Sandro; Rudolph, Tobias; Richa, Rogerio; Tappeiner, Christoph; Sznitman, Raphael
2016-06-01
To this day, the slit lamp remains the first tool used by an ophthalmologist to examine patient eyes. Imaging of the retina, however, poses a variety of problems, namely a shallow depth of focus, reflections from the optical system, a small field of view, and non-uniform illumination. The use of slit lamp images for documentation and analysis thus remains extremely challenging for ophthalmologists due to large image artifacts. For this reason, we propose automatic retinal slit lamp video mosaicking, which enlarges the field of view and reduces the amount of noise and reflections, thus enhancing image quality. Our method is composed of three parts: (i) viable content segmentation, (ii) global registration, and (iii) image blending. Frame content is segmented using gradient boosting with custom pixel-wise features. Speeded-up robust features are used for finding pair-wise translations between frames, with robust random sample consensus estimation and graph-based simultaneous localization and mapping for global bundle adjustment. Foreground-aware blending based on feathering merges video frames into comprehensive mosaics. Foreground is segmented successfully with an area under the receiver operating characteristic curve of 0.9557. Mosaicking results and state-of-the-art methods were compared and rated by ophthalmologists, showing a strong preference for the large field of view provided by our method. The proposed method for global registration of retinal slit lamp images into comprehensive mosaics improves over state-of-the-art methods and is preferred qualitatively.
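Of the three parts listed in the abstract, only the final feathering merge is simple enough to sketch here; the SURF matching, RANSAC estimation, graph-based SLAM, and foreground segmentation are omitted, and the frames, offsets, and weight scheme below are hypothetical toy stand-ins.

```python
import numpy as np

def feather_weights(h, w):
    """Weight mask that falls off linearly toward the frame border,
    the usual construction for feathering-based blending."""
    y = np.minimum(np.arange(h), np.arange(h)[::-1]) + 1
    x = np.minimum(np.arange(w), np.arange(w)[::-1]) + 1
    return np.outer(y, x).astype(float)

def blend_pair(a, b, dx, dy, mosaic_shape):
    """Place frames a and b on a mosaic canvas (b shifted by dx, dy,
    assumed already registered) and merge by weighted averaging."""
    acc = np.zeros(mosaic_shape)
    wacc = np.zeros(mosaic_shape)
    for img, (ox, oy) in ((a, (0, 0)), (b, (dx, dy))):
        h, w = img.shape
        wmask = feather_weights(h, w)
        acc[oy:oy + h, ox:ox + w] += img * wmask
        wacc[oy:oy + h, ox:ox + w] += wmask
    return np.where(wacc > 0, acc / np.maximum(wacc, 1e-9), 0.0)

a = np.full((4, 4), 10.0)   # left frame
b = np.full((4, 4), 20.0)   # right frame, shifted 2 px right
m = blend_pair(a, b, 2, 0, (4, 6))
```

In the overlap columns the mosaic takes a smooth weighted average of the two frames, while non-overlapping regions keep each frame's original values; the paper additionally masks out non-viable (foreground) content before blending.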
Towards a New Theory of Gender Inequities in Labour Market Outcomes of Education.
ERIC Educational Resources Information Center
Quinlan, Liz
Attempts to explain sex-related wage differentials generally rely on the human capital and segmentation labor market theories. The human capital theory explains individuals' position in the labor market primarily in terms of factors determining their productivity, whereas segmentation theory focuses on differences among jobs as determinants of the…
Match Your Hardwood Lumber to Current Market Needs
Robert J. Bush; Steven A. Sinclair; Philip A. Araman
1990-01-01
This article explains how hardwood lumber producers can best market their product. The study included four segments of the market for hardwood lumber. These segments were: furniture, cabinet, dimension and flooring, and molding/millwork manufacturers. The article explains how the study was conducted and the characteristics of companies (i.e., potential customers) that...
The impact of video technology on learning: A cooking skills experiment.
Surgenor, Dawn; Hollywood, Lynsey; Furey, Sinéad; Lavelle, Fiona; McGowan, Laura; Spence, Michelle; Raats, Monique; McCloat, Amanda; Mooney, Elaine; Caraher, Martin; Dean, Moira
2017-07-01
This study examines the role of video technology in the development of cooking skills. The study explored the views of 141 female participants on whether video technology can promote confidence in learning new cooking skills to assist in meal preparation. Prior to each focus group participants took part in a cooking experiment to assess the most effective method of learning for low-skilled cooks across four experimental conditions (recipe card only; recipe card plus video demonstration; recipe card plus video demonstration conducted in segmented stages; and recipe card plus video demonstration whereby participants freely accessed video demonstrations as and when needed). Focus group findings revealed that video technology was perceived to assist learning in the cooking process in the following ways: (1) improved comprehension of the cooking process; (2) real-time reassurance in the cooking process; (3) assisting the acquisition of new cooking skills; and (4) enhancing the enjoyment of the cooking process. These findings display the potential for video technology to promote motivation and confidence as well as enhancing cooking skills among low-skilled individuals wishing to cook from scratch using fresh ingredients. Copyright © 2017 Elsevier Ltd. All rights reserved.
Lithospheric buckling and intra-arc stresses: A mechanism for arc segmentation
NASA Technical Reports Server (NTRS)
Nelson, Kerri L.
1989-01-01
Comparison of segment development in a number of arcs has shown that consistent relationships exist between segmentation, volcanism, and variable stresses. Researchers successfully modeled these relationships using the conceptual model of lithospheric buckling of Yamaoka et al. (1986; 1987). Lithospheric buckling (deformation) provides the mechanism needed to explain segmentation phenomena: offsets in volcanic fronts, distribution of calderas within segments, variable segment stresses, and the chemical diversity seen between segment-boundary and segment-interior magmas.
Logo recognition in video by line profile classification
NASA Astrophysics Data System (ADS)
den Hollander, Richard J. M.; Hanjalic, Alan
2003-12-01
We present an extension to earlier work on recognizing logos in video stills. The logo instances considered here are rigid planar objects observed at a distance in the scene, so the possible perspective transformation can be approximated by an affine transformation. For this reason we can classify the logos by matching (invariant) line profiles. We enhance our previous method by considering multiple line profiles instead of a single profile of the logo. The positions of the lines are based on maxima in the Hough transform space of the segmented logo foreground image. Experiments are performed on MPEG1 sport video sequences to show the performance of the proposed method.
Creating Video Games in a Middle School Language Arts Classroom: A Narrative Account
ERIC Educational Resources Information Center
Oldaker, Adam
2010-01-01
This article describes the author's experience co-facilitating a project for which seventh-grade students designed and created original video games based on Madeleine L'Engle's "A Wrinkle in Time". The author provides an overview of recent literature on video game implementation in the classroom and explains how the project was designed and…
Teaching English Using Video Materials: Design and Delivery of a Practical Course
ERIC Educational Resources Information Center
Lopez-Alvarado, Julio
2017-01-01
In this paper, a practical course for listening, speaking, reading and writing was designed using authentic video material. The aim of this paper is to offer tools to the TEFL teacher in order to design new course materials using video material. The development procedure is explained in detail, and the underpinning main theories are also…
Quantum Electrodynamics: Theory
Lincoln, Don
2018-01-16
The Standard Model of particle physics is composed of several theories that are added together. The most precise component theory is the theory of quantum electrodynamics or QED. In this video, Fermilab's Dr. Don Lincoln explains how theoretical QED calculations can be done. This video links to other videos, giving the viewer a deep understanding of the process.
ERIC Educational Resources Information Center
Herder, P. M.; Subrahmanian, E.; Talukdar, S.; Turk, A. L.; Westerberg, A. W.
2002-01-01
Explains the distance education approach applied to the 'Engineering Design Problem Formulation' course taught simultaneously at the Delft University of Technology (the Netherlands) and at Carnegie Mellon University (CMU, Pittsburgh, USA). Uses videotaped lessons, video conferencing, electronic mail, and the web-accessible document management system LIRE in the…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-30
... demo video showing your application in action. Post videos to video-sharing sites like YouTube. (3... maximum of 10 slides. We strongly recommend you explain how you addressed the evaluation criteria and the... Use--20% Are you able to search for information easily? Is the requested information, both text and...
What Can Video Games Teach Us about Teaching Reading?
ERIC Educational Resources Information Center
Compton-Lilly, Catherine
2007-01-01
James Gee has suggested that video games can teach us important lessons about learning and that we can learn about teaching from these games. Reading research and the words of the author's daughter are the basis of an exploration of the learning principles Gee identifies. He explains that video games are successful in engaging children and…
The Simple Video Coder: A free tool for efficiently coding social video data.
Barto, Daniel; Bird, Clark W; Hamilton, Derek A; Fink, Brandi C
2017-08-01
Videotaping of experimental sessions is a common practice across many disciplines of psychology, ranging from clinical therapy, to developmental science, to animal research. Audio-visual data are a rich source of information that can be easily recorded; however, analysis of the recordings presents a major obstacle to project completion. Coding behavior is time-consuming and often requires ad-hoc training of a student coder. In addition, existing software is either prohibitively expensive or cumbersome, which leaves researchers with inadequate tools to quickly process video data. We offer the Simple Video Coder: free, open-source software for behavior coding that is flexible in accommodating different experimental designs, is intuitive for students to use, and produces outcome measures of event timing, frequency, and duration. Finally, the software also offers extraction tools to splice video into coded segments suitable for training future human coders or for use as input for pattern classification algorithms.
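The outcome measures the abstract names (event timing, frequency, and duration) can be computed from coded segments with a few lines. This is a hypothetical sketch, not the Simple Video Coder's actual API or file format; the tuple layout and code names below are invented for illustration.

```python
from collections import defaultdict

def summarize_codes(events):
    """events: list of (code, start_s, end_s) tuples, the kind of record a
    behavior coder produces. Returns per-code frequency and total duration."""
    summary = defaultdict(lambda: {"frequency": 0, "total_duration": 0.0})
    for code, start, end in events:
        summary[code]["frequency"] += 1
        summary[code]["total_duration"] += end - start
    return dict(summary)

# hypothetical coded session: two "approach" bouts and one "retreat" bout
coded = [("approach", 0.0, 2.5), ("retreat", 2.5, 4.0), ("approach", 4.0, 9.0)]
print(summarize_codes(coded))
```

The timing of each event is already carried by the start/end stamps, so splicing the video into coded segments (the extraction tool the abstract mentions) reduces to cutting the recording at those same boundaries.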
School age test or procedure preparation
... your child with books, bubbles, games, hand-held video games, or other activities. PLAY PREPARATION Children often avoid responding when asked direct questions about their feelings. Some ... from videos that show children of the same age explaining, ...
Adolescent test or procedure preparation
... someone else) during the procedure Playing hand-held video games Using guided imagery Trying other distractions, such as listening to music through headphones, if allowed When possible, let ... from videos that show adolescents of the same age explaining ...
Six characteristics of nutrition education videos that support learning and motivation to learn.
Ramsay, Samantha A; Holyoke, Laura; Branen, Laurel J; Fletcher, Janice
2012-01-01
To identify characteristics in nutrition education video vignettes that support learning and motivation to learn about feeding children. Nine focus group interviews were conducted with child care providers in child care settings from 4 states in the western United States: California, Idaho, Oregon, and Washington. At each focus group interview, 3-8 participants (n = 37) viewed video vignettes and participated in a facilitated focus group discussion that was audiorecorded, transcribed, and analyzed. Primary characteristics of video vignettes child care providers perceived as supporting learning and motivation to learn about feeding young children were identified: (1) use real scenarios; (2) provide short segments; (3) present simple, single messages; (4) convey a skill-in-action; (5) develop the videos so participants can relate to the settings; and (6) support participants' ability to conceptualize the information. These 6 characteristics can be used by nutrition educators in selecting and developing videos in nutrition education. Copyright © 2012 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
Real-time people counting system using a single video camera
NASA Astrophysics Data System (ADS)
Lefloch, Damien; Cheikh, Faouzi A.; Hardeberg, Jon Y.; Gouton, Pierre; Picot-Clemente, Romain
2008-02-01
There is growing interest in video-based solutions for people monitoring and counting in business and security applications. Compared to classic sensor-based solutions, video-based ones allow for more versatile functionality and improved performance at lower cost. In this paper, we propose a real-time system for people counting based on a single low-end non-calibrated video camera. The two main challenges addressed in this paper are: robust estimation of the scene background and of the number of real persons in merge-split scenarios. The latter is likely to occur whenever multiple persons move closely together, e.g. in shopping centers. Several persons may be considered a single person by automatic segmentation algorithms, due to occlusions or shadows, leading to under-counting. Therefore, to account for noise, illumination changes, and static object changes, background subtraction is performed using an adaptive background model (updated over time based on motion information) and automatic thresholding. Furthermore, post-processing of the segmentation results is performed in the HSV color space to remove shadows. Moving objects are tracked using an adaptive Kalman filter, allowing a robust estimation of the objects' future positions even under heavy occlusion. The system is implemented in Matlab and gives encouraging results even at high frame rates. Experimental results obtained on the PETS2006 datasets are presented at the end of the paper.
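The motion-gated adaptive background model described above can be sketched in a few lines. This is a minimal illustration under assumed parameters (a fixed learning rate `alpha` and a fixed threshold), not the paper's method: its automatic thresholding, HSV shadow removal, and Kalman tracking are not shown.

```python
import numpy as np

def foreground_mask(bg, frame, thresh=25.0):
    """Background subtraction with a fixed threshold (the paper instead
    derives its threshold automatically)."""
    return np.abs(frame.astype(float) - bg) > thresh

def update_background(bg, frame, motion_mask, alpha=0.05):
    """Running-average update gated by motion: pixels flagged as moving
    keep the old background estimate; static pixels blend toward the
    current frame."""
    blended = (1 - alpha) * bg + alpha * frame
    return np.where(motion_mask, bg, blended)

# toy 3x3 scene: empty background, one bright "person" pixel appears
bg = np.zeros((3, 3))
frame = np.full((3, 3), 10.0)   # mild global illumination change
frame[1, 1] = 200.0             # moving object
mask = foreground_mask(bg, frame)
bg2 = update_background(bg, frame, mask)
```

Here only the center pixel exceeds the threshold, so it is detected as foreground and excluded from the update, while the slow illumination change is gradually absorbed into the background model.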
NASA Technical Reports Server (NTRS)
1992-01-01
The Geotail mission, part of the International Solar Terrestrial Physics (ISTP) program, measures global energy flow and transformation in the magnetotail to increase understanding of fundamental magnetospheric processes. The satellite was launched on July 24, 1992 onboard a Delta II rocket. This video uses animation to show the solar wind and its effect on the Earth. The narrator explains that the Geotail spacecraft was designed and built by the Institute of Space and Astronautical Science (ISAS), the Japanese space agency. The mission objectives are reviewed by one of the scientists in a live view. The video also shows an animation of the orbit, while the narrator explains the orbit and the reason for the small launch window.
Fernandez-Miranda, Juan C
2018-06-07
The medial temporal lobe can be divided in anterior, middle, and posterior segments. The anterior segment is formed by the uncus and hippocampal head, and it has extra and intraventricular structures. There are 2 main approaches to the uncohippocampal region, the anteromedial temporal lobectomy (Spencer's technique) and the transsylvian selective amygdalohippocampectomy (Yasargil's technique).In this video, we present the case of a 29-yr-old man with new onset of generalized seizures and a contrast-enhancing lesion in the left anterior segment of the medial temporal lobe compatible with high-grade glioma. He had a medical history of cervical astrocytoma at age 8 requiring craniospinal radiation therapy and ventriculoperitoneal shunt placement.The tumor was approached using a combined transsylvian transcisternal and transinferior insular sulcus approach to the extra and intraventricular aspects of the uncohippocampal region. It was resected completely, and the patient was neurologically intact after resection with no further seizures at 6-mo follow-up. The diagnosis was glioblastoma IDH-wild type, for which he underwent adjuvant therapy.Surgical anatomy and technical nuances of this approach are illustrated using a 3-dimensional video and anatomic dissections. The selective approach, when compared to an anteromedial temporal lobectomy, has the advantage of preserving the anterolateral temporal cortex, which is particularly relevant in dominant-hemisphere lesions, and the related fiber tracts, including the inferior fronto-occipital and inferior longitudinal fascicles, and most of the optic radiation fibers. 
The transsylvian approach, however, is technically and anatomically more challenging and potentially carries a higher risk of vascular injury and vasospasm. Page 1 and figures from Fernández-Miranda JC et al, Microvascular Anatomy of the Medial Temporal Region: Part 1: Its Application to Arteriovenous Malformation Surgery, Operative Neurosurgery, 2010, Volume 67, issue 3, ons237-ons276, by permission of the Congress of Neurological Surgeons (1:26-1:37 in video). Page 1 from Fernández-Miranda JC et al, Three-Dimensional Microsurgical and Tractographic Anatomy of the White Matter of the Human Brain, Neurosurgery, 2008, Volume 62, issue suppl_3, SHC989-SHC1028, by permission of the Congress of Neurological Surgeons (1:54-1:56 in video).
Wilson, Michael E; Krupa, Artur; Hinds, Richard F; Litell, John M; Swetz, Keith M; Akhoundi, Abbasali; Kashyap, Rahul; Gajic, Ognjen; Kashani, Kianoush
2015-03-01
To determine if a video depicting cardiopulmonary resuscitation and resuscitation preference options would improve knowledge and decision making among patients and surrogates in the ICU. Randomized, unblinded trial. Single medical ICU. Patients and surrogate decision makers in the ICU. The usual care group received a standard pamphlet about cardiopulmonary resuscitation and cardiopulmonary resuscitation preference options plus routine code status discussions with clinicians. The video group received usual care plus an 8-minute video that depicted cardiopulmonary resuscitation, showed a simulated hospital code, and explained resuscitation preference options. One hundred three patients and surrogates were randomized to usual care. One hundred five patients and surrogates were randomized to video plus usual care. Median total knowledge scores (0-15 points possible for correct answers) in the video group were 13 compared with 10 in the usual care group, p value of less than 0.0001. Video group participants had higher rates of understanding the purpose of cardiopulmonary resuscitation and resuscitation options and terminology and could correctly name components of cardiopulmonary resuscitation. No statistically significant differences in documented resuscitation preferences following the interventions were found between the two groups, although the trial was underpowered to detect such differences. A majority of participants felt that the video was helpful in cardiopulmonary resuscitation decision making (98%) and would recommend the video to others (99%). A video depicting cardiopulmonary resuscitation and explaining resuscitation preference options was associated with improved knowledge of in-hospital cardiopulmonary resuscitation options and cardiopulmonary resuscitation terminology among patients and surrogate decision makers in the ICU, compared with receiving a pamphlet on cardiopulmonary resuscitation. 
Patients and surrogates found the video helpful in decision making and would recommend the video to others.
STS-107 Mission Highlights Resource, Part 4 of 4
NASA Technical Reports Server (NTRS)
2003-01-01
This video, Part 4 of 4, shows the activities of the STS-107 crew during flight days 13 through 15 of the Columbia orbiter's final flight. The crew consists of Commander Rick Husband, Pilot William McCool, Payload Commander Michael Anderson, Mission Specialists David Brown, Kalpana Chawla, and Laurel Clark, and Payload Specialist Ilan Ramon. The highlight of flight day 13 is Kalpana Chawla conversing with Mission Control Center in Houston during troubleshooting of the Combustion Module in a recovery procedure to get the MIST fire suppression experiment back online. Chawla is shown replacing an atomizer head. At Mission Control Center a vase of flowers commemorating the astronauts who died on board Space Shuttle Challenger's final flight is shown and explained. The footage of flight day 14 consists of a tour of Columbia's flight deck, middeck, and Spacehab research module. Rick Husband narrates the tour, which features Kalpana Chawla, Laurel Clark, and himself. The astronauts demonstrate hygiene, a dining tray, the orbiter's toilet, and a space iron, which is a rack for strapping down shirts. The Earth limb is shown with the Spacehab module in the foreground. Clark exercises on a bicycle for a respiration experiment, and demonstrates how a compact disk player gyrates in microgravity. On flight day 15, the combustion module is running again, and footage is shown of the Water Mist Fire-Suppression Experiment (Mist) in operation. Laurel Clark narrates a segment of the video in which Ilan Ramon exercises on a bicycle, Rick Husband, Kalpana Chawla, and Ramon demonstrate spinning and push-ups in the Spacehab module, and Clark demonstrates eating from a couple of food packets. The video ends with a shot of the Earth limb reflected on the radiator on the inside of Columbia's open payload bay door with the Earth in the background.
Li, Yixian; Qi, Lehua; Song, Yongshan; Chao, Xujiang
2017-06-01
The components of carbon/carbon (C/C) composites have a significant influence on the thermal and mechanical properties, so a quantitative characterization of the components is necessary to study the microstructure of C/C composites and, further, to improve their macroscopic properties. Because the extinction crosses of the pyrocarbon matrix show distinctive motion features, polarized light microscope (PLM) video is used to characterize C/C composites quantitatively, since it contains sufficient dynamic and structural information. The optical flow method is then introduced to compute the optical flow field between adjacent frames and to segment the components of C/C composites from the PLM images by image processing. The matrix regions with different textures are re-segmented by the length difference of the motion vectors, and then the component fraction of each component and the extinction angle of the pyrocarbon matrix are calculated directly. Finally, the C/C composites are successfully characterized in terms of carbon fiber, pyrocarbon, and pores by a series of image processing operators based on PLM video, with component fraction errors of less than 15%. © 2017 Wiley Periodicals, Inc.
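The core step above, estimating motion between adjacent PLM frames, can be illustrated with a toy block-matching motion estimator. This is a minimal sketch, not the dense optical flow method of the paper; the frame data, block size, and search radius below are made-up illustrations.

```python
# Toy block-matching motion estimation between two frames (hypothetical data).
# A real optical-flow implementation would compute a dense per-pixel field.
def block_match(prev, curr, y, x, size=3, radius=2):
    """Find the displacement (dy, dx) of the size x size block at (y, x) in
    `prev` that best matches `curr`, by sum of absolute differences (SAD)."""
    def sad(dy, dx):
        return sum(abs(prev[y + i][x + j] - curr[y + dy + i][x + dx + j])
                   for i in range(size) for j in range(size))
    best, best_cost = (0, 0), sad(0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # stay inside the frame bounds
            if (0 <= y + dy and y + dy + size <= len(curr)
                    and 0 <= x + dx and x + dx + size <= len(curr[0])):
                cost = sad(dy, dx)
                if cost < best_cost:
                    best_cost, best = cost, (dy, dx)
    return best
```

The length of the returned motion vector is the quantity the paper uses to separate matrix textures.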
Robust vehicle detection in different weather conditions: Using MIPM
Menéndez, José Manuel; Jiménez, David
2018-01-01
Intelligent Transportation Systems (ITS) allow us to obtain high quality traffic information and reduce the risk of potentially critical situations. Conventional image-based traffic detection methods have difficulty acquiring good images due to perspective distortion, background noise, and poor lighting and weather conditions. In this paper, we propose a new method to accurately segment and track vehicles. After removing perspective using Modified Inverse Perspective Mapping (MIPM), the Hough transform is applied to extract road lines and lanes. Gaussian Mixture Models (GMM) are then used to segment moving objects, and a chromaticity-based strategy is applied to tackle car shadow effects. Finally, performance is evaluated on three different video benchmarks: our own videos recorded in Madrid and Tehran (with different weather conditions in urban and interurban areas), and two well-known public datasets (KITTI and DETRAC). Our results indicate that the proposed algorithms are robust and more accurate than others, especially when facing occlusions, lighting variations, and adverse weather conditions. PMID:29513664
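The GMM foreground segmentation step can be sketched with a heavily simplified stand-in: a single Gaussian per pixel instead of a mixture, updated with a fixed learning rate. All parameter values are assumptions for illustration only.

```python
# Simplified per-pixel background model in the spirit of GMM background
# subtraction (assumption: one Gaussian per pixel, not a mixture).
class RunningGaussianBG:
    def __init__(self, alpha=0.05, k=2.5):
        self.alpha = alpha   # learning rate for the background statistics
        self.k = k           # foreground threshold, in standard deviations
        self.mean = None
        self.var = None

    def apply(self, frame):
        """Return a binary foreground mask and update the background model."""
        if self.mean is None:   # bootstrap from the first frame
            self.mean = [[float(v) for v in row] for row in frame]
            self.var = [[25.0] * len(row) for row in frame]
            return [[0] * len(row) for row in frame]
        fg = []
        for i, row in enumerate(frame):
            out = []
            for j, v in enumerate(row):
                m, var = self.mean[i][j], self.var[i][j]
                d = v - m
                is_fg = d * d > (self.k ** 2) * var
                out.append(1 if is_fg else 0)
                if not is_fg:   # only background pixels update the model
                    self.mean[i][j] = m + self.alpha * d
                    self.var[i][j] = var + self.alpha * (d * d - var)
            fg.append(out)
        return fg
```

A mixture model adds several such Gaussians per pixel with weights, which is what lets GMM handle multimodal backgrounds (e.g., swaying trees).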
Real time markerless motion tracking using linked kinematic chains
Luck, Jason P [Arvada, CO]; Small, Daniel E [Albuquerque, NM]
2007-08-14
A markerless method is described for tracking the motion of subjects in a three dimensional environment using a model based on linked kinematic chains. The invention is suitable for tracking robotic, animal or human subjects in real-time using a single computer with inexpensive video equipment, and does not require the use of markers or specialized clothing. A simple model of rigid linked segments is constructed of the subject and tracked using three dimensional volumetric data collected by a multiple camera video imaging system. A physics based method is then used to compute forces to align the model with subsequent volumetric data sets in real-time. The method is able to handle occlusion of segments and accommodates joint limits, velocity constraints, and collision constraints and provides for error recovery. The method further provides for elimination of singularities in Jacobian based calculations, which has been problematic in alternative methods.
Shor, Eran; Seida, Kimberly
2018-04-18
It is a common notion among many scholars and pundits that the pornography industry becomes "harder and harder" with every passing year. Some have suggested that porn viewers, who are mostly men, become desensitized to "soft" pornography, and producers are happy to generate videos that are more hard core, resulting in a growing demand for and supply of violent and degrading acts against women in mainstream pornographic videos. We examined this accepted wisdom by utilizing a sample of 269 popular videos uploaded to PornHub over the past decade. More specifically, we tested two related claims: (1) aggressive content in videos is on the rise and (2) viewers prefer such content, reflected in both the number of views and the rankings for videos containing aggression. Our results offer no support for these contentions. First, we did not find any consistent uptick in aggressive content over the past decade; in fact, the average video today contains shorter segments showing aggression. Second, videos containing aggressive acts are both less likely to receive views and less likely to be ranked favorably by viewers, who prefer videos where women clearly perform pleasure.
Using "Making Sense of Climate Science Denial" MOOC videos in a college course
NASA Astrophysics Data System (ADS)
Schuenemann, K. C.; Cook, J.
2015-12-01
The Massive Open Online Course (MOOC) "Denial101x: Making Sense of Climate Science Denial" teaches students to make sense of the science and respond to climate change denial. The course is made up of a series of short, myth-debunking lecture videos that can be strategically used in college courses. The videos and the visuals within have proven a great resource for an introductory college-level climate change course. Methods for using the videos in both online and in-classroom courses will be presented, as well as student reactions and learning from the videos. The videos introduce and explain a climate science topic, paraphrase a common climate change myth, explain why the myth is wrong by identifying the characteristic of climate denial used, and conclude by reinforcing the correct science. By focusing on common myths, the MOOC has created an archive of videos that can be used by anyone in need of a 5-minute response to debunk a myth. By highlighting five characteristics of climate denial: fake experts, logical fallacies, impossible expectations, cherry picking, and conspiracy theories (FLICC), the videos also teach viewers the skills they need to critically examine myths they may encounter in the real world on a variety of topics. The videos also include a series of expert scientist interviews that can be used to drive home points, as well as put some faces to the science. These videos are freely available outside of the MOOC and can be found under the relevant "Most used climate myths" section on the skepticalscience.com webpage, as well as directly on YouTube. Discover ideas for using videos in future courses, regardless of discipline.
Segmentation: Slicing the Urban Pie.
ERIC Educational Resources Information Center
Keim, William A.
1981-01-01
Explains market segmentation and defines undifferentiated, concentrated, and differentiated marketing strategies. Describes in detail the marketing planning process at the Metropolitan Community Colleges. Focuses on the development and implementation of an ongoing recruitment program designed for the market segment composed of business employees.…
Identification of GHB and morphine in hair in a case of drug-facilitated sexual assault.
Rossi, Riccardo; Lancia, Massimo; Gambelunghe, Cristiana; Oliva, Antonio; Fucci, Nadia
2009-04-15
The authors present the case of a 24-year-old woman who was sexually assaulted after administration of gamma-hydroxybutyrate (GHB) and morphine. She had been living in an international college for foreign students for about 1 year and often complained of a general unhealthy feeling in the morning. At the end of the college period she returned to Italy and received at home some video clips shot with a mobile phone camera. In these videos she was having sex with a boy she had met while studying abroad. Toxicological analysis of her hair, which was 20 cm long, was performed: the full length of the hair was divided into 2-3-cm segments. Morphine and GHB were detected in the hair segments corresponding to the period of time she was abroad. The analyses of the hair segments were performed by gas chromatography/mass spectrometry (GC/MS), and the concentrations of morphine and GHB were calculated. A higher value of GHB was found in the period associated with the possible criminal activity and was also associated with the presence of morphine in the same period.
Kinematics of the field hockey penalty corner push-in.
Kerr, Rebecca; Ness, Kevin
2006-01-01
The aims of the study were to determine those variables that significantly affect push-in execution and thereby formulate coaching recommendations specific to the push-in. Two 50 Hz video cameras recorded transverse and longitudinal views of push-in trials performed by eight experienced and nine inexperienced male push-in performers. Video footage was digitized for data analysis of ball speed, stance width, drag distance, drag time, drag speed, centre of mass displacement, and segment and stick displacements and velocities. Experienced push-in performers demonstrated a significantly greater (p < 0.05) stance width, a significantly greater distance between the ball and the front foot at the start of the push-in and a significantly faster ball speed than inexperienced performers. In addition, the experienced performers showed a significant positive correlation between ball speed and playing experience and tended to adopt a combination of simultaneous and sequential segment rotation to achieve accuracy and fast ball speed. The study yielded the following coaching recommendations for enhanced push-in performance: maximize drag distance by maximizing front foot-ball distance at the start of the push-in; use a combination of simultaneous and sequential segment rotations to optimise both accuracy and ball speed and maximize drag speed.
NASA Astrophysics Data System (ADS)
Jiang, Yang; Gong, Yuanzheng; Wang, Thomas D.; Seibel, Eric J.
2017-02-01
Multimodal endoscopy, with fluorescence-labeled probes binding to overexpressed molecular targets, is a promising technology to visualize early-stage cancer. The target-to-background (T/B) ratio is the quantitative measure used to correlate fluorescence regions with cancer. Currently, T/B ratio calculation is performed in post-processing and does not provide real-time feedback to the endoscopist. To achieve real-time computer-assisted diagnosis (CAD), we establish image processing protocols for calculating the T/B ratio and locating high-risk fluorescence regions to guide biopsy and therapy in Barrett's esophagus (BE) patients. Methods: The Chan-Vese algorithm, an active contour model, is used to segment high-risk regions in fluorescence videos. A semi-implicit gradient descent method was applied to minimize the energy function of this algorithm and evolve the segmentation. The surrounding background was then identified using morphology operations. The average T/B ratio was computed and regions of interest were highlighted based on user-selected thresholding. Evaluation was conducted on 50 fluorescence videos acquired from clinical video recordings using a custom multimodal endoscope. Results: With a processing speed of 2 fps on a laptop computer, we obtained accurate segmentation of high-risk regions as examined by experts. For each case, the clinical user could optimize the target boundary by changing the penalty on the area inside the contour. Conclusion: An automatic, real-time procedure for calculating the T/B ratio and identifying high-risk regions of early esophageal cancer was developed. Future work will increase the processing speed beyond 5 fps, refine the clinical interface, and extend the approach to additional GI cancers and fluorescence peptides.
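Once a target mask is available (here, the Chan-Vese output), the T/B computation itself is simple: grow the mask by a few pixels to define a surrounding background ring, then divide the mean intensities. The sketch below assumes the segmentation is already done and uses a hand-rolled 4-neighbour dilation; image values and ring width are illustrative.

```python
# T/B ratio from a fluorescence image and a binary target mask (a sketch;
# the paper derives the mask with Chan-Vese and morphology operations).
def dilate(mask, iterations=1):
    """4-neighbour binary dilation on a 2-D list of 0/1 values."""
    h, w = len(mask), len(mask[0])
    for _ in range(iterations):
        out = [row[:] for row in mask]
        for i in range(h):
            for j in range(w):
                if mask[i][j]:
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        if 0 <= i + di < h and 0 <= j + dj < w:
                            out[i + di][j + dj] = 1
        mask = out
    return mask

def tb_ratio(image, target, ring=2):
    """Mean target intensity divided by mean intensity of a background
    ring obtained by dilating the target mask `ring` times."""
    grown = dilate(target, ring)
    h, w = len(image), len(image[0])
    t_vals = [image[i][j] for i in range(h) for j in range(w) if target[i][j]]
    b_vals = [image[i][j] for i in range(h) for j in range(w)
              if grown[i][j] and not target[i][j]]
    return (sum(t_vals) / len(t_vals)) / (sum(b_vals) / len(b_vals))
```

Thresholding this ratio per region is what drives the highlighted regions of interest in the clinical display.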
Microsurgical Clipping of an Unruptured Carotid Cave Aneurysm: 3-Dimensional Operative Video.
Tabani, Halima; Yousef, Sonia; Burkhardt, Jan-Karl; Gandhi, Sirin; Benet, Arnau; Lawton, Michael T
2017-08-01
Most aneurysms originating from the clinoidal segment of the internal carotid artery (ICA) are nowadays managed conservatively or treated endovascularly with coiling (with or without stenting) or flow diverters. However, microsurgical clip occlusion remains an alternative. This video demonstrates clip occlusion of an unruptured right carotid cave aneurysm measuring 7 mm in a 39-year-old woman. The patient opted for surgery because of concerns about prolonged antiplatelet use associated with endovascular therapy. After patient consent, a standard pterional craniotomy was performed followed by extradural anterior clinoidectomy. After dural opening and sylvian fissure split, a clinoidal flap was opened to enter the extradural space around the clinoidal segment. The dural ring was dissected circumferentially, freeing the medial wall of the ICA down to the sellar region and mobilizing the ICA out of its canal of the clinoidal segment. With the aneurysm neck in view, the aneurysm was clipped with a 45° angled fenestrated clip over the ICA. Indocyanine green angiography confirmed no further filling of the aneurysm and patency of the ICA. Complete aneurysm occlusion was confirmed with postoperative angiography, and the patient had no neurologic deficits (Video 1). This case demonstrates the importance of anterior clinoidectomy and thorough distal dural ring dissection for effective clipping of carotid cave aneurysms. Control of venous bleeding from the cavernous sinus with fibrin glue injection simplifies the dissection, which should minimize manipulation of the optic nerve. Knowledge of this anatomy and proficiency with these techniques are important in an era of declining open aneurysm cases. Copyright © 2017 Elsevier Inc. All rights reserved.
Geographic Video 3d Data Model And Retrieval
NASA Astrophysics Data System (ADS)
Han, Z.; Cui, C.; Kong, Y.; Wu, H.
2014-04-01
Geographic video includes both spatial and temporal geographic features acquired through ground-based or non-ground-based cameras. With the popularity of video capture devices such as smartphones, the volume of user-generated geographic video clips has grown significantly and the trend of this growth is quickly accelerating. Such a massive and increasing volume poses a major challenge to efficient video management and query. Most of today's video management and query techniques are based on signal-level content extraction and are not able to fully utilize the geographic information of the videos. This paper introduces a geographic video 3D data model based on spatial information. The main idea of the model is to utilize the location, trajectory, and azimuth information acquired by sensors such as GPS receivers and 3D electronic compasses in conjunction with the video contents. The raw spatial information is synthesized into point, line, polygon, and solid geometries according to camcorder parameters such as focal length and angle of view. For video segments and video frames, we defined three categories of geometry objects using the geometry model of the OGC Simple Features Specification for SQL. Video can then be queried by computing the spatial relations between query objects and these geometry objects, such as VFLocation, VSTrajectory, VSFOView, and VFFovCone. We designed the query methods in detail using the structured query language (SQL). The experiments indicate that this is a multi-objective, integrated, loosely coupled, flexible, and extensible data model for the management of geographic stereo video.
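One spatial relation the model relies on, whether a query location falls inside a frame's field of view (in the spirit of the VFFovCone object), can be sketched in 2-D by approximating the view cone as a triangle. The function names and the north-referenced azimuth convention below are assumptions for illustration, not the paper's actual schema.

```python
import math

def fov_triangle(cam, azimuth_deg, angle_deg, depth):
    """Approximate a camera's 2-D field of view as a triangle: the camera
    position plus the two rays bounding the view angle, truncated at `depth`.
    Azimuth is measured clockwise from north (+y) — an assumed convention."""
    half = math.radians(angle_deg) / 2.0
    az = math.radians(azimuth_deg)
    left = (cam[0] + depth * math.sin(az - half), cam[1] + depth * math.cos(az - half))
    right = (cam[0] + depth * math.sin(az + half), cam[1] + depth * math.cos(az + half))
    return [cam, left, right]

def contains(tri, p):
    """Point-in-triangle test via the signs of three cross products."""
    def cross(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    d1 = cross(tri[0], tri[1], p)
    d2 = cross(tri[1], tri[2], p)
    d3 = cross(tri[2], tri[0], p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)
```

In a spatial database the same predicate would be expressed with a standard containment function over the stored FOV geometry rather than computed by hand.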
Remote Video Monitor of Vehicles in Cooperative Information Platform
NASA Astrophysics Data System (ADS)
Qin, Guofeng; Wang, Xiaoguo; Wang, Li; Li, Yang; Li, Qiyan
Detection of vehicles plays an important role in the area of modern intelligent traffic management, and pattern recognition is a hot issue in the area of computer vision. An auto-recognition system in a cooperative information platform is studied. In the cooperative platform, 3G wireless networks, including GPS, GPRS (CDMA), Internet (Intranet), remote video monitoring, and M-DMB networks are integrated. The remote video information can be taken from the terminals and sent to the cooperative platform, then detected by the auto-recognition system. The images are pretreated and segmented, including feature extraction, template matching, and pattern recognition. The system identifies different models and gets vehicular traffic statistics. Finally, the implementation of the system is introduced.
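The template matching step in such a pipeline is commonly implemented as normalized cross-correlation (NCC) between a template and every image window. The sketch below is a minimal, unoptimized version of that idea on 2-D lists; the image and template contents are made up.

```python
import math

def ncc(patch, tmpl):
    """Normalized cross-correlation between two equal-length flattened patches."""
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(tmpl) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, tmpl))
    den = math.sqrt(sum((p - mp) ** 2 for p in patch)
                    * sum((t - mt) ** 2 for t in tmpl))
    return num / den if den else 0.0

def match_template(image, tmpl):
    """Slide `tmpl` over `image`; return the top-left position with highest NCC."""
    th, tw = len(tmpl), len(tmpl[0])
    flat_t = [v for row in tmpl for v in row]
    best, best_pos = -2.0, (0, 0)
    for y in range(len(image) - th + 1):
        for x in range(len(image[0]) - tw + 1):
            patch = [image[y + i][x + j] for i in range(th) for j in range(tw)]
            score = ncc(patch, flat_t)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos
```

NCC is preferred over plain correlation here because the normalization makes the match score insensitive to global brightness changes between camera feeds.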
ERIC Educational Resources Information Center
Lawrence, Michael A.
1985-01-01
"Narrowcasting" is information and entertainment aimed at specific population segments, including previously ignored minorities. Cable, satellite, videodisc, low-power television, and video cassette recorders may all help keep minorities from being "information poor." These elements, however, are expensive, and study is needed to understand how…
Assessment of YouTube videos as a source of information on medication use in pregnancy.
Hansen, Craig; Interrante, Julia D; Ailes, Elizabeth C; Frey, Meghan T; Broussard, Cheryl S; Godoshian, Valerie J; Lewis, Courtney; Polen, Kara N D; Garcia, Amanda P; Gilboa, Suzanne M
2016-01-01
When making decisions about medication use in pregnancy, women consult many information sources, including the Internet. The aim of this study was to assess the content of publicly accessible YouTube videos that discuss medication use in pregnancy. Using 2023 distinct combinations of search terms related to medications and pregnancy, we extracted metadata from YouTube videos using a YouTube video Application Programming Interface. Relevant videos were defined as those with a medication search term and a pregnancy-related search term in either the video title or description. We viewed relevant videos and abstracted content from each video into a database. We documented whether videos implied each medication to be "safe" or "unsafe" in pregnancy and compared that assessment with the medication's Teratogen Information System (TERIS) rating. After viewing 651 videos, 314 videos with information about medication use in pregnancy were available for the final analyses. The majority of videos were from law firms (67%), television segments (10%), or physicians (8%). Selective serotonin reuptake inhibitors (SSRIs) were the most common medication class named (225 videos, 72%), and 88% of videos about SSRIs indicated that they were unsafe for use in pregnancy. However, the TERIS ratings for medication products in this class range from "unlikely" to "minimal" teratogenic risk. For the majority of medications, current YouTube video content does not adequately reflect what is known about the safety of their use in pregnancy and should be interpreted cautiously. However, YouTube could serve as a platform for communicating evidence-based medication safety information. Copyright © 2015 John Wiley & Sons, Ltd.
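The study's inclusion criterion, a medication search term and a pregnancy-related search term both appearing in the video title or description, is straightforward to express in code. The term sets below are small illustrative stand-ins, not the study's actual 2023 search-term combinations.

```python
# Relevance filter mirroring the abstract's inclusion criterion
# (term lists are hypothetical examples, not the study's full lists).
MEDICATION_TERMS = {"sertraline", "fluoxetine", "paroxetine", "ssri"}
PREGNANCY_TERMS = {"pregnancy", "pregnant", "prenatal"}

def is_relevant(title, description):
    """A video is 'relevant' if at least one medication term AND one
    pregnancy-related term appear in its title or description."""
    text = f"{title} {description}".lower()
    has_med = any(term in text for term in MEDICATION_TERMS)
    has_preg = any(term in text for term in PREGNANCY_TERMS)
    return has_med and has_preg
```

In the study this filter would run over metadata pulled from the YouTube API before any manual review of video content.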
Spatio-Temporal Video Segmentation with Shape Growth or Shrinkage Constraint
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Charpiat, Guillaume; Brucker, Ludovic; Menze, Bjoern H.
2014-01-01
We propose a new method for joint segmentation of monotonically growing or shrinking shapes in a time sequence of noisy images. The task of segmenting the image time series is expressed as an optimization problem using the spatio-temporal graph of pixels, in which we are able to impose the constraint of shape growth or shrinkage by introducing monodirectional infinite links connecting pixels at the same spatial locations in successive image frames. The globally optimal solution is computed with a graph cut. The performance of the proposed method is validated on three applications: segmentation of melting sea ice floes and of growing burned areas from time series of 2D satellite images, and segmentation of a growing brain tumor from sequences of 3D medical scans. In the latter application, we impose an additional inter-sequence inclusion constraint by adding directed infinite links between pixels of dependent image structures.
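The effect of the growth constraint can be illustrated with a much simpler surrogate: once a pixel enters the shape, it never leaves. The paper achieves this inside a globally optimal graph cut via infinite directed links from frame t to t+1; the sketch below instead applies a cumulative union to already-computed per-frame masks, which enforces monotone growth but is not globally optimal.

```python
# Post-hoc surrogate for the shape-growth constraint (NOT the paper's
# graph-cut formulation): project noisy per-frame masks onto a
# monotonically growing sequence by cumulative union.
def enforce_growth(masks):
    """masks: list of 2-D 0/1 lists, one per frame, in temporal order.
    Returns masks where every pixel, once foreground, stays foreground."""
    out = []
    acc = [[0] * len(masks[0][0]) for _ in masks[0]]
    for mask in masks:
        acc = [[1 if (a or b) else 0 for a, b in zip(row_a, row_b)]
               for row_a, row_b in zip(acc, mask)]
        out.append([row[:] for row in acc])
    return out
```

In the graph-cut version, the same monotonicity emerges from the optimization itself, so noisy detections can also be *removed* when that lowers the global energy — something this greedy union cannot do.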
Shuttle Lesson Learned - Toxicology
NASA Technical Reports Server (NTRS)
James, John T.
2010-01-01
This is a script for a video about toxicology and the space shuttle. The first segment deals with dust in the space vehicle. The next segment will be about archival samples. Then we'll look at real-time on-board analyzers that give us a lot of capability in terms of monitoring for combustion products and the ability to monitor volatile organics on the station. Finally we will look at other issues that are about setting limits and dealing with ground-based lessons that pertain to toxicology.
Adaptive maritime video surveillance
NASA Astrophysics Data System (ADS)
Gupta, Kalyan Moy; Aha, David W.; Hartley, Ralph; Moore, Philip G.
2009-05-01
Maritime assets such as ports, harbors, and vessels are vulnerable to a variety of near-shore threats such as small-boat attacks. Currently, such vulnerabilities are addressed predominantly by watchstanders and manual video surveillance, which is manpower intensive. Automatic maritime video surveillance techniques are being introduced to reduce manpower costs, but they have limited functionality and performance. For example, they only detect simple events such as perimeter breaches and cannot predict emerging threats. They also generate too many false alerts and cannot explain their reasoning. To overcome these limitations, we are developing the Maritime Activity Analysis Workbench (MAAW), which will be a mixed-initiative real-time maritime video surveillance tool that uses an integrated supervised machine learning approach to label independent and coordinated maritime activities. It uses the same information to predict anomalous behavior and explain its reasoning; this is an important capability for watchstander training and for collecting performance feedback. In this paper, we describe MAAW's functional architecture, which includes the following pipeline of components: (1) a video acquisition and preprocessing component that detects and tracks vessels in video images, (2) a vessel categorization and activity labeling component that uses standard and relational supervised machine learning methods to label maritime activities, and (3) an ontology-guided vessel and maritime activity annotator to enable subject matter experts (e.g., watchstanders) to provide feedback and supervision to the system. We report our findings from a preliminary system evaluation on river traffic video.
A spatiotemporal decomposition strategy for personal home video management
NASA Astrophysics Data System (ADS)
Yi, Haoran; Kozintsev, Igor; Polito, Marzia; Wu, Yi; Bouguet, Jean-Yves; Nefian, Ara; Dulong, Carole
2007-01-01
With the advent and proliferation of low cost and high performance digital video recorder devices, an increasing number of personal home video clips are recorded and stored by consumers. Compared to image data, video data is larger in size and richer in multimedia content. Efficient access to video content is expected to be more challenging than image mining. Previously, we have developed a content-based image retrieval system and the benchmarking framework for personal images. In this paper, we extend our personal image retrieval system to include personal home video clips. A possible initial solution to video mining is to represent video clips by a set of key frames extracted from them, thus converting the problem into an image search one. Here we report that a careful selection of key frames may improve the retrieval accuracy. However, because video also has a temporal dimension, its key frame representation is inherently limited. The use of temporal information can give us a better representation of video content at semantic object and concept levels than image-only based representation. In this paper we propose a bottom-up framework to combine interest point tracking, image segmentation and motion-shape factorization to decompose the video into spatiotemporal regions. We show an example application of activity concept detection using the trajectories extracted from the spatio-temporal regions. The proposed approach shows good potential for concise representation and indexing of objects and their motion in real-life consumer video.
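The baseline the paper starts from, representing a clip by key frames, can be sketched with a greedy rule: keep a frame whenever it differs enough from the last kept frame. The difference measure and threshold below are illustrative assumptions.

```python
# Greedy key-frame selection sketch: keep a frame when its mean absolute
# difference (MAD) from the last key frame exceeds a threshold.
def select_key_frames(frames, threshold):
    """frames: list of 2-D lists of pixel intensities, in temporal order.
    Returns the indices of the selected key frames (frame 0 always kept)."""
    def mad(a, b):
        n = sum(len(row) for row in a)
        return sum(abs(x - y)
                   for row_a, row_b in zip(a, b)
                   for x, y in zip(row_a, row_b)) / n
    keys = [0]
    for idx in range(1, len(frames)):
        if mad(frames[idx], frames[keys[-1]]) > threshold:
            keys.append(idx)
    return keys
```

The paper's point is precisely that such image-only summaries discard the temporal dimension, motivating the spatiotemporal decomposition it proposes instead.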
Motion video analysis using planar parallax
NASA Astrophysics Data System (ADS)
Sawhney, Harpreet S.
1994-04-01
Motion and structure analysis in video sequences can lead to efficient descriptions of objects and their motions. Interesting events in videos can be detected using such an analysis--for instance independent object motion when the camera itself is moving, figure-ground segregation based on the saliency of a structure compared to its surroundings. In this paper we present a method for 3D motion and structure analysis that uses a planar surface in the environment as a reference coordinate system to describe a video sequence. The motion in the video sequence is described as the motion of the reference plane, and the parallax motion of all the non-planar components of the scene. It is shown how this method simplifies the otherwise hard general 3D motion analysis problem. In addition, a natural coordinate system in the environment is used to describe the scene which can simplify motion based segmentation. This work is a part of an ongoing effort in our group towards video annotation and analysis for indexing and retrieval. Results from a demonstration system being developed are presented.
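The planar-parallax idea can be made concrete with a small numerical sketch: warp frame-1 points by the reference plane's homography and measure the leftover displacement. Points on the plane have (near-)zero residual; off-plane structure or independently moving objects leave a nonzero parallax residual. The homography and point data below are made up for illustration.

```python
# Planar-parallax residual sketch (hypothetical homography and matches).
def apply_h(H, p):
    """Apply a 3x3 homography (list of lists) to a 2-D point."""
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def residual_parallax(H, matches):
    """For each (p1, p2) correspondence between two frames, warp p1 by the
    reference-plane homography H and return its distance to p2. Near-zero
    residual => point lies on the plane; large residual => off-plane
    structure or independent motion."""
    residuals = []
    for p1, p2 in matches:
        wx, wy = apply_h(H, p1)
        residuals.append(((p2[0] - wx) ** 2 + (p2[1] - wy) ** 2) ** 0.5)
    return residuals
```

Thresholding these residuals is one simple way to obtain the figure-ground segregation the abstract mentions.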
Development of a video-delivered relaxation treatment of late-life anxiety for veterans.
Gould, Christine E; Zapata, Aimee Marie L; Bruce, Janine; Bereknyei Merrell, Sylvia; Wetherell, Julie Loebach; O'Hara, Ruth; Kuhn, Eric; Goldstein, Mary K; Beaudreau, Sherry A
2017-10-01
Behavioral treatments reduce anxiety, yet many older adults may not have access to these efficacious treatments. To address this need, we developed and evaluated the feasibility and acceptability of a video-delivered anxiety treatment for older Veterans. This treatment program, BREATHE (Breathing, Relaxation, and Education for Anxiety Treatment in the Home Environment), combines psychoeducation, diaphragmatic breathing, and progressive muscle relaxation training with engagement in activities. A mixed methods concurrent study design was used to examine the clarity of the treatment videos. We conducted semi-structured interviews with 20 Veterans (M age = 69.5, SD = 7.3 years; 55% White, Non-Hispanic) and collected ratings of video clarity. Quantitative ratings revealed that 100% of participants generally or definitely could follow breathing and relaxation video instructions. Qualitative findings, however, demonstrated more variability in the extent to which each video segment was clear. Participants identified both immediate benefits and motivation challenges associated with a video-delivered treatment. Participants suggested that some patients may need encouragement, whereas others need face-to-face therapy. Quantitative ratings of video clarity and qualitative findings highlight the feasibility of a video-delivered treatment for older Veterans with anxiety. Our findings demonstrate the importance of ensuring patients can follow instructions provided in self-directed treatments and the role that an iterative testing process has in addressing these issues. Next steps include testing the treatment videos with older Veterans with anxiety disorders.
Layer-based buffer aware rate adaptation design for SHVC video streaming
NASA Astrophysics Data System (ADS)
Gudumasu, Srinivas; Hamza, Ahmed; Asbun, Eduardo; He, Yong; Ye, Yan
2016-09-01
This paper proposes a layer-based buffer-aware rate adaptation design which is able to avoid abrupt video quality fluctuation, reduce re-buffering latency, and improve bandwidth utilization when compared to a conventional simulcast-based adaptive streaming system. The proposed adaptation design schedules DASH segment requests based on the estimated bandwidth, dependencies among video layers, and layer buffer fullness. Scalable HEVC video coding is the latest state-of-the-art video coding technique that can alleviate various issues caused by simulcast-based adaptive video streaming. With scalable coded video streams, the video is encoded once into a number of layers representing different qualities and/or resolutions: a base layer (BL) and one or more enhancement layers (EL), each incrementally enhancing the quality of the lower layers. Such a layer-based coding structure allows fine-granularity rate adaptation for video streaming applications. Two video streaming use cases are presented in this paper. The first use case is to stream HD SHVC video over a wireless network where available bandwidth varies, and a performance comparison between the proposed layer-based streaming approach and the conventional simulcast streaming approach is provided. The second use case is to stream 4K/UHD SHVC video over a hybrid access network that consists of a 5G millimeter wave high-speed wireless link and a conventional wired or WiFi network. The simulation results verify that the proposed layer-based rate adaptation approach is able to utilize the bandwidth more efficiently. As a result, a more consistent viewing experience with higher quality video content and minimal video quality fluctuations can be presented to the user.
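The scheduling decision described above, how many layers to request for the next segment given estimated bandwidth, layer dependencies, and per-layer buffer fullness, can be sketched as a simple rule. This is an assumed simplification of the paper's scheduler; the cumulative-rate representation and the `min_buf` safety margin are illustrative choices.

```python
# Layer-selection heuristic sketch for SHVC DASH streaming (a simplified
# stand-in for the paper's layer-based buffer-aware scheduler).
def choose_layers(est_bw, layer_rates, layer_buffers, min_buf=4.0):
    """Return how many layers (BL + ELs) to request for the next segment.

    est_bw        -- estimated available bandwidth (e.g., Mbps)
    layer_rates   -- cumulative bitrate needed to stream up through each
                     layer, BL first (must be increasing)
    layer_buffers -- seconds of buffered media currently held per layer

    The base layer is always fetched; an enhancement layer is added only if
    bandwidth covers its cumulative rate AND the layer below it already
    holds a safety margin of buffered seconds (so EL fetches never starve
    the layers they depend on)."""
    n = 1  # base layer is mandatory
    for k in range(1, len(layer_rates)):
        if layer_rates[k] <= est_bw and layer_buffers[k - 1] >= min_buf:
            n = k + 1
        else:
            break  # layers are dependent: cannot skip a level
    return n
```

Because layers are decoded bottom-up, the `break` matters: requesting EL2 without EL1 would waste bandwidth on undecodable data, which is exactly the dependency the paper's scheduler respects.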
Automatic topics segmentation for TV news video
NASA Astrophysics Data System (ADS)
Hmayda, Mounira; Ejbali, Ridha; Zaied, Mourad
2017-03-01
Automatic identification of television programs in a TV stream is an important task for operating archives. This article proposes a new spatio-temporal approach that identifies the programs in a TV stream in two main steps. First, a reference catalogue of video features for visual jingles is built. We exploit the features that characterize instances of the same program type to identify the different types of programs in the television stream; the video features represent the visual invariants of each jingle, using automatic descriptors appropriate to each television program. Second, programs in the television stream are identified by comparing the similarity of the video signal features in the stream against the visual jingles in the catalogue. After presenting the proposed approach, the paper reports encouraging experimental results on several streams extracted from different channels, each composed of several programs.
A Secure and Robust Object-Based Video Authentication System
NASA Astrophysics Data System (ADS)
He, Dajun; Sun, Qibin; Tian, Qi
2004-12-01
An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to those selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG4 compression, object segmentation errors, and some common object-based video processing such as object translation, rotation, and scaling while securely preventing malicious object modifications. The proposed solution can be further incorporated into public key infrastructure (PKI).
NASA Astrophysics Data System (ADS)
Ciaramello, Francis M.; Hemami, Sheila S.
2007-02-01
For members of the Deaf Community in the United States, current communication tools include TTY/TTD services, video relay services, and text-based communication. With the growth of cellular technology, mobile sign language conversations are becoming a possibility. Proper coding techniques must be employed to compress American Sign Language (ASL) video for low-rate transmission while maintaining the quality of the conversation. In order to evaluate these techniques, an appropriate quality metric is needed. This paper demonstrates that traditional video quality metrics, such as PSNR, fail to predict subjective intelligibility scores. By considering the unique structure of ASL video, an appropriate objective metric is developed. Face and hand segmentation is performed using skin-color detection techniques. The distortions in the face and hand regions are optimally weighted and pooled across all frames to create an objective intelligibility score for a distorted sequence. The objective intelligibility metric performs significantly better than PSNR in terms of correlation with subjective responses.
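The region-weighted pooling idea can be sketched for a single frame pair as below. The weights and the mean-squared-error distortion are illustrative assumptions; the paper optimizes its weighting against subjective intelligibility scores and pools across all frames:

```python
import numpy as np

def region_weighted_score(ref, dist, face_mask, hand_mask,
                          w_face=0.7, w_hand=0.3):
    """Toy region-weighted distortion for one reference/distorted frame
    pair. Masks are boolean arrays marking the segmented skin regions;
    the weights here are placeholders, not the optimized values."""
    def mse(mask):
        if not mask.any():
            return 0.0
        d = ref[mask].astype(float) - dist[mask].astype(float)
        return float(np.mean(d * d))
    return w_face * mse(face_mask) + w_hand * mse(hand_mask)
```

Unlike frame-wide PSNR, distortion outside the face and hand regions contributes nothing, which is the intuition behind the metric's better correlation with intelligibility.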
NASA Astrophysics Data System (ADS)
Shimada, Satoshi; Azuma, Shouzou; Teranaka, Sayaka; Kojima, Akira; Majima, Yukie; Maekawa, Yasuko
We developed a system with which knowledge can be discovered and shared cooperatively within an organization, based on the SECI model of knowledge management. The system realizes three processes by the following methods. (1)A video that demonstrates a skill is segmented into a number of scenes according to its contents, and tacit knowledge is shared in each scene. (2)Tacit knowledge is extracted through a bulletin board linked to each scene. (3)Knowledge is acquired by repeatedly viewing a video scene together with the comments that describe the technique to be practiced. We conducted experiments in which the system was used by nurses working at general hospitals. Experimental results show that practical nursing know-how can be collected by utilizing the bulletin board linked to each video scene. The results of this study confirmed the possibility of articulating the tacit knowledge of nurses' empirical nursing skills with video images as a cue.
ERIC Educational Resources Information Center
Kozma, Robert B.; Russell, Joel
1997-01-01
Examines how professional chemists and undergraduate chemistry students respond to chemistry-related video segments, graphs, animations, and equations. Discusses the role that surface features of representations play in the understanding of chemistry. Contains 36 references. (DDR)
JPRS Report, Soviet Union, Political Affairs.
1990-07-07
an increase in video rental places, clubs and video viewing salons. In Kiev alone, there are more than 200 of them. Especially disquieting is the... In the process of being questioned, they explained they committed criminal acts under the influence of videos. We are also disturbed by the... grade publications, at times, I would even say, with an aftertaste of "porno" cannot be but disturbing. Remember our history. Ivan Dmitriyevich
An intelligent crowdsourcing system for forensic analysis of surveillance video
NASA Astrophysics Data System (ADS)
Tahboub, Khalid; Gadgil, Neeraj; Ribera, Javier; Delgado, Blanca; Delp, Edward J.
2015-03-01
Video surveillance systems are of great value for public safety. With an exponential increase in the number of cameras, videos obtained from surveillance systems are often archived for forensic purposes. Many automatic methods have been proposed for video analytics such as anomaly detection and human activity recognition. However, such methods face significant challenges due to object occlusions, shadows, and scene illumination changes. In recent years, crowdsourcing has become an effective tool that utilizes human intelligence to perform tasks that are challenging for machines. In this paper, we present an intelligent crowdsourcing system for forensic analysis of surveillance video, including video recorded as part of search and rescue missions and large-scale investigation tasks. We describe a method to enhance crowdsourcing by incorporating human detection, re-identification, and tracking. At the core of our system, we use a hierarchical pyramid model to distinguish crowd members based on their ability, experience, and performance record. Our proposed system operates autonomously and produces a final output of the crowdsourcing analysis consisting of a set of video segments detailing the events of interest as one storyline.
Video attention deviation estimation using inter-frame visual saliency map analysis
NASA Astrophysics Data System (ADS)
Feng, Yunlong; Cheung, Gene; Le Callet, Patrick; Ji, Yusheng
2012-01-01
A viewer's visual attention during video playback is the matching of his eye gaze movement to the changing video content over time. If the gaze movement matches the video content (e.g., follows a rolling soccer ball), then the viewer keeps his visual attention. If the gaze location moves from one video object to another, then the viewer shifts his visual attention. A video that causes a viewer to shift his attention often is a "busy" video. Determining which video content is busy is an important practical problem: a busy video is difficult for an encoder to handle with region-of-interest (ROI)-based bit allocation, and hard for a content provider to augment with additional overlays like advertisements, which would make the video even busier. One way to determine the busyness of video content is to conduct eye gaze experiments with a sizable group of test subjects, but this is time-consuming and cost-ineffective. In this paper, we propose an alternative method to determine the busyness of video, formally called video attention deviation (VAD): analyzing the spatial visual saliency maps of the video frames across time. We first derive transition probabilities of a Markov model for eye gaze using saliency maps of a number of consecutive frames. We then compute the steady state probability of the saccade state in the model, which is our estimate of VAD. We demonstrate that the steady state probability for saccade computed using saliency map analysis matches that computed using actual gaze traces for a range of videos with different degrees of busyness. Further, our analysis can also be used to segment video into shorter clips of different degrees of busyness by computing the Kullback-Leibler divergence between consecutive motion-compensated saliency maps.
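The steady-state computation at the heart of the VAD estimate is standard linear algebra. A minimal sketch, using a hypothetical two-state chain (state 0 = smooth pursuit/tracking, state 1 = saccade) whose transition probabilities are made-up values, not ones derived from real saliency maps:

```python
import numpy as np

def steady_state(P):
    """Stationary distribution of a row-stochastic transition matrix P,
    found as the left eigenvector associated with eigenvalue 1.
    (Power iteration would work equally well.)"""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

# Hypothetical eye-gaze chain: rows are current state, columns next state.
P = np.array([[0.9, 0.1],   # tracking -> tracking / saccade
              [0.6, 0.4]])  # saccade  -> tracking / saccade
pi = steady_state(P)        # pi[1] would be the VAD estimate
```

For this chain the balance equation 0.1*pi[0] = 0.6*pi[1] gives pi = (6/7, 1/7), i.e. a VAD of about 0.14.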
Vedel, Vincent; Chipman, Ariel D; Akam, Michael; Arthur, Wallace
2008-01-01
The evolution of arthropod segment number provides us with a paradox, because, whereas there is more than 20-fold variation in this character overall, most classes and orders of arthropods are composed of species that lack any variation in the number of segments. So, what is the origin of the higher-level variation? The centipede order Geophilomorpha is unusual because, with the exception of one of its families, all species exhibit intraspecific variation in segment number. Hence it provides an opportunity to investigate how segment number may change in a microevolutionary context. Here, we show that segment number can be directly altered by an environmental factor (temperature)-this is the first such demonstration for any arthropod. The direction of the effect is such that higher temperature during embryogenesis produces more segments. This potentially explains an intraspecific cline in the species concerned, Strigamia maritima, but it does not explain how such a cline is translated into the parallel interspecific pattern of lower-latitude species having more segments. Given the plastic nature of the intraspecific variation, its link with interspecific differences may lie in selection acting on developmental reaction norms.
Behrends, Marianne; Stiller, Gerald; Dudzinska, Agnieszka; Schneidewind, Sabine
2016-01-01
To improve medical students' competence in physical examination, video clips were created with and without an explanatory commentary. The uncommented videos show the communication and interaction between physician and patient during a physical examination; the commented videos show the single steps of the physical examination supplemented with an off-screen commentary emphasizing important facts. To investigate whether the uncommented, more authentic videos are more helpful for practicing a physical examination than the commented videos, we interviewed 133 students via online surveys. 72% of the students used the uncommented videos for practicing with others, compared to 55% using the commented videos. 37% of the students think that practical skills can be learned better with the uncommented videos. In general, 97% state that the videos helped them to improve their skills. Our findings indicate that the cinematic form of an educational video has an effect on learning behavior, learning success, and didactic quality.
Greitemeyer, Tobias; McLatchie, Neil
2011-05-01
Past research has provided abundant evidence that playing violent video games increases aggressive behavior. So far, these effects have been explained mainly as the result of priming existing knowledge structures. The research reported here examined the role of denying humanness to other people in accounting for the effect that playing a violent video game has on aggressive behavior. In two experiments, we found that playing violent video games increased dehumanization, which in turn evoked aggressive behavior. Thus, it appears that video-game-induced aggressive behavior is triggered when victimizers perceive the victim to be less human.
Free-viewpoint video of human actors using multiple handheld Kinects.
Ye, Genzhi; Liu, Yebin; Deng, Yue; Hasler, Nils; Ji, Xiangyang; Dai, Qionghai; Theobalt, Christian
2013-10-01
We present an algorithm for creating free-viewpoint video of interacting humans using three handheld Kinect cameras. Our method reconstructs deforming surface geometry and temporally varying texture of humans through estimation of human poses and camera poses for every time step of the RGBZ video. Skeletal configurations and camera poses are found by solving a joint energy minimization problem, which optimizes the alignment of RGBZ data from all cameras, as well as the alignment of human shape templates to the Kinect data. The energy function is based on a combination of geometric correspondence finding, implicit scene segmentation, and correspondence finding using image features. Finally, texture recovery is achieved through joint optimization over the spatio-temporal RGB data using matrix completion. As opposed to previous methods, our algorithm succeeds on free-viewpoint video of human actors in general uncontrolled indoor scenes with potentially dynamic background, and it succeeds even if the cameras are moving.
Context-Aware Fusion of RGB and Thermal Imagery for Traffic Monitoring
Alldieck, Thiemo; Bahnsen, Chris H.; Moeslund, Thomas B.
2016-01-01
In order to enable a robust 24-h monitoring of traffic under changing environmental conditions, it is beneficial to observe the traffic scene using several sensors, preferably from different modalities. To fully benefit from multi-modal sensor output, however, one must fuse the data. This paper introduces a new approach for fusing color RGB and thermal video streams by using not only the information from the videos themselves, but also the available contextual information of a scene. The contextual information is used to judge the quality of a particular modality and guides the fusion of two parallel segmentation pipelines of the RGB and thermal video streams. The potential of the proposed context-aware fusion is demonstrated by extensive quantitative and qualitative tests on existing and novel video datasets, benchmarked against competing approaches to multi-modal fusion. PMID:27869730
A microcomputer interface for a digital audio processor-based data recording system.
Croxton, T L; Stump, S J; Armstrong, W M
1987-10-01
An inexpensive interface is described that performs direct transfer of digitized data from the digital audio processor and video cassette recorder based data acquisition system designed by Bezanilla (1985, Biophys. J., 47:437-441) to an IBM PC/XT microcomputer. The FORTRAN callable software that drives this interface is capable of controlling the video cassette recorder and starting data collection immediately after recognition of a segment of previously collected data. This permits piecewise analysis of long intervals of data that would otherwise exceed the memory capability of the microcomputer.
(abstract) Geological Tour of Southwestern Mexico
NASA Technical Reports Server (NTRS)
Adams, Steven L.; Lang, Harold R.
1993-01-01
Nineteen Landsat Thematic Mapper quarter scenes, coregistered at 28.5 m spatial resolution with three arc second digital topographic data, were used to create a movie simulating a flight over the Guerrero and Mixteco terrains of southwestern Mexico. The flight path was chosen to elucidate important structural, stratigraphic, and geomorphic features. The video, available in VHS format, is a 360 second animation consisting of 10 800 total frames. The simulated velocity during three 120 second flight segments of the video is approximately 37 000 km per hour, traversing approximately 1 000 km on the ground.
Static hand gesture recognition from a video
NASA Astrophysics Data System (ADS)
Rokade, Rajeshree S.; Doye, Dharmpal
2011-10-01
A sign language (also signed language) is a language which, instead of acoustically conveyed sound patterns, uses visually transmitted sign patterns to convey meaning, "simultaneously combining hand shapes, orientation and movement of the hands". Sign languages commonly develop in deaf communities, which can include interpreters, friends and families of deaf people as well as people who are deaf or hard of hearing themselves. In this paper, we propose a novel system for recognizing static hand gestures from a video, based on a Kohonen neural network. We propose an algorithm to separate out key frames, which contain correct gestures, from a video sequence. We segment hand images from complex, non-uniform backgrounds. Features are extracted by applying the Kohonen network to the key frames, and recognition is then performed.
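The Kohonen stage can be illustrated with a minimal 1-D self-organizing map. The unit count, learning-rate schedule, and neighbourhood radius below are arbitrary illustrative choices, not the paper's configuration:

```python
import numpy as np

def train_som(data, n_units=4, epochs=50, lr0=0.5, seed=0):
    """Minimal 1-D Kohonen (self-organizing) map: each input pulls its
    best-matching unit (and, early in training, that unit's neighbours)
    toward itself, so the units come to tile the input distribution."""
    rng = np.random.default_rng(seed)
    w = rng.random((n_units, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)            # decaying learning rate
        radius = max(1, int(n_units / 2 * (1.0 - t / epochs)))
        for x in data:
            bmu = int(np.argmin(np.linalg.norm(w - x, axis=1)))
            for j in range(n_units):
                if abs(j - bmu) <= radius:       # shrinking neighbourhood
                    w[j] += lr * (x - w[j])
    return w
```

After training, the unit weights act as prototype feature vectors; a frame can then be described by which unit its pixels or patches activate.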
ASSESSMENT OF YOUTUBE VIDEOS AS A SOURCE OF INFORMATION ON MEDICATION USE IN PREGNANCY
Hansen, Craig; Interrante, Julia D; Ailes, Elizabeth C; Frey, Meghan T; Broussard, Cheryl S; Godoshian, Valerie J; Lewis, Courtney; Polen, Kara ND; Garcia, Amanda P; Gilboa, Suzanne M
2015-01-01
Background When making decisions about medication use in pregnancy, women consult many information sources, including the Internet. The aim of this study was to assess the content of publicly-accessible YouTube videos that discuss medication use in pregnancy. Methods Using 2,023 distinct combinations of search terms related to medications and pregnancy, we extracted metadata from YouTube videos using a YouTube video Application Programming Interface. Relevant videos were defined as those with a medication search term and a pregnancy-related search term in either the video title or description. We viewed relevant videos and abstracted content from each video into a database. We documented whether videos implied each medication to be ‘safe’ or ‘unsafe’ in pregnancy and compared that assessment with the medication’s Teratogen Information System (TERIS) rating. Results After viewing 651 videos, 314 videos with information about medication use in pregnancy were available for the final analyses. The majority of videos were from law firms (67%), television segments (10%), or physicians (8%). Selective serotonin reuptake inhibitors (SSRIs) were the most common medication class named (225 videos, 72%), and 88% of videos about SSRIs indicated they were ‘unsafe’ for use in pregnancy. However, the TERIS ratings for medication products in this class range from ‘unlikely’ to ‘minimal’ teratogenic risk. Conclusion For the majority of medications, current YouTube video content does not adequately reflect what is known about the safety of their use in pregnancy and should be interpreted cautiously. However, YouTube could serve as a valuable platform for communicating evidence-based medication safety information. PMID:26541372
ERIC Educational Resources Information Center
Rohrer, Daniel M.
"Cableshop" is an experimental cable television service offering three- to seven-minute broadcast segments of product or community information and using a combination of telephone, computer, and video technology. Viewers participating in the service will have a choice of items ready for viewing listed on a "menu" channel and…
Telecommunications for the Deaf: Echoes of the Past--A Glimpse of the Future.
ERIC Educational Resources Information Center
Jensema, Carl J.
1994-01-01
This article traces developments in telephone and telecommunications technology from Alexander Graham Bell to the present, explaining technical and practical aspects of teletypewriters, fax machines, online information services, electronic mail, video telephones, relay systems, teleconferencing, and speech recognition.…
Duncan, James R; Kline, Benjamin; Glaiberman, Craig B
2007-04-01
To create and test methods of extracting efficiency data from recordings of simulated renal stent procedures. Task analysis was performed and used to design a standardized testing protocol. Five experienced angiographers then performed 16 renal stent simulations using the Simbionix AngioMentor angiographic simulator. Audio and video recordings of these simulations were captured from multiple vantage points. The recordings were synchronized and compiled. A series of efficiency metrics (procedure time, contrast volume, and tool use) were then extracted from the recordings. The intraobserver and interobserver variability of these individual metrics was also assessed. The metrics were converted to costs and aggregated to determine the fixed and variable costs of a procedure segment or the entire procedure. Task analysis and pilot testing led to a standardized testing protocol suitable for performance assessment. Task analysis also identified seven checkpoints that divided the renal stent simulations into six segments. Efficiency metrics for these different segments were extracted from the recordings and showed excellent intra- and interobserver correlations. Analysis of the individual and aggregated efficiency metrics demonstrated large differences between segments as well as between different angiographers. These differences persisted when efficiency was expressed as either total or variable costs. Task analysis facilitated both protocol development and data analysis. Efficiency metrics were readily extracted from recordings of simulated procedures. Aggregating the metrics and dividing the procedure into segments revealed potential insights that could be easily overlooked because the simulator currently does not attempt to aggregate the metrics and only provides data derived from the entire procedure. The data indicate that analysis of simulated angiographic procedures will be a powerful method of assessing performance in interventional radiology.
Markerless identification of key events in gait cycle using image flow.
Vishnoi, Nalini; Duric, Zoran; Gerber, Naomi Lynn
2012-01-01
Gait analysis has been an interesting area of research for several decades. In this paper, we propose image-flow-based methods to compute the motion and velocities of different body segments automatically, using a single inexpensive video camera. We then identify and extract different events of the gait cycle (double-support, mid-swing, toe-off and heel-strike) from video images. Experiments were conducted in which four walking subjects were captured from the sagittal plane. Automatic segmentation was performed to isolate the moving body from the background. The head excursion and the shank motion were then computed to identify the key frames corresponding to different events in the gait cycle. Our approach does not require calibrated cameras or special markers to capture movement. We have also compared our method with the Optotrak 3D motion capture system and found our results in good agreement with the Optotrak results. The development of our method has potential use in the markerless and unencumbered video capture of human locomotion. Monitoring gait in homes and communities provides a useful application for the aged and the disabled. Our method could potentially be used as an assessment tool to determine gait symmetry or to establish the normal gait pattern of an individual.
NASA Astrophysics Data System (ADS)
Babic, Z.; Pilipovic, R.; Risojevic, V.; Mirjanic, G.
2016-06-01
Honey bees have a crucial role in pollination across the world. This paper presents a simple, non-invasive system for detecting pollen-bearing honey bees in surveillance video obtained at the entrance of a hive. The proposed system can be used as part of a more complex system for tracking and counting honey bees, with remote pollination monitoring as the final goal. The proposed method executes in real time on embedded systems co-located with a hive. Background subtraction, color segmentation, and morphology methods are used for segmentation of honey bees. Classification into two classes, pollen-bearing honey bees and honey bees without a pollen load, is performed using a nearest mean classifier with a simple descriptor consisting of color variance and eccentricity features. On an in-house data set we achieved a correct classification rate of 88.7% with 50 training images per class. We show that the obtained classification results are not far behind the results of state-of-the-art image classification methods. That favors the proposed method, particularly bearing in mind that real-time video transmission to a remote high-performance computing workstation is still an issue, whereas transferring the obtained parameters of the pollination process is much easier.
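A nearest mean classifier over a two-feature descriptor (here, colour variance and eccentricity) is simple enough to sketch in full. The arrays in the usage note are made-up toy values, not the paper's data:

```python
import numpy as np

def nearest_mean_fit(X, y):
    """Compute per-class mean feature vectors. X is (n_samples,
    n_features); y holds class labels. For the bee detector the two
    features would be colour variance and eccentricity."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, means

def nearest_mean_predict(X, classes, means):
    """Assign each sample to the class whose mean is nearest (Euclidean)."""
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]
```

The appeal on an embedded platform is that prediction needs only one distance computation per class, with no iterative inference.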
Characterizing popularity dynamics of online videos
NASA Astrophysics Data System (ADS)
Ren, Zhuo-Ming; Shi, Yu-Qiang; Liao, Hao
2016-07-01
Online popularity has a major impact on videos, music, news, and other content in online systems. It is natural to characterize online popularity dynamics by explaining the observed properties in terms of the popularity each individual item has already acquired. In this paper, we provide a quantitative, large-scale, temporal analysis of the popularity dynamics in two online video websites, namely MovieLens and Netflix. The two collected data sets contain over 100 million records and span a decade. We show that the popularity dynamics of online videos evolve over time, and find that they can be characterized by burst behaviors, typically occurring in the early life span of a video, later settling into the classic preferential popularity increase mechanism.
Music video viewing as a marker of driving after the consumption of alcohol.
Beullens, Kathleen; Roe, Keith; Van den Bulck, Jan
2012-01-01
This study has two main objectives. First, it is examined whether the frequent exposure to music video viewing is associated with driving after the consumption of alcohol. Second, it is examined which theoretical framework, a combination of Cultivation Theory and the Theory of Planned Behavior or the Problem Behavior Theory, is suited best to explain this relationship. Participants were 426 Flemish adolescents who took part in a two-wave panel survey (2006-2008) about media use, risk-taking attitudes, intentions, and behaviors. In line with Cultivation Theory and the Theory of Planned Behavior, the results showed that adolescents' music video viewing is a significant marker of later risky driving behavior and that this relationship is mediated through their attitudes and intentions. No support was found for the hypothesis that music video viewing is part of a cluster of problem behaviors (Problem Behavior Theory). Thus, the results of this study seem to indicate that a combination of Cultivation Theory and the Theory of Planned Behavior provides a more useful framework for explaining the relationship between music video viewing and driving after the consumption of alcohol. The implications for prevention and the study's limitations are discussed.
Automated content and quality assessment of full-motion-video for the generation of meta data
NASA Astrophysics Data System (ADS)
Harguess, Josh
2015-05-01
Virtually all of the video data (and full-motion-video (FMV)) that is currently collected and stored in support of missions has been corrupted to various extents by image acquisition and compression artifacts. Additionally, video collected by wide-area motion imagery (WAMI) surveillance systems and unmanned aerial vehicles (UAVs) and similar sources is often of low quality or in other ways corrupted so that it is not worth storing or analyzing. In order to make progress in the problem of automatic video analysis, the first problem that should be solved is deciding whether the content of the video is even worth analyzing to begin with. We present a work in progress to address three types of scenes which are typically found in real-world data stored in support of Department of Defense (DoD) missions: no or very little motion in the scene, large occlusions in the scene, and fast camera motion. Each of these produce video that is generally not usable to an analyst or automated algorithm for mission support and therefore should be removed or flagged to the user as such. We utilize recent computer vision advances in motion detection and optical flow to automatically assess FMV for the identification and generation of meta-data (or tagging) of video segments which exhibit unwanted scenarios as described above. Results are shown on representative real-world video data.
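A crude stand-in for the "no or very little motion" tagger is a mean absolute inter-frame difference test. The thresholds below are placeholders; the work described above uses proper motion detection and optical flow rather than raw frame differencing:

```python
import numpy as np

def flag_low_motion(frames, diff_thresh=2.0, min_fraction=0.5):
    """Flag a clip as 'little or no motion' when the mean absolute
    difference between consecutive frames stays below diff_thresh for
    at least min_fraction of the frame pairs. Both thresholds are
    illustrative and would need tuning on real FMV."""
    diffs = [np.mean(np.abs(frames[i + 1].astype(float) -
                            frames[i].astype(float)))
             for i in range(len(frames) - 1)]
    low = sum(d < diff_thresh for d in diffs)
    return bool(low / max(1, len(diffs)) >= min_fraction)
```

Clips flagged this way could be tagged in the meta-data rather than stored or forwarded to analysts.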
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Bryan, ThomasC. (Inventor); Book, Michael L. (Inventor)
2004-01-01
A method and system for processing an image including capturing an image and storing the image as image pixel data. Each image pixel datum is stored in a respective memory location having a corresponding address. Threshold pixel data are selected from the image pixel data and linear spot segments are identified from the selected threshold pixel data. The positions of only a first pixel and a last pixel for each linear segment are saved. Movement of one or more objects is tracked by comparing the positions of first and last pixels of a linear segment present in the captured image with the respective first and last pixel positions in subsequent captured images. Alternatively, additional data for each linear data segment are saved, such as the sum of pixels and the weighted sum of pixels (i.e., each threshold pixel value multiplied by that pixel's x-location).
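Reading the claim as run-length extraction along a scan line, the per-segment quantities (first pixel, last pixel, pixel sum, x-weighted pixel sum) can be illustrated with a pure-Python sketch; this is an interpretation for illustration, not the patented implementation:

```python
def linear_segments(row, threshold):
    """Return (first, last, pixel_sum, weighted_sum) for each run of
    values above threshold in one scan line. weighted_sum multiplies
    each above-threshold value by its x-location, as described above."""
    segs, start = [], None
    for x, v in enumerate(row):
        if v > threshold and start is None:
            start = x                       # run begins
        elif v <= threshold and start is not None:
            run = row[start:x]              # run just ended at x-1
            segs.append((start, x - 1, sum(run),
                         sum(i * p for i, p in zip(range(start, x), run))))
            start = None
    if start is not None:                   # run reaches end of line
        run = row[start:]
        segs.append((start, len(row) - 1, sum(run),
                     sum(i * p for i, p in zip(range(start, len(row)), run))))
    return segs
```

Dividing the weighted sum by the pixel sum gives a sub-pixel centroid for the spot, which is presumably why both quantities are saved.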
Surgical gesture classification from video and kinematic data.
Zappella, Luca; Béjar, Benjamín; Hager, Gregory; Vidal, René
2013-10-01
Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on dynamic cues (e.g., time to completion, speed, forces, torque) or kinematic data (e.g., robot trajectories and velocities). While videos could be equally or more discriminative (e.g., videos contain semantic information not present in kinematic data), they are typically not used because of the difficulties associated with automatic video interpretation. In this paper, we propose several methods for automatic surgical gesture classification from video data. We assume that the video of a surgical task (e.g., suturing) has been segmented into video clips corresponding to a single gesture (e.g., grabbing the needle, passing the needle) and propose three methods to classify the gesture of each video clip. In the first one, we model each video clip as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new video clips. In the second one, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words, and use a bag-of-features (BoF) approach to classify new video clips. In the third one, we use multiple kernel learning (MKL) to combine the LDS and BoF approaches. Since the LDS approach is also applicable to kinematic data, we also use MKL to combine both types of data in order to exploit their complementarity. Our experiments on a typical surgical training setup show that methods based on video data perform equally well, if not better, than state-of-the-art approaches based on kinematic data. In turn, the combination of both kinematic and video data outperforms any other algorithm based on one type of data alone. Copyright © 2013 Elsevier B.V. All rights reserved.
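The bag-of-features step, quantizing local spatio-temporal descriptors against a learned codebook and histogramming the resulting "words", can be sketched as below. Codebook learning and the actual descriptor extraction are omitted, and the arrays in the test are toy values:

```python
import numpy as np

def bof_histogram(features, codebook):
    """Assign each local feature vector to its nearest codebook entry
    ('visual word') and return the normalized word histogram, which
    serves as the clip-level representation for classification."""
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    words = np.argmin(d, axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

Each video clip's histogram could then be fed to any standard classifier; the MKL variant in the paper combines such a representation with the LDS-based one.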
Qualitative Shadowing as a Research Methodology for Exploring Early Childhood Leadership in Practice
ERIC Educational Resources Information Center
Bøe, Marit; Hognestad, Karin; Waniganayake, Manjula
2017-01-01
This article explores qualitative shadowing as an interpretivist methodology, and explains how two researchers participating simultaneously in data collection using a video recorder, contextual interviews and video-stimulated recall interviews, conducted a qualitative shadowing study at six early childhood centres in Norway. This paper emerged…
Lennarson, P J; Smith, D W; Sawin, P D; Todd, M M; Sato, Y; Traynelis, V C
2001-04-01
The purpose of this study was to characterize and compare segmental cervical motion during orotracheal intubation in cadavers with and without a complete subaxial injury, as well as to examine the efficacy of commonly used stabilization techniques in limiting that motion. Intubation procedures were performed in 10 fresh human cadavers in which cervical spines were intact and following the creation of a complete C4-5 ligamentous injury. Movement of the cervical spine during direct laryngoscopy and intubation was recorded using video fluoroscopy and examined under the following conditions: 1) without stabilization; 2) with manual in-line cervical immobilization; and 3) with Gardner-Wells traction. Subsequently, segmental angular rotation, subluxation, and distraction at the injured C4-5 level were measured from digitized frames of the recorded video fluoroscopy. After complete C4-5 destabilization, the effects of attempted stabilization on distraction, angulation, and subluxation were analyzed. Immobilization effectively eliminated distraction, and diminished angulation, but increased subluxation. Traction significantly increased distraction, but decreased angular rotation and effectively eliminated subluxation. Orotracheal intubation without stabilization had intermediate results, causing less distraction than traction, less subluxation than immobilization, but increased angulation compared with either intervention. These results are discussed in terms of both statistical and clinical significance and recommendations are made.
System and process for detecting and monitoring surface defects
NASA Technical Reports Server (NTRS)
Mueller, Mark K. (Inventor)
1994-01-01
A system and process for detecting and monitoring defects in large surfaces such as the field joints of the container segments of a space shuttle booster motor. Beams of semi-collimated light from three non-parallel fiber optic light panels are directed at a region of the surface at non-normal angles of expected incidence. A video camera gathers some portion of the light that is reflected at an angle other than the angle of expected reflectance, and generates signals which are analyzed to discern defects in the surface. The analysis may be performed by visual inspection of an image on a video monitor, or by inspection of filtered or otherwise processed images. In one alternative embodiment, successive predetermined regions of the surface are aligned with the light source before illumination, thereby permitting efficient detection of defects in a large surface. Such alignment is performed by using a line scan gauge to sense the light which passes through an aperture in the surface. In another embodiment a digital map of the surface is created, thereby permitting the maintenance of records detailing changes in the location or size of defects as the container segment is refurbished and re-used. The defect detection apparatus may also be advantageously mounted on a fixture which engages the edge of a container segment.
Probabilistic fusion of stereo with color and contrast for bilayer segmentation.
Kolmogorov, Vladimir; Criminisi, Antonio; Blake, Andrew; Cross, Geoffrey; Rother, Carsten
2006-09-01
This paper describes models and algorithms for the real-time segmentation of foreground from background layers in stereo video sequences. Automatic separation of layers from color/contrast or from stereo alone is known to be error-prone. Here, color, contrast, and stereo matching information are fused to infer layers accurately and efficiently. The first algorithm, Layered Dynamic Programming (LDP), solves stereo in an extended six-state space that represents both foreground/background layers and occluded regions. The stereo-match likelihood is then fused with a contrast-sensitive color model that is learned on-the-fly and stereo disparities are obtained by dynamic programming. The second algorithm, Layered Graph Cut (LGC), does not directly solve stereo. Instead, the stereo match likelihood is marginalized over disparities to evaluate foreground and background hypotheses and then fused with a contrast-sensitive color model like the one used in LDP. Segmentation is solved efficiently by ternary graph cut. Both algorithms are evaluated with respect to ground truth data and found to have similar performance, substantially better than either stereo or color/contrast alone. However, their characteristics with respect to computational efficiency are rather different. The algorithms are demonstrated in the application of background substitution and shown to give good quality composite video output.
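A minimal per-pixel sketch of the LGC-style fusion, assuming plain likelihood values and a uniform disparity prior, and ignoring the contrast-sensitive pairwise term that the paper resolves with graph cut:

```python
import math

def fused_log_likelihood(stereo_ll_by_disparity, color_ll, prior=0.5):
    """Marginalize a stereo-match likelihood over candidate disparities
    (as in LGC) and fuse it with a color-model likelihood for one pixel
    and one layer hypothesis. The uniform disparity prior and the layer
    prior value are assumptions of this sketch."""
    stereo = sum(stereo_ll_by_disparity) / len(stereo_ll_by_disparity)
    return math.log(stereo) + math.log(color_ll) + math.log(prior)

def classify_pixel(fg_ll, bg_ll):
    # pick the layer with the higher fused log-likelihood; the paper
    # additionally enforces spatial coherence via ternary graph cut
    return 'foreground' if fg_ll > bg_ll else 'background'
```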
Pondering the procephalon: the segmental origin of the labrum.
Haas, M S; Brown, S J; Beeman, R W
2001-02-01
With accumulating evidence for the appendicular nature of the labrum, the question of its actual segmental origin remains. Two existing insect head segmentation models, the linear and S-models, are reviewed, and a new model introduced. The L-/Bent-Y model proposes that the labrum is a fusion of the appendage endites of the intercalary segment and that the stomodeum is tightly integrated into this segment. This model appears to explain a wider variety of insect head segmentation phenomena. Embryological, histological, neurological and molecular evidence supporting the new model is reviewed.
Single-incision video-assisted thoracoscopic surgery left-lower lobe anterior segmentectomy (S8)
Galvez, Carlos; Lirio, Francisco; Sesma, Julio; Baschwitz, Benno; Bolufer, Sergio
2017-01-01
Unusual anatomical segmentectomies are technically demanding procedures that require a deep knowledge of intralobar anatomy and surgical skill. On the other hand, these procedures preserve more normal lung parenchyma for lesions located in specific anatomical segments, and are indicated for benign lesions, metastases, and early-stage adenocarcinomas without nodal involvement. A 32-year-old woman was diagnosed with a benign pneumocytoma in the anterior segment of the left-lower lobe (S8, LLL), so we performed a single-incision video-assisted thoracoscopic surgery (SI-VATS) anatomical S8 segmentectomy in 140 minutes under intercostal block. There were no intraoperative or postoperative complications; the chest tube was removed at 24 hours, and the patient was discharged on the 5th postoperative day with low pain on the visual analogue scale (VAS). The final pathologic exam reported a benign sclerosant pneumocytoma with free margins. The patient had completely resumed her normal activities at 3 months, with normal radiological controls at 1 and 3 months. PMID:29078674
Shojaedini, Seyed Vahab; Heydari, Masoud
2014-10-01
Shape and movement features of sperms are important parameters for infertility study and treatment. In this article, a new method is introduced for characterizing sperms in microscopic videos. In this method, a hypothesis framework is first defined to distinguish sperms from other particles in the captured video. A decision about each hypothesis is then made in the following steps: selecting primary regions as sperm candidates by watershed-based segmentation, pruning false candidates across successive frames using graph theory concepts, and finally confirming correct sperms by using their movement trajectories. The performance of the proposed method is evaluated on real captured images belonging to semen samples with a high density of sperms. The obtained results show the proposed method may detect 97% of sperms in the presence of 5% false detections and track 91% of moving sperms. Furthermore, it can be shown that the better characterization of sperms in the proposed algorithm does not lead to extracting more false sperms compared to some existing approaches.
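The pruning-and-confirmation idea can be sketched with a greedy nearest-neighbour linker standing in for the paper's graph-based step; all thresholds and names are assumptions of this sketch:

```python
import math

def link_trajectories(frames, max_jump=15.0):
    """Greedily link candidate centroids (x, y) across successive frames.
    frames = [[(x, y), ...], ...]; a candidate joins the nearest open
    trajectory within max_jump pixels, otherwise it starts a new one."""
    tracks = []
    for pts in frames:
        unmatched = list(pts)
        for tr in tracks:
            if not unmatched:
                break
            p = min(unmatched, key=lambda q: math.dist(tr[-1], q))
            if math.dist(tr[-1], p) <= max_jump:
                tr.append(p)
                unmatched.remove(p)
        tracks.extend([p] for p in unmatched)
    return tracks

def confirm_sperms(tracks, min_len=3, min_travel=5.0):
    # keep only tracks long enough and mobile enough to be moving sperms;
    # stationary debris produces short or near-zero-travel tracks
    return [t for t in tracks
            if len(t) >= min_len and math.dist(t[0], t[-1]) >= min_travel]
```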
Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system
NASA Astrophysics Data System (ADS)
Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2016-05-01
Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine both the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how best to relaunch tracking in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.
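The relaunch strategy can be illustrated as a small control loop; the callables here are stand-ins for illustration, not the actual system's API:

```python
def track_with_gaze_relaunch(step, frames, gaze_at):
    """Run one tracking step per frame; on track loss (step returns
    None), re-initialize the tracker at the operator's current gaze
    point, as in the gaze-based relaunch strategy discussed above."""
    positions, anchor = [], None
    for t, frame in enumerate(frames):
        pos = step(frame, anchor)
        if pos is None:           # track loss detected
            pos = gaze_at(t)      # relaunch at the gaze position
        anchor = pos
        positions.append(pos)
    return positions
```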
An optimized video system for augmented reality in endodontics: a feasibility study.
Bruellmann, D D; Tjaden, H; Schwanecke, U; Barth, P
2013-03-01
We propose an augmented reality system for the reliable detection of root canals in video sequences based on a k-nearest neighbor color classification and introduce a simple geometric criterion for teeth. The new software was implemented using C++, Qt, and the image processing library OpenCV. Teeth are detected in video images to restrict the segmentation of the root canal orifices by using a k-nearest neighbor algorithm. The locations of the root canal orifices were determined using Euclidean distance-based image segmentation. A set of 126 human teeth with known and verified locations of the root canal orifices was used for evaluation. The software detects root canal orifices for automatic classification of the teeth in video images and stores the location and size of the found structures. Overall, 287 of 305 root canals were correctly detected. The overall sensitivity was about 94%. Classification accuracy ranged from 65.0 to 81.2% for molars and from 85.7 to 96.7% for premolars. The realized software shows that observations made in anatomical studies can be exploited to automate real-time detection of root canal orifices and tooth classification with a software system. Automatic storage of the location, size, and orientation of the found structures with this software can be used for future anatomical studies. Thus, statistical tables with canal locations will be derived, which can improve anatomical knowledge of the teeth to alleviate root canal detection in the future. For this purpose the software is freely available at: http://www.dental-imaging.zahnmedizin.uni-mainz.de/.
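A minimal sketch of the k-NN color classification step; the training colors and labels below are invented for illustration (the paper's classifier operates on calibrated video data):

```python
import math
from collections import Counter

def knn_color_classify(pixel, samples, k=3):
    """Label one RGB pixel by majority vote among its k nearest labeled
    color samples, a stand-in for the paper's k-NN color classifier.
    samples = [((r, g, b), label), ...]."""
    nearest = sorted(samples, key=lambda s: math.dist(pixel, s[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

Restricting this per-pixel vote to previously detected tooth regions, as the abstract describes, keeps the segmentation of dark orifice pixels from firing on background clutter.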
Transportability and Presence as Predictors of Avatar Identification Within Narrative Video Games.
Christy, Katheryn R; Fox, Jesse
2016-04-01
To understand how narratives may best be implemented in video game design, first we must understand how players respond to and experience narratives in video games, including their reactions to their player character or avatar. This study looks at the relationship that transportability, self-presence, social presence, and physical presence have with identification with one's avatar. Survey data from 302 participants (151 males, 151 females) were analyzed. Both transportability and self-presence explained a significant amount of variance in avatar identification. We discuss the implications of these findings for future narrative video game research.
Student Satisfaction With Blackboard-Style Videos.
Wolf, Andrew B; Peyre, Sarah E
2018-04-19
Blackboard-style videos with simple drawings illustrating concepts have become immensely popular in recent years. However, there has been no published research evaluating their efficacy in nursing education. This pilot study evaluates the use of blackboard-style videos in an online pathophysiology course. Quantitative and qualitative evaluation data were analyzed to evaluate student satisfaction. The data indicated that students were highly satisfied with the course and the delivery of content using blackboard-style videos. The qualitative analysis uncovered two key themes explaining the high level of satisfaction: visual plus narrative explanations support learning and student control over pacing enhances learning.
The Quest for Contact: NASA's Search for Extraterrestrial Intelligence
NASA Technical Reports Server (NTRS)
1992-01-01
This video details the history and current efforts of NASA's Search for Extraterrestrial Intelligence program. The video explains the use of radiotelescopes to monitor electromagnetic frequencies reaching the Earth, and the analysis of this data for patterns or signals that have no natural origin. The video presents an overview of Frank Drake's 1960 'Ozma' experiment, the current META experiment, and planned efforts incorporating an international Deep Space Network of radiotelescopes that will be trained on over 800 stars.
Borgonovi, Francesca
2016-04-01
Video games are a favorite leisure-time activity among teenagers worldwide. This study examines cross-national gender differences in reading achievement and video gaming and whether video gaming explains gender differences in reading achievement and differences in performance between paper-based and computer-based reading. We use data from a representative sample of 145,953 students from 26 countries who sat the PISA 2012 assessments and provided self-reports on use of video games. Although boys tend to have poorer results in both the computer-based and the paper-based reading assessments, boys' underachievement is smaller when the assessment is delivered on computer than when it is delivered on paper. Boys' underperformance compared to girls in the two reading assessments is particularly pronounced among low-achieving students. Among both boys and girls, moderate use of single-player games is associated with a performance advantage. However, frequent engagement with collaborative online games is generally associated with a steep reduction in achievement, particularly in the paper-based test and particularly among low-achieving students. Excessive gaming may hinder academic achievement, but moderate gaming can promote positive student outcomes. In many countries video gaming explains the difference in the gender gap in reading between the paper-based and the computer-based assessments. Copyright © 2016 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Annotations of Mexican bullfighting videos for semantic index
NASA Astrophysics Data System (ADS)
Montoya Obeso, Abraham; Oropesa Morales, Lester Arturo; Fernando Vázquez, Luis; Cocolán Almeda, Sara Ivonne; Stoian, Andrei; García Vázquez, Mireya Saraí; Zamudio Fuentes, Luis Miguel; Montiel Perez, Jesús Yalja; de la O Torres, Saul; Ramírez Acosta, Alejandro Alvaro
2015-09-01
Video annotation is important for web indexing and browsing systems. Indeed, in order to evaluate the performance of video query and mining techniques, databases with concept annotations are required. It is therefore necessary to generate a database with a semantic indexing that represents the digital content of the Mexican bullfighting atmosphere. This paper proposes a scheme for making complex annotations in a video within the frame of a multimedia search engine project. Each video is partitioned using our segmentation algorithm, which creates shots of different lengths and different numbers of frames. In order to make complex annotations about the video, we use the ELAN software. The annotations are done in two steps: first, we take notes on the whole content of each shot; second, we describe the actions in terms of camera parameters such as direction, position and depth. As a consequence, we obtain a more complete descriptor of every action. In both cases we use the concepts of the TRECVid 2014 dataset. We also propose new concepts. This methodology allows us to generate a database with the necessary information to create descriptors and algorithms capable of detecting actions in order to automatically index and classify new bullfighting multimedia content.
Automatic summarization of soccer highlights using audio-visual descriptors.
Raventós, A; Quijada, R; Torres, Luis; Tarrés, Francesc
2015-01-01
Automatic summarization of sports video content has been an object of great interest for many years. Although semantic description techniques have been proposed, many approaches still rely on low-level video descriptors that render quite limited results due to the complexity of the problem and the low capability of the descriptors to represent semantic content. In this paper, a new approach for automatic highlight summarization of soccer videos using audio-visual descriptors is presented. The approach is based on the segmentation of the video sequence into shots that are further analyzed to determine their relevance and interest. Of special interest in the approach is the use of audio information, which provides additional robustness to the overall performance of the summarization system. For every video shot, a set of low- and mid-level audio-visual descriptors is computed and later combined in order to obtain different relevance measures based on empirical knowledge rules. The final summary is generated by selecting those shots with the highest interest according to the specifications of the user and the results of the relevance measures. A variety of results are presented with real soccer video sequences that prove the validity of the approach.
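The relevance-and-selection step can be sketched as a weighted combination of per-shot descriptor scores followed by top-k selection. The descriptor names and weights below are illustrative assumptions, not the paper's empirical rules:

```python
def shot_relevance(descriptors, weights):
    """Combine per-shot audio-visual descriptor scores (each in [0, 1])
    into one relevance value via empirically chosen weights."""
    return sum(weights[k] * descriptors.get(k, 0.0) for k in weights)

def summarize(shots, weights, max_shots):
    # rank shots by relevance, keep the top max_shots, then restore
    # chronological order so the summary plays back coherently
    ranked = sorted(shots, key=lambda s: shot_relevance(s['desc'], weights),
                    reverse=True)[:max_shots]
    return sorted(ranked, key=lambda s: s['start'])
```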
Using learning analytics to evaluate a video-based lecture series.
Lau, K H Vincent; Farooque, Pue; Leydon, Gary; Schwartz, Michael L; Sadler, R Mark; Moeller, Jeremy J
2018-01-01
The video-based lecture (VBL), an important component of the flipped classroom (FC) and massive open online course (MOOC) approaches to medical education, has primarily been evaluated through direct learner feedback. Evaluation may be enhanced through learner analytics (LA), the analysis of quantitative audience usage data generated by video-sharing platforms. We applied LA to an experimental series of ten VBLs on electroencephalography (EEG) interpretation, uploaded to YouTube in the model of a publicly accessible MOOC. Trends in view count, total percentage of video viewed, and audience retention (AR) (the percentage of viewers watching at a time point compared to the initial total) were examined. The pattern of average AR decline was characterized using regression analysis, revealing a uniform linear decline in viewership for each video, with no evidence of an optimal VBL length. Segments with transient increases in AR corresponded to those focused on core concepts, indicative of content requiring more detailed evaluation. We propose a model for applying LA at four levels: global, series, video, and feedback. LA may be a useful tool in evaluating a VBL series. Our proposed model combines analytics data and learner self-report for comprehensive evaluation.
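The linear-decline characterization can be reproduced with an ordinary least-squares fit over the AR samples. This is a generic sketch of the regression step, not the authors' exact analysis:

```python
def fit_retention_decline(retention):
    """Ordinary least-squares line through audience-retention samples
    (percent of the initial audience still watching at each evenly
    spaced time point). Returns (slope per time step, intercept);
    a strongly negative slope indicates steady viewer drop-off."""
    n = len(retention)
    xs = range(n)
    mx, my = (n - 1) / 2, sum(retention) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, retention))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx
```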
A generic flexible and robust approach for intelligent real-time video-surveillance systems
NASA Astrophysics Data System (ADS)
Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit
2004-05-01
In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to help operators by detecting events of interest in visual scenes, highlighting alarms and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE1394) and which can basically compress (MPEG4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised to play back, display, and process video flows in an efficient way for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and an IP network. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology, under the control of a TCP-based command network (e.g. for bandwidth occupation control). We report some results and show the potential use of such a flexible system in third-generation video surveillance systems. We illustrate the interest of the system in a real case study: indoor surveillance.
Fukuchi, Reginaldo K; Duarte, Marcos
2008-11-01
The objective of this study was to compare the three-dimensional lower extremity running kinematics of young adult runners and elderly runners. Seventeen elderly adults (age 67-73 years) and 17 young adults (age 26-36 years) ran at 3.1 m·s(-1) on a treadmill while the movements of the lower extremity during the stance phase were recorded at 120 Hz using three-dimensional video. The three-dimensional kinematics of the lower limb segments and of the ankle and knee joints were determined, and selected variables were calculated to describe the movement. Our results suggest that elderly runners have a different movement pattern of the lower extremity from that of young adults during the stance phase of running. Compared with the young adults, the elderly runners had a substantial decrease in stride length (1.97 vs. 2.23 m; P = 0.01), an increase in stride frequency (1.58 vs. 1.37 Hz; P = 0.002), less knee flexion/extension range of motion (26 vs. 33 degrees; P = 0.002), less tibial internal/external rotation range of motion (9 vs. 12 degrees; P < 0.001), larger external rotation angle of the foot segment (toe-out angle) at the heel strike (-5.8 vs. -1.0 degrees; P = 0.009), and greater asynchronies between the ankle and knee movements during running. These results may help to explain why elderly individuals could be more susceptible to running-related injuries.
SAFE: Stopping AIDS through Functional Education.
ERIC Educational Resources Information Center
Hylton, Judith
This functional curriculum is intended to teach people with developmental disabilities or other learning problems how to prevent infection with HIV/AIDS (Human Immunodeficiency Virus/Acquired Immune Deficiency Syndrome). The entire curriculum includes six video segments, four illustrated brochures, 28 slides and illustrations, as well as a guide…
ERIC Educational Resources Information Center
Rubin, Joan; And Others
This set of materials include an interactive videotape and textbook program (tape not included here) for high-beginning and intermediate English-as-a-Second-Language (ESL) students in or about to enter the workplace. The materials provide instruction in communication skills essential for job success. The 10 video segments and corresponding student…
Science, Mathematics, and the Mimi.
ERIC Educational Resources Information Center
Doblmeier, Joyce; Fields, Barbara
1996-01-01
Students with difficulty in maintaining grade-level performance at the Model Secondary School for the Deaf (Washington, DC) are learning mathematics and science skills using "The Voyage of the Mimi," a 13-segment video series and associated educational materials that detail a scientific expedition which is studying humpback whales. Team…
Video sensor architecture for surveillance applications.
Sánchez, Jordi; Benet, Ginés; Simó, José E
2012-01-01
This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%.
Common Issues in World Regions: A Video Series.
ERIC Educational Resources Information Center
Becker, James
1992-01-01
Describes a video series that offers information on the impact of current world problems on family life. Explains that the programs illustrate the five geographic themes by comparing the experiences of young people in North America and Western Europe. Suggests that the series helps teenagers see how the same problems affect families in different…
Intentional forgetting diminishes memory for continuous events.
Fawcett, Jonathan M; Taylor, Tracy L; Nadel, Lynn
2013-01-01
In a novel event-method directed forgetting task, instructions to Remember (R) or Forget (F) were integrated throughout the presentation of four videos depicting common events (e.g., baking cookies). Participants responded more accurately to cued recall questions (E1) and true/false statements (E2-4) regarding R segments than F segments. This was true even when they were forced to attend to F segments by virtue of having to perform concurrent discrimination (E2) or conceptual segmentation (E3) tasks. The final experiment (E5) demonstrated a larger R > F difference for specific true/false statements ("the woman added three cups of flour") than for general true/false statements ("the woman added flour"), suggesting that participants likely encoded and retained at least a general representation of the events they had intended to forget, even though this representation was not as specific as the representation of events they had intended to remember.
Wen, Nainan; Chia, Stella C; Hao, Xiaoming
2015-01-01
This study examines portrayals of cosmetic surgery on YouTube, where we found a substantial number of cosmetic surgery videos. Most of the videos came from cosmetic surgeons who appeared to be aggressively using social media in their practices. Except for videos that explained cosmetic surgery procedures, most videos in our sample emphasized the benefits of cosmetic surgery, and only a small number of the videos addressed the involved risks. We also found that tactics of persuasive communication, namely message source and message sensation value (MSV), have been used in Web-based social media to attract viewers' attention and interest. Expert sources were used predominantly, although typical-consumer sources tended to generate greater viewer interest in cosmetic surgery than other types of message sources. A high MSV, moreover, was found to increase a video's popularity.
Efficient region-based approach for blotch detection in archived video using texture information
NASA Astrophysics Data System (ADS)
Yous, Hamza; Serir, Amina
2017-03-01
We propose a method for blotch detection in archived videos by modeling their spatiotemporal properties. We introduce an adaptive spatiotemporal segmentation to extract candidate regions that can be classified as blotches. Then, the similarity between the preselected regions and their corresponding motion-compensated regions in the adjacent frames is assessed by means of motion trajectory estimation and textural information analysis. Perceived ground truth based on just noticeable contrast is employed for the evaluation of our approach against the state-of-the-art, and the reported results show a better performance for our approach.
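The core temporal cue behind blotch candidates, an intensity spike relative to both adjacent (ideally motion-compensated) frames, can be sketched per pixel. The threshold value is an assumption of this sketch, not the paper's adaptive criterion:

```python
def blotch_candidates(prev, cur, nxt, tau=40):
    """Flag pixels whose intensity differs strongly, and in the same
    direction, from BOTH the previous and next frames — the classic
    spike-detection cue underlying blotch detectors. Frames are
    equal-sized 2-D lists of intensities; output is a 0/1 mask."""
    return [[int(abs(cur[r][c] - prev[r][c]) > tau and
                 abs(cur[r][c] - nxt[r][c]) > tau and
                 (cur[r][c] - prev[r][c]) * (cur[r][c] - nxt[r][c]) > 0)
             for c in range(len(cur[0]))] for r in range(len(cur))]
```

Grouping the flagged pixels into regions and then checking motion trajectories and texture similarity, as the abstract describes, rejects candidates that are actually fast-moving objects rather than film damage.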
NASA Astrophysics Data System (ADS)
Onley, David; Steinberg, Gary
2004-04-01
The consequences of the Special Theory of Relativity are explored in a virtual world in which the speed of light is only 10 m/s. Ray tracing software and other visualization tools, modified to allow for the finite speed of light, are employed to create a video that brings to life a journey through this imaginary world. The aberration of light, the Doppler effect, the altered perception of time, and the power of incoming radiation are explored in separate segments of this 35-min video. Several of the effects observed are new and quite unexpected. A commentary and animated explanations help keep the viewer from losing all perspective.
A goal bias in action: The boundaries adults perceive in events align with sites of actor intent.
Levine, Dani; Hirsh-Pasek, Kathy; Pace, Amy; Michnick Golinkoff, Roberta
2017-06-01
We live in a dynamic world comprised of continuous events. Remembering our past and predicting future events, however, requires that we segment these ongoing streams of information in a consistent manner. How is this segmentation achieved? This research examines whether the boundaries adults perceive in events, such as the Olympic figure skating routine used in these studies, align with the beginnings (sources) and endings (goals) of human goal-directed actions. Study 1 showed that a group of experts, given an explicit task with unlimited time to rewatch the event, identified the same subevents as one another, but with greater agreement as to the timing of goals than sources. In Study 2, experts, novices familiarized with the figure skating sequence, and unfamiliarized novices performed an online event segmentation task, marking boundaries as the video progressed in real time. The online boundaries of all groups corresponded with the sources and goals offered by Study 1's experts, with greater alignment of goals than sources. Additionally, expertise, but not mere perceptual familiarity, boosted the alignment of sources and goals. Finally, Study 3, which presented novices with the video played in reverse, indicated, unexpectedly, that even when spatiotemporal cues were disrupted, viewers' perceived event boundaries still aligned with their perception of the actors' intended sources and goals. This research extends the goal bias to event segmentation, and suggests that our spontaneous sensitivity toward goals may allow us to transform even relatively complex and unfamiliar event streams into structured and meaningful representations. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Perioperative outcomes of video- and robot-assisted segmentectomies.
Rinieri, Philippe; Peillon, Christophe; Salaün, Mathieu; Mahieu, Julien; Bubenheim, Michael; Baste, Jean-Marc
2016-02-01
Video-assisted thoracic surgery appears to be technically difficult for segmentectomy. Conversely, robotic surgery could facilitate the performance of segmentectomy. The aim of this study was to compare the early results of video- and robot-assisted segmentectomies. Data were collected prospectively on videothoracoscopy from 2010 and on robotic procedures from 2013. Fifty-one patients who were candidates for minimally invasive segmentectomy were included in the study. Perioperative outcomes of video-assisted and robotic segmentectomies were compared. The minimally invasive segmentectomies included 32 video- and 16 robot-assisted procedures; 3 segmentectomies (2 video-assisted and 1 robot-assisted) were converted to lobectomies. Four conversions to thoracotomy were necessary for anatomical reasons or arterial injury, with no uncontrolled bleeding in the robotic arm. There were 7 benign or infectious lesions, 9 pre-invasive lesions, 25 lung cancers, and 10 metastatic diseases. Patient characteristics, type of segment, conversion to thoracotomy, conversion to lobectomy, operative time, postoperative complications, chest tube duration, postoperative stay, and histology were similar in the video and robot groups. Estimated blood loss was significantly higher in the video group (100 vs. 50 mL, p = 0.028). The morbidity rate of minimally invasive segmentectomy was low. The short-term results of video-assisted and robot-assisted segmentectomies were similar, and more data are required to demonstrate any advantage of one technique over the other. Long-term oncologic outcomes are necessary to evaluate these new surgical practices. © The Author(s) 2016.
Automated Visual Event Detection, Tracking, and Data Management System for Cabled- Observatory Video
NASA Astrophysics Data System (ADS)
Edgington, D. R.; Cline, D. E.; Schlining, B.; Raymond, E.
2008-12-01
Ocean observatories and underwater video surveys have the potential to unlock important discoveries with new and existing camera systems. Yet the burden of video management and analysis often requires reducing the amount of video recorded through time-lapse video or similar methods. It is unknown how many digitized video data sets exist in the oceanographic community, but we suspect that many remain underanalyzed due to a lack of good tools or human resources to analyze the video. To help address this problem, the Automated Visual Event Detection (AVED) software and the Video Annotation and Reference System (VARS) have been under development at MBARI. For detecting interesting events in the video, the AVED software has been developed over the last 5 years. AVED is based on a neuromorphic selective attention algorithm, modeled on the human vision system. Frames are decomposed into specific feature maps that are combined into a unique saliency map. This saliency map is then scanned to determine the most salient locations. The candidate salient locations are then segmented from the scene using algorithms suitable for the low, non-uniform light and marine snow typical of deep underwater video. For managing the AVED descriptions of the video, the VARS system provides an interface and database for describing, viewing, and cataloging the video. VARS was developed by MBARI for annotating deep-sea video data and is currently being used to describe over 3000 dives by our remotely operated vehicles (ROVs), making it well suited to this deepwater observatory application with only a few modifications. To meet the compute- and data-intensive job of video processing, a distributed heterogeneous network of computers is managed using the Condor workload management system. This system manages data storage, video transcoding, and AVED processing.
Looking to the future, we see high-speed networks and Grid technology as important elements in addressing the problem of processing and accessing large video data sets.
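The saliency stage described above (feature maps combined into a saliency map, which is then scanned for the most salient locations) can be sketched in miniature. A single intensity center-surround contrast stands in for the full multi-scale, multi-feature decomposition; the `box_blur` helper, the radii, and the synthetic frame are all illustrative choices, not AVED's actual code.

```python
import numpy as np

def box_blur(img, r):
    """Mean filter of radius r via an integral image (pure NumPy)."""
    img = img.astype(float)
    pad = np.pad(img, r + 1, mode='edge')
    ii = pad.cumsum(0).cumsum(1)
    k = 2 * r + 1
    h, w = img.shape
    s = (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
         - ii[k:k + h, :w] + ii[:h, :w])
    return s / (k * k)

def saliency_map(frame, center_r=1, surround_r=7):
    """Center-surround intensity contrast: a minimal stand-in for the
    multi-scale feature maps combined in selective-attention models."""
    center = box_blur(frame, center_r)
    surround = box_blur(frame, surround_r)
    return np.abs(center - surround)

# Synthetic frame: dark background with one bright object.
frame = np.zeros((40, 40))
frame[20:24, 28:32] = 1.0
sal = saliency_map(frame)
y, x = np.unravel_index(np.argmax(sal), sal.shape)
print(y, x)  # most salient location falls on the bright object
```

Scanning the map for its maximum yields the first candidate event location; a full system would inhibit that region and rescan to find the next one.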
Using video playbacks to study visual communication in a marine fish, Salaria pavo.
Gonçalves; Oliveira; Körner; Poschadel; Schlupp
2000-09-01
Video playbacks have been successfully applied to the study of visual communication in several groups of animals. However, this technique is controversial as video monitors are designed with the human visual system in mind. Differences between the visual capabilities of humans and other animals will lead to perceptually different interpretations of video images. We simultaneously presented males and females of the peacock blenny, Salaria pavo, with a live conspecific male and an online video image of the same individual. Video images failed to elicit appropriate responses. Males were aggressive towards the live male but not towards video images of the same male. Similarly, females courted only the live male and spent more time near this stimulus. In contrast, females of the gynogenetic poeciliid Poecilia formosa showed an equal preference for a live and a video image of a P. mexicana male, suggesting responses to video images as strong as those to live animals. We discuss differences between the species that may explain their opposite reactions to video images. Copyright 2000 The Association for the Study of Animal Behaviour.
Perspective Taking Promotes Action Understanding and Learning
ERIC Educational Resources Information Center
Lozano, Sandra C.; Martin Hard, Bridgette; Tversky, Barbara
2006-01-01
People often learn actions by watching others. The authors propose and test the hypothesis that perspective taking promotes encoding a hierarchical representation of an actor's goals and subgoals-a key process for observational learning. Observers segmented videos of an object assembly task into coarse and fine action units. They described what…
ERIC Educational Resources Information Center
Zlotlow, Susan F.; Allen, George J.
1981-01-01
Assessed the validity of examining the influence of counselors' physical attractiveness via observation of videotapes. Reactions to audio-only and video-only videotape segments were compared with in vivo contact. In vivo contact yielded more positive impressions than videotape observations. Technical skill was more predictive of counselor…
Mechanisms in cardiovascular diseases: how useful are medical textbooks, eMedicine, and YouTube?
Azer, Samy A
2014-06-01
The aim of this study was to assess the contents of medical textbooks, eMedicine (Medscape) topics, and YouTube videos on cardiovascular mechanisms. Medical textbooks, eMedicine articles, and YouTube were searched for cardiovascular mechanisms. Using appraisal forms, copies of these resources and videos were evaluated independently by three assessors. Most textbooks were brief in explaining mechanisms. Although the overall average percentage committed to cardiovascular mechanisms in physiology textbooks (n = 7) was 16.1% and in pathology textbooks (n = 4) was 17.5%, there was less emphasis on mechanisms in most internal medicine textbooks (n = 6), with a total average of 6.9%. In addition, flow diagrams explaining mechanisms were lacking. However, eMedicine topics (n = 48) discussed mechanisms adequately in 22.9% (11 of 48) of topics, and the percentage of content allocated to cardiovascular mechanisms was higher (15.8%, 46.2 of 292) compared with that of any internal medicine textbook. Only 29 YouTube videos fulfilled the inclusion criteria. Of these, 16 YouTube videos were educationally useful, scoring 14.1 ± 0.5 (mean ± SD). The remaining 13 videos were not educationally useful, scoring 6.1 ± 1.7. The concordance between the assessors on applying the criteria, measured by κ score, was in the range of 0.55–0.96. In conclusion, despite the importance of mechanisms, most textbooks and YouTube videos were deficient in cardiovascular mechanisms. eMedicine topics discussed cardiovascular mechanisms for some diseases, but there were no flow diagrams or multimedia explaining mechanisms. These deficiencies in learning resources could add to the challenges faced by students in understanding cardiovascular mechanisms. PMID:25039083
Physical activity patterns across time-segmented youth sport flag football practice.
Schlechter, Chelsey R; Guagliano, Justin M; Rosenkranz, Richard R; Milliken, George A; Dzewaltowski, David A
2018-02-08
Youth sport (YS) reaches a large number of children worldwide and contributes substantially to children's daily physical activity (PA), yet less than half of YS time has been shown to be spent in moderate-to-vigorous physical activity (MVPA). Physical activity during practice is likely to vary depending on practice structure, which changes across YS time. Therefore, the purposes of this study were 1) to describe the type and frequency of time segments, defined by contextual characteristics of practice structure, during YS practices and 2) to determine the influence of these segments on PA. Research assistants video-recorded the full duration of 28 practices from 14 boys' flag football teams (2 practices/team) while children concurrently (N = 111, aged 5-11 years, mean 7.9 ± 1.2 years) wore ActiGraph GT1M accelerometers to measure PA. Observers divided videos of each practice into continuous context time segments (N = 204; mean segments per practice = 7.3, SD = 2.5) using start/stop points defined by changes in context characteristics, and assigned a value for task (e.g., management, gameplay, etc.), member arrangement (e.g., small group, whole group, etc.), and setting demand (i.e., fosters participation, fosters exclusion). Segments were then paired with accelerometer data. Data were analyzed using a multilevel model with segment as the unit of analysis. Whole practices averaged 34 ± 2.4% of time spent in MVPA. Free-play (51.5 ± 5.5%), gameplay (53.6 ± 3.7%), and warm-up (53.9 ± 3.6%) segments had a greater percentage of time (%time) in MVPA compared to fitness (36.8 ± 4.4%) segments (p ≤ .01). Greater %time was spent in MVPA during free-play segments compared to scrimmage (30.2 ± 4.6%), strategy (30.6 ± 3.2%), and sport-skill (31.6 ± 3.1%) segments (p ≤ .01), and in segments that fostered participation (36.1 ± 2.7%) than segments that fostered exclusion (29.1 ± 3.0%; p ≤ .01).
Significantly greater %time was spent in low-energy stationary behavior in fitness (15.7 ± 3.4%) than gameplay (4.0 ± 2.9%) segments (p ≤ .01), and in sport-skill (17.6 ± 2.2%) than free-play (8.2 ± 4.2%), gameplay, and warm-up (10.6 ± 2.6%) segments (p < .05). The %time spent in low-energy stationary behavior and in MVPA differed by characteristics of task and setting demand of the segment. Restructuring the routine of YS practice to include segments conducive to MVPA could increase %time spent in MVPA during practice. As YS reaches a large number of children worldwide, increasing PA during YS has the potential to create a public health impact.
How does a woodpecker work? An impact dynamics approach
NASA Astrophysics Data System (ADS)
Liu, Yuzhe; Qiu, Xinming; Yu, Tongxi; Tao, Jiawei; Cheng, Ze
2015-04-01
To understand how a woodpecker is able to accelerate its head to such a high velocity in a short amount of time, a multi-rigid-segment model of a woodpecker's body is established in this study. Based on the skeletal specimen of the woodpecker and several videos of woodpeckers pecking, the parameters of a three-degree-of-freedom system are determined. The high velocity of the head is found to be the result of a whipping effect, which could be affected by muscle torque and tendon stiffness. The mechanism of whipping is analyzed by comparing the response of a hinged rod to that of a rigid rod. Depending on the parameters, the dynamic behavior of a hinged rod is classified into three response modes. Of these, a high free-end velocity could be achieved in mode II. The model is then generalized to a multihinge condition, and the free-end velocity is found to increase with hinge number, which explains the high free-end velocity resulting from whipping. Furthermore, the effects of some other factors, such as damping and mass distribution, on the velocity are also discussed.
Pedestrian detection based on redundant wavelet transform
NASA Astrophysics Data System (ADS)
Huang, Lin; Ji, Liping; Hu, Ping; Yang, Tiejun
2016-10-01
Intelligent video surveillance analyzes video or image sequences captured by a fixed or mobile surveillance camera, including moving object detection, segmentation, and recognition, so that an abnormal situation can be reported immediately. Pedestrian detection plays an important role in an intelligent video surveillance system, and it is also a key technology in the field of intelligent vehicles, so it has vital significance in traffic management optimization, security early warning, and abnormal behavior detection. Generally, pedestrian detection can be summarized in three steps: first, estimate moving areas; then, extract features from regions of interest; finally, classify them using a classifier. The redundant wavelet transform (RWT) overcomes the shift variance of the discrete wavelet transform and performs better in motion estimation. Addressing the problem of detecting multiple pedestrians moving at different speeds, we present a pedestrian detection algorithm based on motion estimation using RWT, combining histograms of oriented gradients (HOG) and a support vector machine (SVM). First, three intensities of movement (IoM) are estimated using RWT and the corresponding areas are segmented. According to the different IoM, a region proposal (RP) is generated. Then, the features of an RP are extracted using HOG. Finally, the features are fed into an SVM trained on pedestrian databases and the final detection results are obtained. Experiments show that the proposed algorithm can detect pedestrians accurately and efficiently.
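The feature-then-classify stage of the pipeline above can be sketched with deliberately simplified stand-ins: a single global orientation histogram in place of full HOG (which uses many cells plus block normalisation) and a nearest-centroid rule in place of a trained SVM. The stripe-pattern training data and every parameter below are synthetic illustrations, not the paper's setup.

```python
import numpy as np

def orientation_histogram(patch, n_bins=9):
    """Simplified HOG-style descriptor: one global histogram of gradient
    orientations, weighted by gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)               # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (np.linalg.norm(hist) + 1e-9)

def train_centroids(pos, neg):
    """Nearest-centroid stand-in for the SVM training stage."""
    return (np.mean([orientation_histogram(p) for p in pos], axis=0),
            np.mean([orientation_histogram(n) for n in neg], axis=0))

def classify(patch, c_pos, c_neg):
    """True when the patch's descriptor is closer to the positive centroid."""
    h = orientation_histogram(patch)
    return np.linalg.norm(h - c_pos) < np.linalg.norm(h - c_neg)

# Toy data: "pedestrians" are vertical stripes, "background" horizontal ones.
vert = [np.tile([0., 0., 1., 1.] * 4, (16, 1)) for _ in range(4)]
horz = [np.tile([[0.], [0.], [1.], [1.]] * 4, (1, 16)) for _ in range(4)]
c_pos, c_neg = train_centroids(vert, horz)
print(classify(vert[0], c_pos, c_neg))   # True
print(classify(horz[0], c_pos, c_neg))   # False
```

Vertical stripes concentrate gradient energy in the horizontal-gradient bin and horizontal stripes in the vertical one, so even this single-histogram descriptor separates the two classes; real HOG gains its discriminative power by localising such histograms in a dense cell grid.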
An algorithm for calculi segmentation on ureteroscopic images.
Rosa, Benoît; Mozer, Pierre; Szewczyk, Jérôme
2011-03-01
The purpose of the study is to develop an algorithm for the segmentation of renal calculi on ureteroscopic images. Renal calculi are a common source of urological obstruction, and laser lithotripsy during ureteroscopy is a possible therapy. A laser-based system to sweep the calculus surface and vaporize it was developed to automate a very tedious manual task. The distal tip of the ureteroscope is directed using image guidance, and this operation is not possible without an efficient segmentation of renal calculi on the ureteroscopic images. We proposed and developed a region growing algorithm to segment renal calculi on ureteroscopic images. Using real video images, we computed ground truth and compared our segmentation against a reference segmentation using several image metrics, such as precision, recall, and the Yasnoff measure. The algorithm and its parameters were established for the most likely clinical scenarios. The segmentation results are encouraging: the developed algorithm was able to correctly detect more than 90% of the surface of the calculi, according to an expert observer. Implementation of an algorithm for the segmentation of calculi on ureteroscopic images is feasible. The next step is the integration of our algorithm into the command scheme of a motorized system to build a complete operating prototype.
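A minimal region-growing routine of the kind the abstract describes might look like the sketch below; the 4-connectivity, the running-mean acceptance criterion, and the tolerance value are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=20):
    """Grow a region from `seed` by adding 4-connected neighbours whose
    intensity lies within `tol` of the running region mean (BFS order)."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    total, count = float(img[seed]), 1
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(img[ny, nx]) - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(img[ny, nx])
                    count += 1
                    queue.append((ny, nx))
    return mask

# Synthetic frame: a bright "calculus" on a darker background.
img = np.full((32, 32), 40, dtype=np.uint8)
img[10:20, 12:22] = 200
mask = region_grow(img, seed=(14, 16), tol=30)
print(mask.sum())  # 100 pixels: exactly the bright square
```

In the clinical setting the seed would come from the image-guidance system (e.g., the current laser aim point), and the tolerance would be tuned to the endoscope's illumination conditions.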
Robotic Arm Comprising Two Bending Segments
NASA Technical Reports Server (NTRS)
Mehling, Joshua S.; Diftler, Myron A.; Ambrose, Robert O.; Chu, Mars W.; Valvo, Michael C.
2010-01-01
The figure shows several aspects of an experimental robotic manipulator that includes a housing from which protrudes a tendril- or tentacle-like arm 1 cm thick and 1 m long. The arm consists of two collinear segments, each of which can be bent independently of the other, and the two segments can be bent simultaneously in different planes. The arm can be retracted to a minimum length or extended by any desired amount up to its full length. The arm can also be made to rotate about its own longitudinal axis. Some prior experimental robotic manipulators include single-segment bendable arms. Those arms are thicker and shorter than the present one. The present robotic manipulator serves as a prototype of future manipulators that, by virtue of the slenderness and multiple-bending capability of their arms, are expected to have sufficient dexterity for operation within spaces that would otherwise be inaccessible. Such manipulators could be especially well suited as means of minimally invasive inspection during construction and maintenance activities. Each of the two collinear bending arm segments is further subdivided into a series of collinear extension- and compression-type helical springs joined by threaded links. The extension springs occupy the majority of the length of the arm and engage passively in bending. The compression springs are used for actively controlled bending. Bending is effected by means of pairs of antagonistic tendons in the form of Spectra gel-spun polymer lines that are attached at specific threaded links and run the entire length of the arm inside the spring helix from the attachment links to motor-driven pulleys inside the housing. Two pairs of tendons, mounted in orthogonal planes that intersect along the longitudinal axis, are used to effect bending of each segment.
The tendons for actuating the distal bending segment are in planes offset by an angle of 45° from those of the proximal bending segment: this configuration makes it possible to accommodate all eight tendons at the same diameter along the arm. The threaded links have central bores through which power and video wires can be strung (1) from a charge-coupled-device camera mounted on the tip of the arm (2) back along the interior of the arm into the housing and then (3) from within the housing to an external video monitor.
Assessment of Fall Characteristics From Depth Sensor Videos.
O'Connor, Jennifer J; Phillips, Lorraine J; Folarinde, Bunmi; Alexander, Gregory L; Rantz, Marilyn
2017-07-01
Falls are a major source of death and disability in older adults; little data, however, are available about the etiology of falls in community-dwelling older adults. Sensor systems installed in independent and assisted living residences of 105 older adults participating in an ongoing technology study were programmed to record live videos of probable fall events. Sixty-four fall video segments from 19 individuals were viewed and rated using the Falls Video Assessment Questionnaire. Raters identified that 56% (n = 36) of falls were due to an incorrect shift of body weight and 27% (n = 17) from losing support of an external object, such as an unlocked wheelchair or rolling walker. In 60% of falls, mobility aids were in the room or in use at the time of the fall. Use of environmentally embedded sensors provides a mechanism for real-time fall detection and, ultimately, may supply information to clinicians for fall prevention interventions. [Journal of Gerontological Nursing, 43(7), 13-19.]. Copyright 2017, SLACK Incorporated.
Audio-based queries for video retrieval over Java enabled mobile devices
NASA Astrophysics Data System (ADS)
Ahmad, Iftikhar; Cheikh, Faouzi Alaya; Kiranyaz, Serkan; Gabbouj, Moncef
2006-02-01
In this paper we propose a generic framework for efficient retrieval of audiovisual media based on its audio content. This framework is implemented in a client-server architecture where the client application is developed in Java to be platform independent, whereas the server application is implemented for the PC platform. The client application adapts to the characteristics of the mobile device where it runs, such as screen size and commands. The entire framework is designed to take advantage of the high-level segmentation and classification of audio content to improve the speed and accuracy of audio-based media retrieval. Therefore, the primary objective of this framework is to provide an adaptive basis for performing efficient video retrieval operations based on the audio content and types (i.e., speech, music, fuzzy, and silence). Experimental results confirm that such an audio-based video retrieval scheme can be used from mobile devices to search and retrieve video clips efficiently over wireless networks.
A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos
Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian
2016-01-01
Objective Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today’s keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users’ information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. Materials and Methods The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for our experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. Results The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos. 
Conclusion Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable information, as well as intuitively and conveniently preview essential content of a single or a collection of videos. PMID:26335986
L'Uso dei Materiali Video nei Test Linguistici (The Use of Video Materials in Language Tests).
ERIC Educational Resources Information Center
Diadori, Pierangela
1995-01-01
This article argues that a communicative language course must have communicative exams. It explains how to choose and use material to test students' listening comprehension and socio-cultural knowledge. Transcripts of a commercial, a talk show, a film, a TV news show, and a documentary are included accompanied by exercises. (CFM)
ERIC Educational Resources Information Center
Raaijmakers, Steven F.; Baars, Martine; Schaap, Lydia; Paas, Fred; van Merriënboer, Jeroen; van Gog, Tamara
2018-01-01
Self-assessment and task-selection skills are crucial in self-regulated learning situations in which students can choose their own tasks. Prior research suggested that training with video modeling examples, in which another person (the model) demonstrates and explains the cyclical process of problem-solving task performance, self-assessment, and…
Unraveling "Braid": Puzzle Games and Storytelling in the Imperative Mood
ERIC Educational Resources Information Center
Arnott, Luke
2012-01-01
"Unraveling Braid" analyzes how unconventional, non-linear narrative fiction can help explain the ways in which video games signify. Specifically, this essay looks at the links between the semiotic features of Jonathan Blow's 2008 puzzle-platform video game Braid and similar elements in Georges Perec's 1978 novel "Life A User's Manual," as well as…
What We Are Learning about How the Brain Learns-Implications for the Use of Video in the Classroom.
ERIC Educational Resources Information Center
Davidson, Tom; McKenzie, Barbara K.
2000-01-01
Describes empirical research in the fields of neurology and cognitive science that is being conducted to determine how and why the brain learns. Explains ways that video is compatible with how the brain learns and suggests it should be used more extensively by teachers and library media specialists. (LRW)
ERIC Educational Resources Information Center
Forbes, Cory; Lange, Kim; Möller, Kornelia; Biggers, Mandy; Laux, Mira; Zangori, Laura
2014-01-01
To help explain the differences in students' performance on internationally administered science assessments, cross-national, video-based observational studies have been advocated, but none have yet been conducted at the elementary level for science. The USA and Germany are two countries with large formal education systems whose students…
Study of Temporal Effects on Subjective Video Quality of Experience.
Bampis, Christos George; Zhi Li; Moorthy, Anush Krishna; Katsavounidis, Ioannis; Aaron, Anne; Bovik, Alan Conrad
2017-11-01
HTTP adaptive streaming is being increasingly deployed by network content providers, such as Netflix and YouTube. By dividing video content into data chunks encoded at different bitrates, a client is able to request the appropriate bitrate for the segment to be played next based on the estimated network conditions. However, this can introduce a number of impairments, including compression artifacts and rebuffering events, which can severely impact an end-user's quality of experience (QoE). We have recently created a new video quality database, which simulates a typical video streaming application, using long video sequences and interesting Netflix content. Going beyond previous efforts, the new database contains highly diverse and contemporary content, and it includes the subjective opinions of a sizable number of human subjects regarding the effects on QoE of both rebuffering and compression distortions. We observed that rebuffering is always obvious and unpleasant to subjects, while bitrate changes may be less obvious due to content-related dependencies. Transient bitrate drops were preferable over rebuffering only on low complexity video content, while consistently low bitrates were poorly tolerated. We evaluated different objective video quality assessment algorithms on our database and found that objective video quality models are unreliable for QoE prediction on videos suffering from both rebuffering events and bitrate changes. This implies the need for more general QoE models that take into account objective quality models, rebuffering-aware information, and memory. The publicly available video content as well as metadata for all of the videos in the new database can be found at http://live.ece.utexas.edu/research/LIVE_NFLXStudy/nflx_index.html.
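The client-side rate selection described in the opening sentences can be sketched as a simple rate-based rule: request the highest bitrate that the estimated throughput can sustain with some headroom. Real players use more elaborate throughput estimators and buffer-aware logic, and the bitrate ladder below is only an example.

```python
def choose_bitrate(ladder_kbps, throughput_kbps, safety=0.8):
    """Rate-based ABR sketch: pick the highest ladder rung not exceeding a
    safety fraction of the estimated throughput, falling back to the lowest
    rung (risking lower quality rather than a rebuffer) when nothing fits."""
    budget = safety * throughput_kbps
    feasible = [r for r in ladder_kbps if r <= budget]
    return max(feasible) if feasible else min(ladder_kbps)

ladder = [235, 375, 750, 1750, 3000, 4300]   # illustrative kbps ladder
print(choose_bitrate(ladder, 2500))  # 1750
print(choose_bitrate(ladder, 200))   # 235
```

The study's finding that transient bitrate drops are preferable to rebuffering on low-complexity content is exactly what the fallback branch encodes: when throughput collapses, a conservative controller downswitches instead of stalling.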
2016-01-01
Passive content fingerprinting is widely used for video content identification and monitoring. However, many challenges remain unsolved, especially in partial-copy detection. The main challenge is to find the right balance between the computational cost of fingerprint extraction and the fingerprint dimension, without compromising detection performance against various attacks (robustness). Fast video detection is desirable in several modern applications, for instance, those where detection involves large video databases or requires real-time identification of partial copies, a task whose difficulty increases when videos suffer severe transformations. In this context, conventional fingerprinting methods are not fully suitable to cope with such attacks and transformations, either because their robustness is insufficient or because their execution time is too high, with the bottleneck commonly found in the fingerprint extraction and matching operations. Motivated by these issues, in this work we propose a content fingerprinting method based on the extraction of a set of independent binary global and local fingerprints. Although these features are robust against common video transformations, their combination is more discriminant against severe transformations such as signal-processing attacks, geometric transformations, and temporal and spatial desynchronization. Additionally, we use an efficient multilevel filtering system that accelerates fingerprint extraction and matching. This multilevel filtering system rapidly identifies potentially similar video copies, and only on these is the full fingerprint comparison carried out, thus saving computational time. We tested the method on datasets of real copied videos, and the results show that it outperforms state-of-the-art methods in detection scores.
Furthermore, the granularity of our method makes it suitable for partial-copy detection; that is, it can process short segments of only 1 second in length. PMID:27861492
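The two-level matching idea is generic enough to sketch: a cheap comparison of compact global fingerprints prunes the database, and only the surviving candidates undergo the costlier local-fingerprint comparison. The sketch below illustrates that filtering scheme under assumed Hamming-distance thresholds; it is not the authors' implementation, and all names and threshold values are hypothetical.

```python
# Illustrative two-level fingerprint matching (assumption-laden sketch):
# level 1 filters the database with a cheap global-fingerprint comparison,
# level 2 runs the finer local-fingerprint comparison on survivors only.

def hamming(a: int, b: int, bits: int = 64) -> int:
    """Number of differing bits between two binary fingerprints."""
    return bin((a ^ b) & ((1 << bits) - 1)).count("1")

def match_query(query_global, query_locals, database,
                global_thresh=10, local_thresh=12):
    """database: list of (video_id, global_fp, [local_fps])."""
    candidates = [
        (vid, locals_)
        for vid, global_fp, locals_ in database
        if hamming(query_global, global_fp) <= global_thresh  # level-1 filter
    ]
    matches = []
    for vid, locals_ in candidates:  # level 2: fine local comparison
        best = min(hamming(q, l) for q in query_locals for l in locals_)
        if best <= local_thresh:
            matches.append((vid, best))
    return sorted(matches, key=lambda m: m[1])
```

Because the fingerprints are binary, the level-1 filter costs a single XOR and popcount per database entry, which is what makes the coarse pass cheap.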
Satellite switched FDMA advanced communication technology satellite program
NASA Technical Reports Server (NTRS)
Atwood, S.; Higton, G. H.; Wood, K.; Kline, A.; Furiga, A.; Rausch, M.; Jan, Y.
1982-01-01
The satellite-switched frequency division multiple access system provided a detailed system architecture that supports point-to-point communication of long-haul voice, video, and data traffic between small Earth terminals at Ka-band frequencies (30/20 GHz). A detailed system design is presented for the space segment, the small terminal/trunking segment, and the network control segment for domestic traffic model A or B, each totaling 3.8 Gb/s of small terminal traffic and 6.2 Gb/s of trunk traffic. The small terminal traffic (3.8 Gb/s) is emphasized in the satellite router portion of the system design; it is a composite of thousands of Earth stations with digital traffic ranging from a single 32 kb/s CVSD voice channel to thousands of channels carrying voice, video, and data at rates as high as 33 Mb/s. The system design concept presented effectively optimizes a unique frequency and channelization plan for both traffic models A and B with minimum reorganization of the satellite payload transponder subsystem hardware. The unique zoning concept allows multiple-beam antennas while maximizing multiple-carrier frequency reuse. Detailed hardware design estimates for an FDMA router (part of the satellite transponder subsystem) indicate a weight and dc power budget of 353 lb and 195 W for traffic model A and 498 lb and 244 W for traffic model B.
Statistical modelling of subdiffusive dynamics in the cytoplasm of living cells: A FARIMA approach
NASA Astrophysics Data System (ADS)
Burnecki, K.; Muszkieta, M.; Sikora, G.; Weron, A.
2012-04-01
Golding and Cox (Phys. Rev. Lett., 96 (2006) 098102) tracked the motion of individual fluorescently labelled mRNA molecules inside live E. coli cells. They found that the motion in a set of 23 trajectories from 3 different experiments is subdiffusive, and they published an intriguing microscopy video. Here, we extract the corresponding time series from this video using an image segmentation method and present its detailed statistical analysis. We find that this trajectory was not included in the data set already studied and has different statistical properties. It is best fitted by a fractional autoregressive integrated moving average (FARIMA) process with normal-inverse Gaussian (NIG) noise and negative memory. In contrast to earlier studies, this shows that fractional Brownian motion is not the best model for the dynamics documented in this video.
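A standard first step in this kind of single-trajectory analysis, before any FARIMA fitting, is to estimate the anomalous diffusion exponent from the time-averaged mean squared displacement, MSD(Δ) ∝ Δ^α, where α < 1 indicates subdiffusion. A minimal sketch of that step (illustrative only, not the authors' pipeline):

```python
import math

def tamsd(x, lag):
    """Time-averaged mean squared displacement of a 1-D trajectory at one lag."""
    n = len(x)
    return sum((x[i + lag] - x[i]) ** 2 for i in range(n - lag)) / (n - lag)

def fit_alpha(lags, msds):
    """Least-squares slope of log(MSD) vs. log(lag); alpha < 1 => subdiffusion."""
    lx = [math.log(l) for l in lags]
    ly = [math.log(m) for m in msds]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den
```

In practice one would compute `tamsd` over a range of lags for the extracted trajectory and feed the results to `fit_alpha`; the FARIMA/NIG modelling then characterizes the noise and memory structure beyond this single exponent.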
de Chantal, Marilyn; Diodati, Jean G; Nasmith, James B; Amyot, Robert; LeBlanc, A Robert; Schampaert, Erick; Pharand, Chantal
2006-12-01
ST-segment depression is commonly seen in patients with acute coronary syndromes. Most authors have attributed it to transient reductions in coronary blood flow due to nonocclusive thrombus formation on a disrupted atherosclerotic plaque and dynamic focal vasospasm at the site of coronary artery stenosis. However, ST-segment depression was never reproduced in classic animal models of coronary stenosis without the presence of tachycardia. We hypothesized that ST-segment depression occurring during acute coronary syndromes is not entirely explained by changes in epicardial coronary artery resistance and thus evaluated the effect of a slow, progressive epicardial coronary artery occlusion on the ECG and regional myocardial blood flow in anesthetized pigs. Slow, progressive occlusion over 72 min (SD 27) of the left anterior descending coronary artery in 20 anesthetized pigs led to a 90% decrease in coronary blood flow and the development of ST-segment elevation associated with homogeneous and transmural myocardial blood flow reductions, confirmed by microspheres and myocardial contrast echocardiography. ST-segment depression was not observed in any ECG lead before the development of ST-segment elevation. At normal heart rates, progressive epicardial stenosis of a coronary artery results in myocardial ischemia associated with homogeneous, transmural reduction in regional myocardial blood flow and ST-segment elevation, without preceding ST-segment depression. Thus, in coronary syndromes with ST-segment depression and predominant subendocardial ischemia, factors other than mere increases in epicardial coronary resistance must be invoked to explain the heterogeneous parietal distribution of flow and associated ECG changes.
Co-occurrence statistics as a language-dependent cue for speech segmentation.
Saksida, Amanda; Langus, Alan; Nespor, Marina
2017-05-01
To what extent can language acquisition be explained in terms of different associative learning mechanisms? It has been hypothesized that distributional regularities in spoken languages are strong enough to elicit statistical learning about dependencies among speech units. Distributional regularities could be a useful cue for word learning even without rich language-specific knowledge. However, it is not clear how strong and reliable the distributional cues are that humans might use to segment speech. We investigate the cross-linguistic viability of different statistical learning strategies by analyzing child-directed speech corpora from nine languages and by modeling possible statistics-based speech segmentations. We show that languages vary as to which statistical segmentation strategies are most successful. The variability of the results can be partially explained by systematic differences between languages, such as rhythmical differences. The results confirm previous findings that different statistical learning strategies are successful in different languages and suggest that infants may have to rely primarily on non-statistical cues when they begin the process of speech segmentation. © 2016 John Wiley & Sons Ltd.
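One of the classic statistics-based strategies evaluated in this line of work is to place word boundaries at local minima of the forward transitional probability, TP(x→y) = count(xy)/count(x). A minimal sketch of that strategy (an illustration of the general idea, not the paper's exact models):

```python
from collections import Counter

def segment_by_tp(syllables):
    """Insert word boundaries at local minima of forward transitional
    probability TP(x -> y) = count(xy) / count(x)."""
    pairs = Counter(zip(syllables, syllables[1:]))
    unigrams = Counter(syllables)
    tp = [pairs[(a, b)] / unigrams[a]
          for a, b in zip(syllables, syllables[1:])]
    words, current = [], [syllables[0]]
    for i in range(1, len(syllables)):
        # boundary where the TP into syllable i dips below both neighbours
        prev_tp = tp[i - 1]
        left = tp[i - 2] if i >= 2 else float("inf")
        right = tp[i] if i < len(tp) else float("inf")
        if prev_tp < left and prev_tp < right:
            words.append(current)
            current = []
        current.append(syllables[i])
    words.append(current)
    return words
```

Running such a segmenter over child-directed corpora in different languages, and scoring the recovered words against the true lexicon, is how the cross-linguistic success of each strategy can be compared.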
Gena, Angeliki; Couloura, Sophia; Kymissis, Effie
2005-10-01
The purpose of this study was to modify the affective behavior of three preschoolers with autism in home settings and in the context of play activities, and to compare the effects of video modeling to the effects of in-vivo modeling in teaching these children contextually appropriate affective responses. A multiple-baseline design across subjects, with a return to baseline condition, was used to assess the effects of treatment that consisted of reinforcement, video modeling, in-vivo modeling, and prompting. During training trials, reinforcement in the form of verbal praise and tokens was delivered contingent upon appropriate affective responding. Error correction procedures differed for each treatment condition. In the in-vivo modeling condition, the therapist used modeling and verbal prompting. In the video modeling condition, video segments of a peer modeling the correct response and verbal prompting by the therapist were used as corrective procedures. Participants received treatment in three categories of affective behavior--sympathy, appreciation, and disapproval--and were presented with a total of 140 different scenarios. The study demonstrated that both treatments--video modeling and in-vivo modeling--systematically increased appropriate affective responding in all response categories for the three participants. Additionally, treatment effects generalized across responses to untrained scenarios, the child's mother, new therapists, and time.
Action Spotting and Recognition Based on a Spatiotemporal Orientation Analysis.
Derpanis, Konstantinos G; Sizintsev, Mikhail; Cannons, Kevin J; Wildes, Richard P
2013-03-01
This paper provides a unified framework for the interrelated topics of action spotting, the spatiotemporal detection and localization of human actions in video, and action recognition, the classification of a given video into one of several predefined categories. A novel compact local descriptor of video dynamics in the context of action spotting and recognition is introduced based on visual spacetime oriented energy measurements. This descriptor is efficiently computed directly from raw image intensity data and thereby forgoes the problems typically associated with flow-based features. Importantly, the descriptor allows for the comparison of the underlying dynamics of two spacetime video segments irrespective of spatial appearance, such as differences induced by clothing, and with robustness to clutter. An associated similarity measure is introduced that admits efficient exhaustive search for an action template, derived from a single exemplar video, across candidate video sequences. The general approach presented for action spotting and recognition is amenable to efficient implementation, which is deemed critical for many important applications. For action spotting, details of a real-time GPU-based instantiation of the proposed approach are provided. Empirical evaluation of both action spotting and action recognition on challenging datasets suggests the efficacy of the proposed approach, with state-of-the-art performance documented on standard datasets.
Foreign Language Students' Conversational Negotiations in Different Task Environments
ERIC Educational Resources Information Center
Hardy, Ilonca M.; Moore, Joyce L.
2004-01-01
This study examined the effect of structural and content characteristics of language tasks on foreign language learners' conversational negotiations. In a 2x2 Greco-Latin square design, degree of structural support of language tasks, students' degree of familiarity with German video segments, and task order were varied. Twenty-eight pairs of…
Hubble Identifies Source of Ultraviolet Light in an Old Galaxy
NASA Technical Reports Server (NTRS)
2000-01-01
This videotape comprises four segments: (1) a video zoom-in on galaxy M32 using ground-based images, (2) Hubble images of galaxy M32, (3) a ground-based color image of galaxies M31 and M32, and (4) black-and-white ground-based images of galaxy M32.
Automatic Online Lecture Highlighting Based on Multimedia Analysis
ERIC Educational Resources Information Center
Che, Xiaoyin; Yang, Haojin; Meinel, Christoph
2018-01-01
Textbook highlighting is widely considered to be beneficial for students. In this paper, we propose a comprehensive solution to highlight the online lecture videos in both sentence- and segment-level, just as is done with paper books. The solution is based on automatic analysis of multimedia lecture materials, such as speeches, transcripts, and…
Affect Response to Simulated Information Attack during Complex Task Performance
2014-12-02
…situational awareness, affect, and trait characteristics interact with human performance during cyberspace attacks in the physical and information… Operator state was manipulated using emotional stimulation portrayed through the presentation of video segments. The effect of emotions on…
Faces of Homelessness: A Teacher's Guide.
ERIC Educational Resources Information Center
Massachusetts State Dept. of Education, Quincy.
A brief teacher's guide supplements a videotape of two 15-minute segments on homelessness. The stated objective of the video is to cover the issues of homelessness as they exist today and to dispel the stereotypes of homelessness leftover from earlier eras. A family which has found itself homeless is introduced and then aspects of the phenomenon…
MILE Curriculum [and Nine CD-ROM Lessons].
ERIC Educational Resources Information Center
Reiman, John
This curriculum on money management skills for deaf adolescent and young adult students is presented on nine video CD-ROMs as well as in a print version. The curriculum was developed following a survey of the needs of school and rehabilitation programs. It was also piloted and subsequently revised. Each teaching segment is presented in sign…
ERIC Educational Resources Information Center
Jones, Rachel; Hall, Sara White; Thigpen, Kamila; Murray, Tom; Loschert, Kristen
2015-01-01
This report demonstrates how one predominantly low-income school district dramatically improved student engagement in the classroom and increased high school graduation rates through project-based learning (PBL) and the effective use of technology. The report, which includes short video segments with educators and students, focuses on Talladega…
Zhang, Lei; Zeng, Zhi; Ji, Qiang
2011-09-01
Chain graph (CG) is a hybrid probabilistic graphical model (PGM) capable of modeling heterogeneous relationships among random variables. So far, however, its application in image and video analysis is very limited due to lack of principled learning and inference methods for a CG of general topology. To overcome this limitation, we introduce methods to extend the conventional chain-like CG model to CG model with more general topology and the associated methods for learning and inference in such a general CG model. Specifically, we propose techniques to systematically construct a generally structured CG, to parameterize this model, to derive its joint probability distribution, to perform joint parameter learning, and to perform probabilistic inference in this model. To demonstrate the utility of such an extended CG, we apply it to two challenging image and video analysis problems: human activity recognition and image segmentation. The experimental results show improved performance of the extended CG model over the conventional directed or undirected PGMs. This study demonstrates the promise of the extended CG for effective modeling and inference of complex real-world problems.
Resolving occlusion and segmentation errors in multiple video object tracking
NASA Astrophysics Data System (ADS)
Cheng, Hsu-Yung; Hwang, Jenq-Neng
2009-02-01
In this work, we propose a method that integrates the Kalman filter and adaptive particle sampling for multiple video object tracking. The proposed framework is able to detect occlusion and segmentation error cases and to perform adaptive particle sampling for accurate measurement selection. Compared with traditional particle filter based tracking methods, the proposed method generates particles only when necessary. With the concept of adaptive particle sampling, we can avoid the degeneracy problem because the sampling position and range are dynamically determined by parameters that are updated by Kalman filters, and no time is spent processing particles with very small weights. The adaptive appearance model for the occluded object refers to the prediction results of the Kalman filters to determine the region that should be updated, avoiding the problem of using inadequate information to update the appearance under occlusion. The experimental results show that a small number of particles is sufficient to achieve high positioning and scaling accuracy, and that the adaptive appearance model substantially improves the positioning and scaling accuracy of the tracking results.
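The interplay the abstract describes, with Kalman prediction supplying the position and spread for particle sampling, can be sketched with a minimal 1-D constant-velocity Kalman filter. This is an illustrative reduction, not the authors' tracker; the state model, class name, and noise values are our assumptions.

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman filter. In a tracker of the kind
# described, the predicted state and its covariance would set the centre
# and range of adaptive particle sampling at each frame.

class Kalman1D:
    def __init__(self, q=1e-3, r=1.0):
        self.x = np.zeros(2)            # state: [position, velocity]
        self.P = np.eye(2) * 100.0      # state covariance (uncertain start)
        self.F = np.array([[1.0, 1.0], [0.0, 1.0]])  # constant-velocity model
        self.H = np.array([[1.0, 0.0]])               # we measure position only
        self.Q = np.eye(2) * q          # process noise
        self.R = np.array([[r]])        # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x, self.P           # sampling centre and spread

    def update(self, z):
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
```

When the filter's predicted covariance is small, few particles suffice (or none are needed); when occlusion inflates the uncertainty, the sampling range widens accordingly.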
Lip reading using neural networks
NASA Astrophysics Data System (ADS)
Kalbande, Dhananjay; Mishra, Akassh A.; Patil, Sanjivani; Nirgudkar, Sneha; Patel, Prashant
2011-10-01
Computerized lip reading, or speech reading, is concerned with the difficult task of converting a video signal of a speaking person to written text. It has several applications, such as helping people who are deaf or hard of hearing to speak and communicate effectively with others, its crime-fighting potential, and its invariance to the acoustic environment. We convert video of the subject speaking vowels into images, which are then selected manually for processing. However, several factors such as fast speech, poor pronunciation, poor illumination, movement of the face, and moustaches and beards make lip reading difficult. Contour tracking methods and template matching are used to extract the lips from the face. The k-nearest neighbor (KNN) algorithm is then used to classify the 'speaking' images and the 'silent' images. The sequence of images is then transformed into segments of utterances. A feature vector is calculated for each frame in all the segments and stored in a database with a properly labeled class. Character recognition is performed using a modified KNN algorithm that assigns more weight to nearer neighbors. This paper reports the recognition of vowels using KNN algorithms.
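The "modified KNN" described, which assigns more weight to nearer neighbors, is commonly implemented as inverse-distance-weighted voting. A small sketch under that assumption (feature extraction omitted; the function and parameter names are ours, not the paper's):

```python
import math
from collections import defaultdict

def weighted_knn(train, query, k=3):
    """Distance-weighted k-NN: each of the k nearest neighbours votes with
    weight 1/(distance + eps), so nearer neighbours count more."""
    eps = 1e-9
    nearest = sorted(
        (math.dist(x, query), label) for x, label in train
    )[:k]
    votes = defaultdict(float)
    for d, label in nearest:
        votes[label] += 1.0 / (d + eps)
    return max(votes, key=votes.get)
```

With plain (unweighted) k-NN, a query surrounded by one very close neighbour and two distant ones of another class would be outvoted; the inverse-distance weights prevent that.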
Off- and Along-Axis Slow Spreading Ridge Segment Characters: Insights From 3d Thermal Modeling
NASA Astrophysics Data System (ADS)
Gac, S.; Tisseau, C.; Dyment, J.
2001-12-01
Many observations along Mid-Atlantic Ridge segments suggest a correlation between surface characters (length, axial morphology) and the thermal state of the segment. Thibaud et al. (1998) classify segments according to their thermal state: "colder" segments shorter than 30 km show weak magmatic activity, while "hotter" segments as long as 90 km show robust magmatic activity. The existence of such a correlation suggests that the thermal structure of a slow spreading ridge segment explains most of the surface observations. Here we test the physical coherence of such an integrated thermal model and evaluate it quantitatively. The different kinds of segment would constitute different phases in a segment's evolution, the segment evolving progressively from a "colder" to a "hotter" and back to a "colder" state. We test the consistency of such an evolution scheme. To test these hypotheses we have developed a 3D numerical model of the thermal structure and evolution of a slow spreading ridge segment. The thermal structure is controlled by the geometry and dimensions of a permanently hot zone, imposed beneath the segment center, where the adiabatic ascent of magmatic material is simulated. To compare the model with observations, several geophysical quantities that depend on the thermal state are simulated: crustal thickness variations along axis, gravity anomalies (reflecting density variations), and maximum earthquake depth (corresponding to the depth of the 750 °C isotherm). The thermal structure of a particular segment is constrained by comparing the simulated quantities to the observed ones. Considering realistic magnetization parameters, the magnetic anomalies generated from the same thermal structure and evolution reproduce the observed variations in magnetic anomaly amplitude along the segment. The thermal structures accounting for the observations are determined for each kind of segment (from "colder" to "hotter").
The evolution of the thermal structure from the "colder" to the "hotter" segments gives credence to a temporal relationship between the different kinds of segment. The resulting thermal evolution model of slow spreading ridge segments may explain the rhombic shapes observed off-axis.
Chan, Linda; Mackintosh, Jeannie
2017-01-01
Background The National Collaborating Centre for Methods and Tools (NCCMT) offers workshops and webinars to build public health capacity for evidence-informed decision-making. Despite positive feedback for NCCMT workshops and resources, NCCMT users found key terms used in research papers difficult to understand. The Understanding Research Evidence (URE) videos use plain language, cartoon visuals, and public health examples to explain complex research concepts. The videos are posted on the NCCMT website and YouTube channel. Objective The first four videos in the URE web-based video series, which explained odds ratios (ORs), confidence intervals (CIs), clinical significance, and forest plots, were evaluated. The evaluation examined how the videos affected public health professionals’ practice. A mixed-methods approach was used to examine the delivery mode and the content of the videos. Specifically, the evaluation explored (1) whether the videos were effective at increasing knowledge on the four video topics, (2) whether public health professionals were satisfied with the videos, and (3) how public health professionals applied the knowledge gained from the videos in their work. Methods A three-part evaluation was conducted to determine the effectiveness of the first four URE videos. The evaluation included a Web-based survey, telephone interviews, and pretest and posttests, which evaluated public health professionals’ experience with the videos and how the videos affected their public health work. Participants were invited to participate in this evaluation through various open access, public health email lists, through informational flyers and posters at the Canadian Public Health Association (CPHA) conference, and through targeted recruitment to NCCMT’s network. 
Results In the Web-based surveys (n=46), participants achieved higher scores on the knowledge assessment questions from watching the OR (P=.04), CI (P=.04), and clinical significance (P=.05) videos but not the forest plot (P=.12) video, as compared with participants who had not watched the videos. The pretest and posttest (n=124) demonstrated that participants had a better understanding of forest plots (P<.001) and CIs (P<.001) after watching the videos. Due to small sample size numbers, there were insufficient pretest and posttest data to conduct meaningful analyses on the clinical significance and OR videos. Telephone interview participants (n=18) thought the videos’ use of animation, narration, and plain language was appropriate for people with different levels of understanding and learning styles. Participants felt that by increasing their understanding of research evidence, they could develop better interventions and design evaluations to measure the impact of public health initiatives. Conclusions Overall, the results of the evaluation showed that watching the videos resulted in an increase in knowledge, and participants had an overall positive experience with the URE videos. With increased competence in using the best available evidence, professionals are empowered to contribute to decisions that can improve health outcomes of communities. PMID:28958986
[Epilepsy and videogame: which physiopathological mechanisms to expect?].
Masnou, P; Nahum-Moscovoci, L
1999-04-01
Video games may induce epileptic seizures in some subjects, most of whom have photosensitive epilepsy. The triggering factors are multiple: characteristics of the software, effects of the electronic screen, and interactivity. The wide diffusion of video games explains the large number of reports of videogame-induced seizures. Historical aspects and an analysis of the underlying mechanisms of videogame-induced seizures are presented.
ERIC Educational Resources Information Center
Pratt, Sharon M.; Martin, Anita M.
2017-01-01
This pilot study explored two methods of eliciting beginning readers' verbalizations of their thinking when self-monitoring oral reading: video-stimulated recall and concurrent questioning. First and second graders (N = 11) were asked to explain their thinking about repetitions, attempts to self-correct, and successful self-corrects, in order to…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-10
... specifying the permissible scope and conduct of monitoring; and Be organized and carry out its business in a...-12 III. Review Log of Proposal: Log 1 24 CFR 3285--Alternative Foundation System Testing. Log 80 24...-fhydelarge.html ; Video explaining ASTM D6007: http://www.ntainc.com/video-fhyde.html . Log 81 24 CFR 3280...
Application of Core Science Concepts Using Digital Video: A "Hands-On" Laptop Approach
ERIC Educational Resources Information Center
Jarvinen, Michael Keith; Jarvinen, Lamis Zaher; Sheehan, Danielle N.
2012-01-01
Today's undergraduates are highly engaged in a variety of social media outlets. Given their comfort with technology, we wondered if we could use this phenomenon to teach science-related material. We asked students to use freeware to make a short video with text, images, and music as a way to explain scientific concepts that are traditionally…
ERIC Educational Resources Information Center
Mitchell, Peter; Parsons, Sarah; Leonard, Anne
2007-01-01
Six teenagers with Autistic Spectrum Disorders (ASDs) experienced a Virtual Environment (VE) of a cafe. They also watched three sets of videos of real cafe and buses and judged where they would sit and explained why. Half of the participants received their VE experience between the first and second sets of videos, and half experienced it between…
ERIC Educational Resources Information Center
Leopold, Marjorie
This program is a self-guided professional development experience that explains how to use multiple intelligences (MI) theory to improve teaching, learning, and achievement in elementary classrooms and schools. The program consists of one manual and six VHS videos, each of which corresponds to one of the six modules listed in the table of…
Modelling audiovisual integration of affect from videos and music.
Gao, Chuanji; Wedell, Douglas H; Kim, Jongwan; Weber, Christine E; Shinkareva, Svetlana V
2018-05-01
Two experiments examined how affective values from visual and auditory modalities are integrated. Experiment 1 paired music and videos drawn from three levels of valence while holding arousal constant. Experiment 2 included a parallel combination of three levels of arousal while holding valence constant. In each experiment, participants rated their affective states after unimodal and multimodal presentations. Experiment 1 revealed a congruency effect in which stimulus combinations of the same extreme valence resulted in more extreme state ratings than component stimuli presented in isolation. An interaction between music and video valence reflected the greater influence of negative affect. Video valence was found to have a significantly greater effect on combined ratings than music valence. The pattern of data was explained by a five parameter differential weight averaging model that attributed greater weight to the visual modality and increased weight with decreasing values of valence. Experiment 2 revealed a congruency effect only for high arousal combinations and no interaction effects. This pattern was explained by a three parameter constant weight averaging model with greater weight for the auditory modality and a very low arousal value for the initial state. These results demonstrate key differences in audiovisual integration between valence and arousal.
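The averaging models referenced here share a simple functional form: the combined rating is a weighted mean of an initial state and the two modality values, with weights allowed to vary with valence in the differential-weight variant. The sketch below illustrates that form; all parameter values are invented for illustration and are not the fitted five-parameter model.

```python
# Sketch of a differential-weight averaging model of audiovisual affect.
# Weights, the initial state, and neg_boost are illustrative assumptions.

def averaged_rating(video_val, music_val,
                    w_video=2.0, w_music=1.0,
                    initial_state=0.0, w_initial=0.5,
                    neg_boost=0.5):
    """Combined rating = weighted mean of initial state, video valence, and
    music valence; modality weights grow as that component becomes more
    negative (neg_boost scales the increase)."""
    wv = w_video + neg_boost * max(0.0, -video_val)
    wm = w_music + neg_boost * max(0.0, -music_val)
    total = w_initial + wv + wm
    return (w_initial * initial_state
            + wv * video_val + wm * music_val) / total
```

With w_video > w_music the visual modality dominates, and the negative-valence weight boost reproduces the greater influence of negative affect reported in Experiment 1.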
Managing Complex Medication Regimens.
Harvath, Theresa A; Lindauer, Allison; Sexson, Kathryn
2016-11-01
This article is the first in a series, Supporting Family Caregivers: No Longer Home Alone, published in collaboration with the AARP Foundation. Results of focus groups conducted as part of the AARP Foundation's No Longer Home Alone video project supported evidence that family caregivers aren't being given the information they need to manage the complex care regimens of their family members. This series of articles and accompanying videos aims to help nurses provide caregivers with the tools they need to manage their family member's medications. Each article explains the principles nurses should consider and reinforce with caregivers and is accompanied by a video for the caregiver to watch. The first video can be accessed at http://links.lww.com/AJN/A74.
A Conceptual Characterization of Online Videos Explaining Natural Selection
NASA Astrophysics Data System (ADS)
Bohlin, Gustav; Göransson, Andreas; Höst, Gunnar E.; Tibell, Lena A. E.
2017-11-01
Educational videos on the Internet comprise a vast and highly diverse source of information. Online search engines facilitate access to numerous videos claiming to explain natural selection, but little is known about the degree to which the video content match key evolutionary content identified as important in evolution education research. In this study, we therefore analyzed the content of 60 videos accessed through the Internet, using a criteria catalog with 38 operationalized variables derived from research literature. The variables were sorted into four categories: (a) key concepts (e.g. limited resources and inherited variation), (b) threshold concepts (abstract concepts with a transforming and integrative function), (c) misconceptions (e.g. that evolution is driven by need), and (d) organismal context (e.g. animal or plant). The results indicate that some concepts are frequently communicated, and certain taxa are commonly used to illustrate concepts, while others are seldom included. In addition, evolutionary phenomena at small temporal and spatial scales, such as subcellular processes, are rarely covered. Rather, the focus is on population-level events over time scales spanning years or longer. This is consistent with an observed lack of explanations regarding how randomly occurring mutations provide the basis for variation (and thus natural selection). The findings imply, among other things, that some components of natural selection warrant far more attention in biology teaching and science education research.
Focused Assessment with Sonography for Trauma in weightlessness: a feasibility study
NASA Technical Reports Server (NTRS)
Kirkpatrick, Andrew W.; Hamilton, Douglas R.; Nicolaou, Savvas; Sargsyan, Ashot E.; Campbell, Mark R.; Feiveson, Alan; Dulchavsky, Scott A.; Melton, Shannon; Beck, George; Dawson, David L.
2003-01-01
BACKGROUND: The Focused Assessment with Sonography for Trauma (FAST) examines for fluid in gravitationally dependent regions. There is no prior experience with this technique in weightlessness, such as on the International Space Station, where sonography is currently the only diagnostic imaging tool. STUDY DESIGN: A ground-based (1 g) porcine model for sonography was developed. We examined both the feasibility and the comparative performance of the FAST examination in parabolic flight. Sonographic detection and fluid behavior were evaluated in four animals during alternating weightlessness (0 g) and hypergravity (1.8 g) periods. During flight, boluses of fluid were incrementally introduced into the peritoneal cavity. Standardized sonographic windows were recorded. Postflight, the video recordings were divided into 169 20-second segments for subsequent interpretation by 12 blinded ultrasonography experts. Reviewers first decided whether a video segment was of sufficient diagnostic quality to analyze (determinate). Determinate segments were then analyzed as containing or not containing fluid. A probit regression model compared the probability of a positive fluid diagnosis to actual fluid levels (0 to 500 mL) under both 0-g and 1.8-g conditions. RESULTS: The in-flight sonographers found real-time scanning and interpretation technically similar to that of terrestrial conditions, as long as restraint was maintained. On blinded review, 80% of the recorded ultrasound segments were considered determinate. The best sensitivity for diagnosis in 0 g was found to be from the subhepatic space, with probability of a positive fluid diagnosis ranging from 9% (no fluid) to 51% (500 mL fluid). CONCLUSIONS: The FAST examination is technically feasible in weightlessness, and merits operational consideration for clinical contingencies in space.
Video segmentation for post-production
NASA Astrophysics Data System (ADS)
Wills, Ciaran
2001-12-01
Specialist post-production is an industry that has much to gain from the application of content-based video analysis techniques. However, the types of material handled in specialist post-production, such as television commercials, pop music videos, and special effects, are quite different in nature from the typical broadcast material that many video analysis techniques are designed to work with: shots are short and highly dynamic, and the transitions are often novel or ambiguous. We address the problem of scene change detection and develop a new algorithm that tackles some of the common aspects of post-production material that cause difficulties for past algorithms, such as illumination changes and jump cuts. Operating in the compressed domain on Motion JPEG video, our algorithm detects cuts and fades by analyzing each JPEG macroblock in the context of its temporal and spatial neighbors. By analyzing the DCT coefficients directly, we can extract the mean color of a block and an approximate detail level, and we can also perform an approximate cross-correlation between two blocks. The algorithm is part of a set of tools being developed to work with an automated asset management system designed specifically for use in post-production facilities.
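The key compressed-domain fact such methods rely on is that the DC coefficient of each 8×8 DCT block is proportional to the block's mean intensity, so a cut appears as a large frame-to-frame jump in the DC map. A much-simplified sketch of cut detection on such DC maps (the threshold, function names, and data layout are our assumptions, not the paper's algorithm):

```python
# Illustrative compressed-domain cut detection: compare per-block mean
# intensities (derived from DCT DC coefficients) between adjacent frames.

def detect_cuts(dc_maps, thresh=30.0):
    """dc_maps: per-frame lists of block mean intensities, one entry per
    8x8 block. Returns indices i where a cut is detected between frame
    i-1 and frame i (mean absolute DC difference exceeds thresh)."""
    cuts = []
    for i in range(1, len(dc_maps)):
        prev, cur = dc_maps[i - 1], dc_maps[i]
        mad = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        if mad > thresh:
            cuts.append(i)
    return cuts
```

Fades would need more than this single-difference test (e.g. a sustained monotonic DC trend across several frames), which is where neighborhood analysis of the kind the abstract describes comes in.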
Video bioinformatics analysis of human embryonic stem cell colony growth.
Lin, Sabrina; Fonteno, Shawn; Satish, Shruthi; Bhanu, Bir; Talbot, Prue
2010-05-20
Because video data are complex and comprise many images, mining information from video material is difficult to do without the aid of computer software. Video bioinformatics is a powerful quantitative approach for extracting spatio-temporal data from video images using computer software to perform data mining and analysis. In this article, we introduce a video bioinformatics method for quantifying the growth of human embryonic stem cells (hESC) by analyzing time-lapse videos collected in a Nikon BioStation CT incubator equipped with a camera for video imaging. In our experiments, hESC colonies that were attached to Matrigel were filmed for 48 hours in the BioStation CT. To determine the rate of growth of these colonies, recipes were developed using CL-Quant software, which enables users to extract various types of data from video images. To accurately evaluate colony growth, three recipes were created. The first segmented the image into the colony and background, the second enhanced the image to define colonies accurately throughout the video sequence, and the third measured the number of pixels in the colony over time. The three recipes were run in sequence on video data collected in a BioStation CT to analyze the rate of growth of individual hESC colonies over 48 hours. To verify the accuracy of the CL-Quant recipes, the same data were analyzed manually using Adobe Photoshop software. When the data obtained using the CL-Quant recipes and Photoshop were compared, the results were virtually identical, indicating the CL-Quant recipes were accurate. The method described here could be applied to any video data to measure growth rates of hESC or other cells that grow in colonies. In addition, other video bioinformatics recipes can be developed in the future for other cell processes such as migration, apoptosis, and cell adhesion.
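The pixel-counting step of the third recipe might be approximated as below, with a plain intensity threshold standing in for the proprietary CL-Quant segmentation and enhancement recipes:

```python
import numpy as np

def colony_area(frame, threshold):
    """Segment colony from background by intensity threshold; return pixel count."""
    return int((frame > threshold).sum())

def growth_rate(areas, hours_per_frame):
    """Average change in colony area (pixels/hour) across a time-lapse sequence."""
    areas = np.asarray(areas, dtype=float)
    return float((areas[-1] - areas[0]) / (hours_per_frame * (len(areas) - 1)))
```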
Rice, Sean C; Higginbotham, Tina; Dean, Melanie J; Slaughter, James C; Yachimski, Patrick S; Obstein, Keith L
2016-11-01
Successful outpatient colonoscopy (CLS) depends on many factors including the quality of a patient's bowel preparation. Although education on consumption of the pre-CLS purgative can improve bowel preparation quality, no study has evaluated dietary education alone. We have created an educational video on pre-CLS dietary instructions to determine whether dietary education would improve outpatient bowel preparation quality. A prospective randomized, blinded, controlled study of patients undergoing outpatient CLS was performed. All patients received a 4 L polyethylene glycol-based split-dose bowel preparation and standard institutional pre-procedure instructions. Patients were then randomly assigned to an intervention arm or to a no intervention arm. A 4-min educational video detailing clear liquid diet restriction was made available to patients in the intervention arm, whereas those randomized to no intervention did not have access to the video. Patients randomized to the video were provided with the YouTube video link 48-72 h before CLS. An attending endoscopist blinded to randomization performed the CLS. Bowel preparation quality was scored using the Boston Bowel Preparation Scale (BBPS). Adequate preparation was defined as a BBPS total score of ≥6 with all segment scores ≥2. Wilcoxon rank-sum and Pearson's χ²-tests were performed to assess differences between groups. Ninety-two patients were randomized (video: n=42; control: n=50) with 47 total video views being tallied. There were no demographic differences between groups. There was no statistically significant difference in adequate preparation between groups (video=74%; control=68%; P=0.54). The availability of a supplementary patient educational video on clear liquid diet alone was insufficient to improve bowel preparation quality when compared with standard pre-procedure instruction at our institution.
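The Pearson χ² comparison of adequacy rates between arms reduces to a test on a 2×2 contingency table. A minimal sketch (the counts below are illustrative, not the trial's data):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    without continuity correction."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den
```

With 1 degree of freedom, the statistic is compared against the χ² distribution to obtain the reported P value.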
Scalable gastroscopic video summarization via similar-inhibition dictionary selection.
Wang, Shuai; Cong, Yang; Cao, Jun; Yang, Yunsheng; Tang, Yandong; Zhao, Huaici; Yu, Haibin
2016-01-01
This paper aims at developing an automated gastroscopic video summarization algorithm to assist clinicians to more effectively go through the abnormal contents of the video. To select the most representative frames from the original video sequence, we formulate the problem of gastroscopic video summarization as a dictionary selection issue. Different from the traditional dictionary selection methods, which take into account only the number and reconstruction ability of selected key frames, our model introduces the similar-inhibition constraint to reinforce the diversity of selected key frames. We calculate the attention cost by merging both gaze and content change into a prior cue to help select the frames with more high-level semantic information. Moreover, we adopt an image quality evaluation process to eliminate the interference of the poor quality images and a segmentation process to reduce the computational complexity. For experiments, we build a new gastroscopic video dataset captured from 30 volunteers with more than 400k images and compare our method with state-of-the-art methods using the content consistency, index consistency and content-index consistency with the ground truth. Compared with all competitors, our method obtains the best results in 23 of 30 videos evaluated based on content consistency, 24 of 30 videos evaluated based on index consistency and all videos evaluated based on content-index consistency. For gastroscopic video summarization, we propose an automated annotation method via similar-inhibition dictionary selection. Our model can achieve better performance compared with other state-of-the-art models and supplies more suitable key frames for diagnosis. The developed algorithm can be automatically adapted to various real applications, such as the training of young clinicians, computer-aided diagnosis or medical report generation. Copyright © 2015 Elsevier B.V. All rights reserved.
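A greedy stand-in for the similar-inhibition idea can illustrate the diversity constraint: repeatedly pick the frame most representative of the whole video, but skip candidates too similar to frames already chosen. The paper's actual model solves a dictionary-selection optimization; this is only a sketch of the constraint's effect:

```python
import numpy as np

def select_keyframes(features, k, inhibit=0.9):
    """Greedy key-frame selection with a similarity-inhibition constraint.
    features: (n_frames, dim) array; frames with cosine similarity above
    `inhibit` to any already-selected frame are rejected."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T                      # pairwise cosine similarities
    scores = sim.sum(axis=1)           # representativeness of each frame
    chosen = []
    for idx in np.argsort(-scores):    # most representative first
        if all(sim[idx, j] <= inhibit for j in chosen):
            chosen.append(int(idx))
        if len(chosen) == k:
            break
    return chosen
```

On a toy video with two near-duplicate frames and one distinct frame, asking for two key frames returns one from each group rather than the two duplicates.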
Estimating Physical Activity Energy Expenditure with the Kinect Sensor in an Exergaming Environment
Nathan, David; Huynh, Du Q.; Rubenson, Jonas; Rosenberg, Michael
2015-01-01
Active video games that require physical exertion during game play have been shown to confer health benefits. Typically, energy expended during game play is measured using devices attached to players, such as accelerometers, or portable gas analyzers. Since 2010, active video gaming technology has incorporated marker-less motion capture devices to simulate human movement into game play. Using the Kinect Sensor and Microsoft SDK, this research aimed to estimate the mechanical work performed by the human body and estimate subsequent metabolic energy using predictive algorithmic models. Nineteen university students participated in a repeated measures experiment performing four fundamental movements (arm swings, standing jumps, body-weight squats, and jumping jacks). Metabolic energy was captured using a Cortex Metamax 3B automated gas analysis system, with mechanical movement captured by the combined motion data from two Kinect cameras. Estimations of the body segment properties, such as segment mass, length, centre of mass position, and radius of gyration, were calculated from the Zatsiorsky-Seluyanov equations of de Leva, with adjustment made for posture cost. A GPML toolbox implementation of Gaussian Process Regression, a locally weighted k-Nearest Neighbour regression, and a linear regression technique were evaluated for their performance on predicting the metabolic cost from new feature vectors. The experimental results show that Gaussian Process Regression outperformed the other two techniques by a small margin. This study demonstrated that physical activity energy expenditure during exercise, using the Kinect camera as a motion capture system, can be estimated from segmental mechanical work. Estimates for high-energy activities, such as standing jumps and jumping jacks, can be made accurately, but for low-energy activities, such as squatting, the posture of static poses should be considered as a contributing factor.
When translated into the active video gaming environment, the results could be incorporated into game play to more accurately control the energy expenditure requirements. PMID:26000460
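The Gaussian Process Regression step can be sketched with a minimal NumPy posterior-mean computation using an RBF kernel. The study used the GPML toolbox; the hyperparameters here (length scale, noise) are arbitrary placeholders, not the fitted values:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential (RBF) kernel between two sets of feature vectors."""
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d / ls ** 2)

def gp_predict(X, y, Xs, noise=1e-3, ls=1.0):
    """GP posterior mean at test inputs Xs, given training pairs (X, y)."""
    K = rbf(X, X, ls) + noise * np.eye(len(X))
    return rbf(Xs, X, ls) @ np.linalg.solve(K, y)
```

In the study's setting, X would hold feature vectors derived from segmental mechanical work and y the measured metabolic cost.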
Violent video game players and non-players differ on facial emotion recognition.
Diaz, Ruth L; Wong, Ulric; Hodgins, David C; Chiu, Carina G; Goghari, Vina M
2016-01-01
Violent video game playing has been associated with both positive and negative effects on cognition. We examined whether playing two or more hours of violent video games a day, compared to not playing video games, was associated with a different pattern of recognition of five facial emotions, while controlling for general perceptual and cognitive differences that might also occur. Undergraduate students were categorized as violent video game players (n = 83) or non-gamers (n = 69) and completed a facial recognition task, consisting of an emotion recognition condition and a control condition of gender recognition. Additionally, participants completed questionnaires assessing their video game and media consumption, aggression, and mood. Violent video game players recognized fearful faces both more accurately and quickly and disgusted faces less accurately than non-gamers. Desensitization to violence, constant exposure to fear and anxiety during game playing, and the habituation to unpleasant stimuli, are possible mechanisms that could explain these results. Future research should evaluate the effects of violent video game playing on emotion processing and social cognition more broadly. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Liu, Iching; Sun, Ying
1992-10-01
A system for reconstructing 3-D vascular structure from two orthogonally projected images is presented. The formidable problem of matching segments between two views is solved using knowledge of the epipolar constraint and the similarity of segment geometry and connectivity. The knowledge is represented in a rule-based system, which also controls the operation of several computational algorithms for tracking segments in each image, representing 2-D segments with directed graphs, and reconstructing 3-D segments from matching 2-D segment pairs. Uncertain reasoning governs the interaction between segmentation and matching; it also provides a framework for resolving the matching ambiguities in an iterative way. The system was implemented in the C language and the C Language Integrated Production System (CLIPS) expert system shell. Using video images of a tree model, the standard deviation of reconstructed centerlines was estimated to be 0.8 mm (1.7 mm) when the view direction was parallel (perpendicular) to the epipolar plane. Feasibility of clinical use was shown using x-ray angiograms of a human chest phantom. The correspondence of vessel segments between two views was accurate. Computational time for the entire reconstruction process was under 30 s on a workstation. A fully automated system for two-view reconstruction that does not require the a priori knowledge of vascular anatomy is demonstrated.
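The epipolar consistency test at the heart of the segment-matching rules can be sketched generically: a point in one view must lie near the epipolar line induced by its candidate match in the other view. The fundamental matrix below corresponds to a pure x-translation between views, an illustrative assumption rather than the system's actual orthogonal-projection geometry:

```python
import numpy as np

def epipolar_distance(F, p1, p2):
    """Distance from homogeneous point p2 to the epipolar line F @ p1.
    Small distances indicate a geometrically consistent match candidate."""
    l = F @ p1
    return abs(l @ p2) / np.hypot(l[0], l[1])
```

Candidate segment pairs whose endpoints violate this constraint can be pruned before the rule-based system weighs geometry and connectivity similarity.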
ERIC Educational Resources Information Center
Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.
2001-01-01
Presents a market taxonomy for higher education, including what it reveals about the structure of the market, the model's technical attributes, and its capacity to explain pricing behavior. Details the identification of the principal seams separating one market segment from another and how student aspirations help to organize the market, making…
Multiple Vehicle Detection and Segmentation in Malaysia Traffic Flow
NASA Astrophysics Data System (ADS)
Fariz Hasan, Ahmad; Fikri Che Husin, Mohd; Affendi Rosli, Khairul; Norhafiz Hashim, Mohd; Faiz Zainal Abidin, Amar
2018-03-01
Vision-based systems are widely used in the field of Intelligent Transportation Systems (ITS) to extract large amounts of information for analyzing traffic scenes. The rapidly growing number of vehicles on the road, together with the significant increase in cameras, has dictated the need for traffic surveillance systems. Such systems can take over the burdensome tasks previously performed by human operators in traffic monitoring centres. This paper concentrates on developing multiple vehicle detection and segmentation for monitoring through Closed Circuit Television (CCTV) video. The system automatically segments vehicles extracted from heavy traffic scenes using optical flow estimation alongside a blob analysis technique to detect moving vehicles. Prior to segmentation, the blob analysis technique computes the region of interest corresponding to each moving vehicle, which is then used to create a bounding box around that vehicle. Experimental validation of the proposed system was performed, and the algorithm is demonstrated on various sets of traffic scenes.
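The motion-plus-blob pipeline might be sketched as below, with simple frame differencing standing in for the paper's optical flow motion cue and SciPy's connected-component labeling supplying the blob analysis and bounding boxes (assumes SciPy is available):

```python
import numpy as np
from scipy import ndimage

def moving_vehicle_boxes(prev, curr, diff_thresh=25, min_area=4):
    """Detect moving regions between two frames and return bounding boxes
    (row0, col0, row1, col1) for each sufficiently large blob."""
    mask = np.abs(curr.astype(int) - prev.astype(int)) > diff_thresh
    labels, n = ndimage.label(mask)          # 4-connected blob labeling
    boxes = []
    for sl in ndimage.find_objects(labels):
        if mask[sl].sum() >= min_area:       # drop tiny noise blobs
            boxes.append((sl[0].start, sl[1].start, sl[0].stop, sl[1].stop))
    return boxes
```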
A procedure for testing prospective remembering in persons with neurological impairments.
Titov, N; Knight, R G
2000-10-01
A video-based procedure for assessing prospective remembering (PR) in brain-injured clients is described. In this task, a list of instructions is given, each comprising an action (buy a hamburger) and a cue (at McDonalds), which are to be recalled while watching a videotape segment showing the view of a person walking through a shopping area. A group of 12 clients with varying degrees of memory impairment undergoing rehabilitation completed both a video test and a comparable task in real-life. Significant correlations were found between the two measures, indicating that a video-based analogue can be used to estimate prospective remembering in real life. Scores on the PR task were associated with accuracy of recall on a word-list task, but not with the Working Memory Index of the Wechsler Memory Scale-III, suggesting that the task is sensitive to levels of amnesic deficit.
Markerless video analysis for movement quantification in pediatric epilepsy monitoring.
Lu, Haiping; Eng, How-Lung; Mandal, Bappaditya; Chan, Derrick W S; Ng, Yen-Ling
2011-01-01
This paper proposes a markerless video analytic system for quantifying body part movements in pediatric epilepsy monitoring. The system utilizes colored pajamas worn by a patient in bed to extract body part movement trajectories, from which various features can be obtained for seizure detection and analysis. Hence, it is non-intrusive and it requires no sensor/marker to be attached to the patient's body. It takes raw video sequences as input and a simple user-initialization indicates the body parts to be examined. In background/foreground modeling, Gaussian mixture models are employed in conjunction with HSV-based modeling. Body part detection follows a coarse-to-fine paradigm with graph-cut-based segmentation. Finally, body part parameters are estimated with domain knowledge guidance. Experimental studies are reported on sequences captured in an Epilepsy Monitoring Unit at a local hospital. The results demonstrate the feasibility of the proposed system in pediatric epilepsy monitoring and seizure detection.
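The background/foreground modeling step can be illustrated with a per-pixel running Gaussian, a deliberately simplified single-Gaussian stand-in for the paper's Gaussian mixture models (and without the HSV-based modeling):

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel background model: flag pixels whose squared deviation from
    the running mean exceeds k^2 times the running variance."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full_like(self.mean, 50.0)   # initial variance guess
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var          # foreground mask
        # Exponential update of the background statistics.
        self.mean += self.alpha * (frame - self.mean)
        self.var += self.alpha * (d2 - self.var)
        return fg
```

The resulting foreground mask would then feed the coarse-to-fine body part detection with graph-cut segmentation described in the abstract.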
Event completion: event based inferences distort memory in a matter of seconds.
Strickland, Brent; Keil, Frank
2011-12-01
We present novel evidence that implicit causal inferences distort memory for events only seconds after viewing. Adults watched videos of someone launching (or throwing) an object. However, the videos omitted the moment of contact (or release). Subjects falsely reported seeing the moment of contact when it was implied by subsequent footage but did not do so when the contact was not implied. Causal implications were disrupted either by replacing the resulting flight of the ball with irrelevant video or by scrambling event segments. Subjects in the different causal implication conditions did not differ on false alarms for other moments of the event, nor did they differ in general recognition accuracy. These results suggest that as people perceive events, they generate rapid conceptual interpretations that can have a powerful effect on how events are remembered. Copyright © 2011 Elsevier B.V. All rights reserved.
Schou Andreassen, Cecilie; Billieux, Joël; Griffiths, Mark D; Kuss, Daria J; Demetrovics, Zsolt; Mazzoni, Elvis; Pallesen, Ståle
2016-03-01
Over the last decade, research into "addictive technological behaviors" has substantially increased. Research has also demonstrated strong associations between addictive use of technology and comorbid psychiatric disorders. In the present study, 23,533 adults (mean age 35.8 years, ranging from 16 to 88 years) participated in an online cross-sectional survey examining whether demographic variables, symptoms of attention-deficit/hyperactivity disorder (ADHD), obsessive-compulsive disorder (OCD), anxiety, and depression could explain variance in addictive use (i.e., compulsive and excessive use associated with negative outcomes) of two types of modern online technologies: social media and video games. Correlations between symptoms of addictive technology use and mental disorder symptoms were all positive and significant, including the weak interrelationship between the two addictive technological behaviors. Age appeared to be inversely related to the addictive use of these technologies. Being male was significantly associated with addictive use of video games, whereas being female was significantly associated with addictive use of social media. Being single was positively related to both addictive social networking and video gaming. Hierarchical regression analyses showed that demographic factors explained between 11 and 12% of the variance in addictive technology use. The mental health variables explained between 7 and 15% of the variance. The study significantly adds to our understanding of mental health symptoms and their role in addictive use of modern technology, and suggests that the concept of Internet use disorder (i.e., "Internet addiction") as a unified construct is not warranted. (c) 2016 APA, all rights reserved.
ERIC Educational Resources Information Center
Baumann, Chris; Hamin
2011-01-01
A nation's culture, competitiveness and economic performance explain academic performance. Partial Least Squares (PLS) testing of 2252 students shows culture affects competitiveness and academic performance. Culture and economic performance each explain 32%; competitiveness 36%. The model predicts academic performance when culture, competitiveness…
ERIC Educational Resources Information Center
Leopold, Marjorie
This program is a self-guided professional development experience that explains how to use multiple intelligences (MI) theory to improve teaching, learning, and achievement in middle and high school classrooms. The program consists of one manual and six VHS videos, each of which corresponds to one of the six modules listed in the table of…
ERIC Educational Resources Information Center
Huesmann, L. Rowell
2010-01-01
Over the past half century the mass media, including video games, have become important socializers of children. Observational learning theory has evolved into social-cognitive information processing models that explain that what a child observes in any venue has both short-term and long-term influences on the child's behaviors and cognitions. C.…
Chan, Linda; Mackintosh, Jeannie; Dobbins, Maureen
2017-09-28
The National Collaborating Centre for Methods and Tools (NCCMT) offers workshops and webinars to build public health capacity for evidence-informed decision-making. Despite positive feedback for NCCMT workshops and resources, NCCMT users found key terms used in research papers difficult to understand. The Understanding Research Evidence (URE) videos use plain language, cartoon visuals, and public health examples to explain complex research concepts. The videos are posted on the NCCMT website and YouTube channel. The first four videos in the URE web-based video series, which explained odds ratios (ORs), confidence intervals (CIs), clinical significance, and forest plots, were evaluated. The evaluation examined how the videos affected public health professionals' practice. A mixed-methods approach was used to examine the delivery mode and the content of the videos. Specifically, the evaluation explored (1) whether the videos were effective at increasing knowledge on the four video topics, (2) whether public health professionals were satisfied with the videos, and (3) how public health professionals applied the knowledge gained from the videos in their work. A three-part evaluation was conducted to determine the effectiveness of the first four URE videos. The evaluation included a Web-based survey, telephone interviews, and pretest and posttests, which evaluated public health professionals' experience with the videos and how the videos affected their public health work. Participants were invited to participate in this evaluation through various open access, public health email lists, through informational flyers and posters at the Canadian Public Health Association (CPHA) conference, and through targeted recruitment to NCCMT's network. 
In the Web-based surveys (n=46), participants achieved higher scores on the knowledge assessment questions from watching the OR (P=.04), CI (P=.04), and clinical significance (P=.05) videos but not the forest plot (P=.12) video, as compared with participants who had not watched the videos. The pretest and posttest (n=124) demonstrated that participants had a better understanding of forest plots (P<.001) and CIs (P<.001) after watching the videos. Due to small sample size numbers, there were insufficient pretest and posttest data to conduct meaningful analyses on the clinical significance and OR videos. Telephone interview participants (n=18) thought the videos' use of animation, narration, and plain language was appropriate for people with different levels of understanding and learning styles. Participants felt that by increasing their understanding of research evidence, they could develop better interventions and design evaluations to measure the impact of public health initiatives. Overall, the results of the evaluation showed that watching the videos resulted in an increase in knowledge, and participants had an overall positive experience with the URE videos. With increased competence in using the best available evidence, professionals are empowered to contribute to decisions that can improve health outcomes of communities. ©Linda Chan, Jeannie Mackintosh, Maureen Dobbins. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 28.09.2017.
Media Research with a Galvanic Skin Response Biosensor: Some Kids Work Up a Sweat!
ERIC Educational Resources Information Center
Clariana, Roy B.
This study considers the galvanic skin response (GSR) of sixth-grade students (n=20) using print, video, and microcomputer segments. Subjects received all three media treatments, in randomized order. Data for analysis consisted of standardized test scores and GSR measures; a moderate positive relationship was shown between cumulative GSR and…
ERIC Educational Resources Information Center
De La Paz, Susan; Hernandez-Ramos, Pedro; Barron, Linda
2004-01-01
A multimedia CD-ROM program, Mathematics Teaching and Learning in Inclusive Classrooms, was produced to help preservice teachers learn mathematics teaching methods in the context of inclusive classrooms. The contents include text resources, video segments of experts and of classroom lessons, images of student work, an electronic notebook, and a…
Lights, Camera, AG-Tion: Promoting Agricultural and Environmental Education on Camera
ERIC Educational Resources Information Center
Fuhrman, Nicholas E.
2016-01-01
Viewing of online videos and television segments has become a popular and efficient way for Extension audiences to acquire information. This article describes a unique approach to teaching on camera that may help Extension educators communicate their messages with comfort and personality. The S.A.L.A.D. approach emphasizes using relevant teaching…
Art of the Pacific Islands. [CD-ROM].
ERIC Educational Resources Information Center
Pacific Resources for Education and Learning, Honolulu, HI.
Oceanic art has long been recognized for its quality and its influence on Western art. This CD-ROM presents over 100 of the finest examples of art from the Pacific region in the form of museum photos, contemporary video segments, and music. The CD-ROM includes such artifacts as masks and carvings from Melanesia, canoes and storyboards from…
ERIC Educational Resources Information Center
Duffy, Thomas; And Others
This supplementary volume presents appendixes A-E associated with a 1-year study which determined what secondary school students were doing as they engaged in the Chelsea Bank computer software simulation activities. Appendixes present the SCANS Analysis Coding Sheet; coding problem analysis of 50 video segments; student and teacher interview…
Neural dynamics of grouping and segmentation explain properties of visual crowding.
Francis, Gregory; Manassi, Mauro; Herzog, Michael H
2017-07-01
Investigations of visual crowding, where a target is difficult to identify because of flanking elements, have largely used a theoretical perspective based on local interactions where flanking elements pool with or substitute for properties of the target. This successful theoretical approach has motivated a wide variety of empirical investigations to identify mechanisms that cause crowding, and it has suggested practical applications to mitigate crowding effects. However, this theoretical approach has been unable to account for a parallel set of findings that crowding is influenced by long-range perceptual grouping effects. When the target and flankers are perceived as part of separate visual groups, crowding tends to be quite weak. Here, we describe how theoretical mechanisms for grouping and segmentation in cortical neural circuits can account for a wide variety of these long-range grouping effects. Building on previous work, we explain how crowding occurs in the model and how grouping in the model involves connected boundary signals that represent a key aspect of visual information. We then introduce new circuits that allow nonspecific top-down selection signals to flow along connected boundaries or within a surface contained by boundaries and thereby induce a segmentation that can separate the visual information corresponding to the flankers from the visual information corresponding to the target. When such segmentation occurs, crowding is shown to be weak. We compare the model's behavior to 5 sets of experimental findings on visual crowding and show that the model does a good job explaining the key empirical findings. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos.
Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian
2016-04-01
Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today's keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users' information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for our experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. Results: The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos.
Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable information, as well as intuitively and conveniently preview essential content of a single or a collection of videos. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Collaborative real-time motion video analysis by human observer and image exploitation algorithms
NASA Astrophysics Data System (ADS)
Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2015-05-01
Motion video analysis is a challenging task, especially in real-time applications. In most safety and security critical applications, a human observer is an obligatory part of the overall analysis system. Over the last years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into the current video exploitation systems. In this paper, a system design is introduced which strives to combine both the qualities of the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer by means of a user interface which utilizes the human visual focus of attention revealed by the eye gaze direction for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Beside the system design, a first pilot study is presented, where we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy to use interaction technique when performing selection operations on moving targets in videos in order to initialize an object tracking function.
Study of moving object detecting and tracking algorithm for video surveillance system
NASA Astrophysics Data System (ADS)
Wang, Tao; Zhang, Rongfu
2010-10-01
This paper describes a specific process for detecting and tracking moving targets in video surveillance. Obtaining a high-quality background is the key to achieving difference-based target detection. The paper builds a clean background with a block segmentation method and detects moving targets by background differencing; after a series of processing steps, a more complete object can be extracted from the original image and then located with its smallest bounding rectangle. Because camera delay and other factors cause tracking lag in a video surveillance system, a Kalman filter model based on template matching is proposed: using the predictive and estimation capabilities of the Kalman filter, the center of the smallest bounding rectangle serves as the predicted position where the target may appear in the next moment. Template matching is then performed in a region centered at this predicted position, and the best matching center is determined by computing the cross-correlation similarity between the current image and the reference image. Narrowing the search scope in this way reduces the search time and thus achieves fast tracking.
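The two halves of the tracker, a constant-velocity Kalman prediction and normalized cross-correlation template matching within the predicted search region, can be sketched as follows (a generic formulation, not the paper's exact parameterization):

```python
import numpy as np

def kalman_predict(state, P, F, Q):
    """Kalman predict step for state = [x, y, vx, vy] with transition F
    and process noise Q; returns predicted state and covariance."""
    return F @ state, F @ P @ F.T + Q

def match_template(search, template):
    """Best (row, col) match of template in a search window by
    normalized cross-correlation."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best, pos = -np.inf, (0, 0)
    for i in range(search.shape[0] - th + 1):
        for j in range(search.shape[1] - tw + 1):
            w = search[i:i+th, j:j+tw]
            s = (w - w.mean()) / (w.std() + 1e-9)
            score = (t * s).mean()
            if score > best:
                best, pos = score, (i, j)
    return pos
```

The predicted center from `kalman_predict` would define the search window handed to `match_template`, which is what keeps the search region (and the per-frame cost) small.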
Vehicle counting system using real-time video processing
NASA Astrophysics Data System (ADS)
Crisóstomo-Romero, Pedro M.
2006-02-01
Transit studies are important for planning a road network with optimal vehicular flow. A vehicular count is essential. This article presents a vehicle counting system based on video processing. An advantage of such a system is the greater detail it can obtain, such as the shape, size and speed of vehicles. The system uses a video camera placed above the street to image traffic in real time. The video camera must be placed at least 6 meters above the street level to achieve proper acquisition quality. Fast image processing algorithms and small image dimensions are used to allow real-time processing. Digital filters, mathematical morphology, segmentation and other techniques allow identifying and counting all vehicles in the image sequences. The system was implemented under Linux in a 1.8 GHz Pentium 4 computer. A successful count was obtained with frame rates of 15 frames per second for images of size 240x180 pixels and 24 frames per second for images of size 180x120 pixels, thus being able to count vehicles whose speeds do not exceed 150 km/h.
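A minimal sketch of the counting stage along the lines the abstract lists (background difference, morphological opening to suppress noise, connected-component counting). The threshold and structuring-element size are illustrative assumptions, not values from the paper.

```python
import numpy as np

def binary_open(mask, k=3):
    """Morphological opening (erosion then dilation) with a k x k square
    structuring element; removes blobs smaller than the element."""
    pad = k // 2
    def erode(m):
        p = np.pad(m, pad, constant_values=False)
        out = np.ones_like(m, dtype=bool)
        for dy in range(k):
            for dx in range(k):
                out &= p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
        return out
    def dilate(m):
        p = np.pad(m, pad, constant_values=False)
        out = np.zeros_like(m, dtype=bool)
        for dy in range(k):
            for dx in range(k):
                out |= p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
        return out
    return dilate(erode(mask))

def count_vehicles(frame, background, thresh=40):
    """Count foreground blobs: difference against the background image,
    clean up with opening, then count 4-connected components by flood fill."""
    mask = binary_open(np.abs(frame.astype(int) - background.astype(int)) > thresh)
    seen = np.zeros(mask.shape, dtype=bool)
    count = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        count += 1
        stack = [(sy, sx)]
        while stack:
            y, x = stack.pop()
            if not (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]):
                continue
            if not mask[y, x] or seen[y, x]:
                continue
            seen[y, x] = True
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count
```

Each surviving component also yields a bounding box, from which shape, size and (across frames) speed estimates can be derived.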
Audio-video feature correlation: faces and speech
NASA Astrophysics Data System (ADS)
Durand, Gwenael; Montacie, Claude; Caraty, Marie-Jose; Faudemay, Pascal
1999-08-01
This paper presents a study of the correlation of features automatically extracted from the audio stream and the video stream of audiovisual documents. In particular, we were interested in finding out whether speech analysis tools could be combined with face detection methods, and to what extent they should be combined. A generic audio signal partitioning algorithm was first used to detect Silence/Noise/Music/Speech segments in a full-length movie. A generic object detection method was applied to the keyframes extracted from the movie in order to detect the presence or absence of faces. The correlation between the presence of a face in the keyframes and of the corresponding voice in the audio stream was studied. A third stream, which is the script of the movie, is warped onto the speech channel in order to automatically label faces appearing in the keyframes with the name of the corresponding character. We naturally found that extracted audio and video features were related in many cases, and that significant benefits can be obtained from the joint use of audio and video analysis methods.
A unified and efficient framework for court-net sports video analysis using 3D camera modeling
NASA Astrophysics Data System (ADS)
Han, Jungong; de With, Peter H. N.
2007-01-01
The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable to more than one sports type in order to come to a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between the object-level and the scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes, which are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e. players), which can track up to four players simultaneously. The complete system contributes to summarization by various forms of information, of which the most important are the moving trajectory and real speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it for a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that the feature detection performance is above 92% and event detection about 90%.
Bavelier, Daphne; Green, C Shawn; Han, Doug Hyun; Renshaw, Perry F; Merzenich, Michael M; Gentile, Douglas A
2011-11-18
The popular press is replete with stories about the effects of video and computer games on the brain. Sensationalist headlines claiming that video games 'damage the brain' or 'boost brain power' do not do justice to the complexities and limitations of the studies involved, and create a confusing overall picture about the effects of gaming on the brain. Here, six experts in the field shed light on our current understanding of the positive and negative ways in which playing video games can affect cognition and behaviour, and explain how this knowledge can be harnessed for educational and rehabilitation purposes. As research in this area is still in its early days, the contributors of this Viewpoint also discuss several issues and challenges that should be addressed to move the field forward.
Discharge Planning and Teaching.
Sexson, Kathryn; Lindauer, Allison; Harvath, Theresa A
2017-05-01
This article is the fifth in a series, Supporting Family Caregivers: No Longer Home Alone, published in collaboration with the AARP Public Policy Institute. Results of focus groups, conducted as part of the AARP Public Policy Institute's No Longer Home Alone video project, supported evidence that family caregivers aren't being given the information they need to manage the complex care regimens of their family members. This series of articles and accompanying videos aims to help nurses provide caregivers with the tools they need to manage their family member's medications. Each article explains the principles nurses should consider and reinforce with caregivers and is accompanied by a video for the caregiver to watch. The fifth video can be accessed at http://links.lww.com/AJN/A79.
Administration of Subcutaneous Injections.
Sexson, Kathryn; Lindauer, Allison; Harvath, Theresa A
2017-05-01
This article is the second in a series, Supporting Family Caregivers: No Longer Home Alone, published in collaboration with the AARP Public Policy Institute. Results of focus groups conducted as part of the AARP Public Policy Institute's No Longer Home Alone video project supported evidence that family caregivers aren't being given the information they need to manage the complex care regimens of their family members. This series of articles and accompanying videos aims to help nurses provide caregivers with the tools they need to manage their family member's medications. Each article explains the principles nurses should consider and reinforce with caregivers and is accompanied by a video for the caregiver to watch. The second video can be accessed at http://links.lww.com/AJN/A75.
Managing Complex Medication Regimens.
Harvath, Theresa A; Lindauer, Allison; Sexson, Kathryn
2017-05-01
This article is the first in a series, Supporting Family Caregivers: No Longer Home Alone, published in collaboration with the AARP Public Policy Institute. Results of focus groups conducted as part of the AARP Public Policy Institute's No Longer Home Alone video project supported evidence that family caregivers aren't being given the information they need to manage the complex care regimens of their family members. This series of articles and accompanying videos aims to help nurses provide caregivers with the tools they need to manage their family member's medications. Each article explains the principles nurses should consider and reinforce with caregivers and is accompanied by a video for the caregiver to watch. The first video can be accessed at http://links.lww.com/AJN/A74.
Administration of Subcutaneous Injections.
Sexson, Kathryn; Lindauer, Allison; Harvath, Theresa A
2016-12-01
This article is the second in a series, Supporting Family Caregivers: No Longer Home Alone, published in collaboration with the AARP Public Policy Institute. Results of focus groups conducted as part of the AARP Public Policy Institute's No Longer Home Alone video project supported evidence that family caregivers aren't being given the information they need to manage the complex care regimens of their family members. This series of articles and accompanying videos aims to help nurses provide caregivers with the tools they need to manage their family member's medications. Each article explains the principles nurses should consider and reinforce with caregivers and is accompanied by a video for the caregiver to watch. The second video can be accessed at http://links.lww.com/AJN/A75.
Medication Management for People with Dementia.
Lindauer, Allison; Sexson, Kathryn; Harvath, Theresa A
2017-05-01
This article is the fourth in a series, Supporting Family Caregivers: No Longer Home Alone, published in collaboration with the AARP Public Policy Institute. Results of focus groups conducted as part of the AARP Public Policy Institute's No Longer Home Alone video project supported evidence that family caregivers aren't being given the information they need to manage the complex care regimens of their family members. This series of articles and accompanying videos aims to help nurses provide caregivers with the tools they need to manage their family member's medications. Each article explains the principles nurses should consider and reinforce with caregivers and is accompanied by a video for the caregiver to watch. The fourth video can be accessed at http://links.lww.com/AJN/A78.
Medication Management for People with Dementia.
Lindauer, Allison; Sexson, Kathryn; Harvath, Theresa A
2017-02-01
This article is the fourth in a series, Supporting Family Caregivers: No Longer Home Alone, published in collaboration with the AARP Public Policy Institute. Results of focus groups conducted as part of the AARP Public Policy Institute's No Longer Home Alone video project supported evidence that family caregivers aren't being given the information they need to manage the complex care regimens of their family members. This series of articles and accompanying videos aims to help nurses provide caregivers with the tools they need to manage their family member's medications. Each article explains the principles nurses should consider and reinforce with caregivers and is accompanied by a video for the caregiver to watch. The fourth video can be accessed at http://links.lww.com/AJN/A78.
Knowledge-based understanding of aerial surveillance video
NASA Astrophysics Data System (ADS)
Cheng, Hui; Butler, Darren
2006-05-01
Aerial surveillance has long been used by the military to locate, monitor and track the enemy. Recently, its scope has expanded to include law enforcement activities, disaster management and commercial applications. With the ever-growing amount of aerial surveillance video acquired daily, there is an urgent need for extracting actionable intelligence in a timely manner. Furthermore, to support high-level video understanding, this analysis needs to go beyond current approaches and consider the relationships, motivations and intentions of the objects in the scene. In this paper we propose a system for interpreting aerial surveillance videos that automatically generates a succinct but meaningful description of the observed regions, objects and events. For a given video, the semantics of important regions and objects, and the relationships between them, are summarised into a semantic concept graph. From this, a textual description is derived that provides new search and indexing options for aerial video and enables the fusion of aerial video with other information modalities, such as human intelligence, reports and signal intelligence. Using a Mixture-of-Experts video segmentation algorithm, an aerial video is first decomposed into regions and objects with predefined semantic meanings. The objects are then tracked and coerced into a semantic concept graph, and the graph is summarized spatially, temporally and semantically using ontology-guided sub-graph matching and re-writing. The system exploits domain-specific knowledge and uses a reasoning engine to verify and correct the classes, identities and semantic relationships between the objects. This approach is advantageous because misclassifications lead to knowledge contradictions and hence they can be easily detected and intelligently corrected. In addition, the graph representation highlights events and anomalies that a low-level analysis would overlook.
Performance Evaluation of the NASA/KSC Transmission System
NASA Technical Reports Server (NTRS)
Christensen, Kenneth J.
2000-01-01
NASA-KSC currently uses three bridged 100-Mbps FDDI segments as its backbone for data traffic. The FDDI Transmission System (FTXS) connects the KSC industrial area, KSC launch complex 39 area, and the Cape Canaveral Air Force Station. The report presents a performance modeling study of the FTXS and the proposed ATM Transmission System (ATXS). The focus of the study is on performance of MPEG video transmission on these networks. Commercial modeling tools - the CACI Predictor and Comnet tools - were used. In addition, custom software tools were developed to characterize conversation pairs in Sniffer trace (capture) files to use as input to these tools. A baseline study of both non-launch and launch day data traffic on the FTXS is presented. MPEG-1 and MPEG-2 video traffic was characterized and the shaping of it evaluated. It is shown that the characteristics of a video stream has a direct effect on its performance in a network. It is also shown that shaping of video streams is necessary to prevent overflow losses and resulting poor video quality. The developed models can be used to predict when the existing FTXS will 'run out of room' and for optimizing the parameters of ATM links used for transmission of MPEG video. Future work with these models can provide useful input and validation to set-top box projects within the Advanced Networks Development group in NASA-KSC Development Engineering.
An adaptive enhancement algorithm for infrared video based on modified k-means clustering
NASA Astrophysics Data System (ADS)
Zhang, Linze; Wang, Jingqi; Wu, Wen
2016-09-01
In this paper, we propose a video enhancement algorithm to improve the output of infrared cameras. Video obtained by an infrared camera can be very dark when there is no clear target. In this case, the infrared video is divided into frame images by frame extraction so that image enhancement can be carried out. The first frame image is divided into k sub-images using K-means clustering according to the gray intervals they occupy, and histogram equalization is then applied to each sub-image according to the amount of information it contains; we also use a method to address the problem of final cluster centers lying close to each other in some cases. For the remaining frame images, the initial cluster centers are determined by the final cluster centers of the previous frame, and histogram equalization of each sub-image is carried out after image segmentation based on K-means clustering. Histogram equalization stretches the gray values of each sub-image over its gray-level range, where the gray-level range of each sub-image is determined by its ratio of pixels in the frame image. Experimental results show that this algorithm can improve the contrast of infrared video in dim scenes where the night target is not obvious, and adaptively reduce, within a certain range, the negative effect of overexposed pixels.
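The per-frame scheme described (K-means on gray values, then histogram equalization within each cluster's gray interval) can be sketched roughly as follows. The even initialization of centers across the occupied gray interval and the rank-based equalization are illustrative choices, not the authors' exact method.

```python
import numpy as np

def kmeans_1d(values, k, iters=20):
    """Plain 1-D k-means on gray values. Centers are initialized evenly
    across the occupied gray interval, one simple way to discourage final
    centers from collapsing onto each other."""
    centers = np.linspace(values.min(), values.max(), k).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

def equalize_by_cluster(img, k=3):
    """Split pixels into k gray-level clusters, then histogram-equalize
    each cluster within the gray interval it occupies."""
    flat = img.ravel().astype(float)
    labels, _ = kmeans_1d(flat, k)
    out = flat.copy()
    for j in range(k):
        sel = labels == j
        if not np.any(sel):
            continue
        v = flat[sel]
        lo, hi = v.min(), v.max()
        if hi == lo:
            continue                       # degenerate cluster, leave untouched
        # CDF-style equalization via ranks, mapped back onto [lo, hi]
        ranks = np.argsort(np.argsort(v))
        out[sel] = lo + (hi - lo) * ranks / (len(v) - 1)
    return out.reshape(img.shape)
```

For a video, the final centers of one frame would seed `centers` for the next frame instead of the linspace initialization.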
Segmented Polynomial Models in Quasi-Experimental Research.
ERIC Educational Resources Information Center
Wasik, John L.
1981-01-01
The use of segmented polynomial models is explained. Examples of design matrices of dummy variables are given for the least squares analyses of time series and discontinuity quasi-experimental research designs. Linear combinations of dummy variable vectors appear to provide tests of effects in the two quasi-experimental designs. (Author/BW)
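The dummy-variable design matrices described can be illustrated for the discontinuity (interrupted time-series) case with a piecewise-linear model; the intervention point and effect sizes below are invented for illustration.

```python
import numpy as np

def its_design_matrix(t, t0):
    """Design matrix for a segmented linear model: columns are an intercept,
    time, a level-change dummy (1 at and after the intervention at t0), and
    a slope-change term (time elapsed since t0)."""
    t = np.asarray(t, dtype=float)
    after = (t >= t0).astype(float)
    return np.column_stack([np.ones_like(t), t, after, after * (t - t0)])

# Simulate a series with a known level jump (+3) and slope change (+1.5) at t0 = 10.
t = np.arange(20)
y = 2.0 + 0.5 * t + np.where(t >= 10, 3.0 + 1.5 * (t - 10), 0.0)

# Least squares recovers the baseline intercept/slope and both intervention effects;
# tests on the last two coefficients are the quasi-experimental effect tests.
beta, *_ = np.linalg.lstsq(its_design_matrix(t, 10), y, rcond=None)
```

Higher-order segments are handled the same way, by adding polynomial columns and their post-intervention counterparts.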
NASA Astrophysics Data System (ADS)
Maurer, Calvin R., Jr.; Sauer, Frank; Hu, Bo; Bascle, Benedicte; Geiger, Bernhard; Wenzel, Fabian; Recchi, Filippo; Rohlfing, Torsten; Brown, Christopher R.; Bakos, Robert J.; Maciunas, Robert J.; Bani-Hashemi, Ali R.
2001-05-01
We are developing a video see-through head-mounted display (HMD) augmented reality (AR) system for image-guided neurosurgical planning and navigation. The surgeon wears a HMD that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture a stereo view of the real-world scene. We are concentrating specifically at this point on cranial neurosurgery, so the images will be of the patient's head. A third video camera, operating in the near infrared, is also attached to the HMD and is used for head tracking. The pose (i.e., position and orientation) of the HMD is used to determine where to overlay anatomic structures segmented from preoperative tomographic images (e.g., CT, MR) on the intraoperative video images. Two SGI 540 Visual Workstation computers process the three video streams and render the augmented stereo views for display on the HMD. The AR system operates in real time at 30 frames/sec with a temporal latency of about three frames (100 ms) and zero relative lag between the virtual objects and the real-world scene. For an initial evaluation of the system, we created AR images using a head phantom with actual internal anatomic structures (segmented from CT and MR scans of a patient) realistically positioned inside the phantom. When using shaded renderings, many users had difficulty appreciating overlaid brain structures as being inside the head. When using wire frames, and texture-mapped dot patterns, most users correctly visualized brain anatomy as being internal and could generally appreciate spatial relationships among various objects. The 3D perception of these structures is based on both stereoscopic depth cues and kinetic depth cues, with the user looking at the head phantom from varying positions. The perception of the augmented visualization is natural and convincing. The brain structures appear rigidly anchored in the head, manifesting little or no apparent swimming or jitter. 
The initial evaluation of the system is encouraging, and we believe that AR visualization might become an important tool for image-guided neurosurgical planning and navigation.
Automatic movie skimming with general tempo analysis
NASA Astrophysics Data System (ADS)
Lee, Shih-Hung; Yeh, Chia-Hung; Kuo, C. C. J.
2003-11-01
Story units are extracted by general tempo analysis, including the tempos of audio and visual information, in this research. Although many schemes have been proposed to successfully segment video data into shots using basic low-level features, how to group shots into meaningful units called story units is still a challenging problem. By focusing on a certain type of video such as sports or news, we can explore models with specific application domain knowledge. For movie content, many heuristic rules based on audiovisual clues have been proposed with limited success. We propose a method to extract story units using general tempo analysis. Experimental results are given to demonstrate the feasibility and efficiency of the proposed technique.
NASA Astrophysics Data System (ADS)
Maragos, Petros
The topics discussed at the conference include hierarchical image coding, motion analysis, feature extraction and image restoration, video coding, and morphological and related nonlinear filtering. Attention is also given to vector quantization, morphological image processing, fractals and wavelets, architectures for image and video processing, image segmentation, biomedical image processing, and model-based analysis. Papers are presented on affine models for motion and shape recovery, filters for directly detecting surface orientation in an image, tracking of unresolved targets in infrared imagery using a projection-based method, adaptive-neighborhood image processing, and regularized multichannel restoration of color images using cross-validation. (For individual items see A93-20945 to A93-20951)
Feedback from video for virtual reality Navigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsap, L V
2000-10-27
Important preconditions for wide acceptance of virtual reality (VR) systems include their comfort, ease and naturalness to use. Most existing trackers suffer from discomfort-related issues. For example, body-based trackers (hand controllers, joysticks, helmet attachments, etc.) restrict spontaneity and naturalness of motion, while ground-based devices (e.g., hand controllers) limit the workspace by literally binding an operator to the ground. There are similar problems with controls. This paper describes using real-time video with registered depth information (from a commercially available camera) for virtual reality navigation. A camera-based setup can replace cumbersome trackers. The method includes selective depth processing for increased speed, and a robust skin-color segmentation for accounting for illumination variations.
Exploring the dark energy biosphere, 15 seconds at a time
NASA Astrophysics Data System (ADS)
Petrone, C.; Tossey, L.; Biddle, J.
2016-12-01
Science communication often suffers from numerous pitfalls including jargon, complexity, a general lack of (science) education of the audience, and short attention spans. With the Center for Dark Energy Biosphere Investigations (C-DEBI), Delaware Sea Grant is expanding its collection of 15 Second Science videos, which deliver complex science topics with visually stimulating footage and succinct audio. Featuring a diverse cast of scientists and educators in front of the camera, we are expanding our reach into the public and classrooms. We're also experimenting with smartphone-based virtual reality, for a more immersive experience into the deep! We will show you the process for planning, producing, and posting our #15secondscience videos and VR segments, and how we are evaluating effectiveness.
Lehmann, Ronny; Seitz, Anke; Bosse, Hans Martin; Lutz, Thomas; Huwendiek, Sören
2016-11-01
Physical examination skills are crucial for a medical doctor. The physical examination of children differs significantly from that of adults. Students often have only limited contact with pediatric patients to practice these skills. In order to improve the acquisition of pediatric physical examination skills during bedside teaching, we have developed a combined video-based training concept, subsequently evaluating its use and perception. Fifteen videos were compiled, demonstrating defined physical examination sequences in children of different ages. Students were encouraged to use these videos as preparation for bedside teaching during their pediatric clerkship. After bedside teaching, acceptance of this approach was evaluated using a 10-item survey, asking for the frequency of video use and the benefits to learning, self-confidence, and preparation of bedside teaching as well as the concluding OSCE. N=175 out of 299 students returned survey forms (58.5%). Students most frequently used videos, either illustrating complete examination sequences or corresponding focus examinations frequently assessed in the OSCE. Students perceived the videos as a helpful method of conveying the practical process and preparation for bedside teaching as well as the OSCE, and altogether considered them a worthwhile learning experience. Self-confidence at bedside teaching was enhanced by preparation with the videos. The demonstration of a defined standardized procedural sequence, explanatory comments, and demonstration of infrequent procedures and findings were perceived as particularly supportive. Long video segments, poor alignment with other curricular learning activities, and technical problems were perceived as less helpful. Students prefer an optional individual use of the videos, with easy technical access, thoughtful combination with the bedside teaching, and consecutive standardized practice of demonstrated procedures. 
Preparation with instructional videos combined with bedside teaching was perceived to improve the acquisition of pediatric physical examination skills. Copyright © 2016 Elsevier GmbH. All rights reserved.
Preparation à la retraite - Preparing for retirement
None
2018-05-14
Retirement implies an important change from a working environment to a new lifestyle. Every individual copes with this transition in his own way. In this video, recorded a few years ago, Dr. Sartorius from WHO addresses some of his colleagues close to retirement and explains what situations they can expect to encounter. We make this video available to CERN personnel to stimulate their own thinking on the subject.
Preparation à la retraite - Preparing for retirement
None
2018-05-07
Retirement implies an important change from a working environment to a new lifestyle. Every individual copes with this transition in his own way. In this video, recorded a few years ago, Dr. Sartorius from WHO addresses some of his colleagues close to retirement and explains what situations they can expect to encounter. We make this video available to CERN personnel to stimulate their own thinking on the subject.
Feedforward Self-Modeling Enhances Skill Acquisition in Children Learning Trampoline Skills
Ste-Marie, Diane M.; Vertes, Kelly; Rymal, Amanda M.; Martini, Rose
2011-01-01
The purpose of this research was to examine whether children would benefit from a feedforward self-modeling (FSM) video and to explore possible explanatory mechanisms for the potential benefits, using a self-regulation framework. To this end, children were involved in learning two five-skill trampoline routines. For one of the routines, a FSM video was provided during acquisition, whereas only verbal instructions were provided for the alternate routine. The FSM involved editing video footage such that it showed the learner performing the trampoline routine at a higher skill level than their current capability. Analyses of the data showed that while physical performance benefits were observed for the routine that was learned with the FSM video, no differences were obtained in relation to the self-regulatory measures. Thus, the FSM video enhanced motor skill acquisition, but this could not be explained by changes to the varied self-regulatory processes examined. PMID:21779270
Introducing a Public Stereoscopic 3D High Dynamic Range (SHDR) Video Database
NASA Astrophysics Data System (ADS)
Banitalebi-Dehkordi, Amin
2017-03-01
High dynamic range (HDR) displays and cameras are paving their ways through the consumer market at a rapid growth rate. Thanks to TV and camera manufacturers, HDR systems are now becoming available commercially to end users. This is taking place only a few years after the blooming of 3D video technologies. MPEG/ITU are also actively working towards the standardization of these technologies. However, preliminary research efforts in these video technologies are hammered by the lack of sufficient experimental data. In this paper, we introduce a Stereoscopic 3D HDR database of videos that is made publicly available to the research community. We explain the procedure taken to capture, calibrate, and post-process the videos. In addition, we provide insights on potential use-cases, challenges, and research opportunities, implied by the combination of higher dynamic range of the HDR aspect, and depth impression of the 3D aspect.
Engelhardt, Christopher R; Bartholow, Bruce D; Saults, J Scott
2011-01-01
Although numerous experiments have shown that exposure to violent video games (VVG) causes increases in aggression, relatively few studies have investigated the extent to which this effect differs as a function of theoretically relevant individual difference factors. This study investigated whether video game content differentially influences aggression as a function of individual differences in trait anger. Participants were randomly assigned to play a violent or nonviolent video game before completing a task in which they could behave aggressively. Results showed that participants high in trait anger were the most aggressive, but only if they first played a VVG. This relationship held while statistically controlling for dimensions other than violent content on which game conditions differed (e.g. frustration, arousal). Implications of these findings for models explaining the effects of video games on behavior are discussed. © 2011 Wiley Periodicals, Inc.
Xiao, Y; MacKenzie, C; Orasanu, J; Spencer, R; Rahman, A; Gunawardane, V
1999-01-01
To determine what information sources are used during a remote diagnosis task. Experienced trauma care providers viewed segments of videotaped initial trauma patient resuscitation and airway management. Experiment 1 collected responses from anesthesiologists to probing questions during and after the presentation of recorded video materials. Experiment 2 collected the responses from three types of care providers (anesthesiologists, nurses, and surgeons). Written and verbal responses were scored according to detection of critical events in video materials and categorized according to their content. Experiment 3 collected visual scanning data using an eyetracker during the viewing of recorded video materials from the three types of care providers. Eye-gaze data were analyzed in terms of focus on various parts of the videotaped materials. Care providers were found to be unable to detect several critical events. The three groups of subjects studied (anesthesiologists, nurses, and surgeons) focused on different aspects of videotaped materials. When the remote events and activities are multidisciplinary and rapidly changing, experts linked with audio-video-data connections may encounter difficulties in comprehending remote activities, and their information usage may be biased. Special training is needed for the remote decision-maker to appreciate tasks outside his or her speciality and beyond the boundaries of traditional divisions of labor.
Wells, Sue; Kerr, Andrew; Broadbent, Elizabeth; MacKenzie, Craig; Cole, Karl; McLachlan, Andy
2011-03-01
Explaining what cardiovascular disease (CVD) risk means and engaging in shared decision-making regarding risk factor modification is challenging. An electronic CVD risk visualisation tool containing multiple risk communication strategies (Your Heart Forecast) was designed in 2009. To assess whether this tool facilitated explaining CVD risk to primary care patients. Health professionals who accessed a Primary Health Organisation website or who attended educational peer groups over a three-month period were invited to complete questionnaires before and after viewing a four-minute video about the tool. Respondents were asked to make an informed guess of the CVD risk of a 35-year-old patient (actual CVD risk 5%) and rate the following sentence as being true or false: 'If there were 100 people like Mr Andrews, five would go on to have a cardiac event in the next five years.' They were also asked to rank their understanding of CVD risk and confidence in explaining the concept to patients. Fifty health professionals (37 GPs, 12 practice nurses, one other) completed before and after questionnaires. Respondents' CVD risk estimates pre-video ranged from <5% to 25%, and nine rated the sentence as being false. After the video, all respondents answered these questions correctly. Personal rankings from zero to 10 of understanding of CVD risk and of confidence in explaining it narrowed in range and shifted towards greater efficacy. Whether this tool facilitates discussions of CVD risk with patients and improves patient understanding and lifestyle behaviour needs to be evaluated in a randomised trial.
Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun
2018-05-01
Thermographic inspection has been widely applied to non-destructive testing and evaluation with the capabilities of rapid, contactless, and large surface area detection. Image segmentation is considered essential for identifying and sizing defects. To attain a high-level performance, specific physics-based models that describe defect generation and enable the precise extraction of the target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns via an unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in the laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold and render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography is implemented as a platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index, the F-score, has been adopted to objectively evaluate the performance of different segmentation algorithms.
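The genetic thresholding idea described in this abstract can be sketched in a few lines: evolve a population of candidate segmentation thresholds and score each by how well it separates hot (defect) pixels from the cool background. The Otsu-style fitness function, population sizes, and synthetic "thermal frame" below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def between_class_variance(img, t):
    """Otsu-style fitness: between-class variance of pixels split at threshold t."""
    fg, bg = img[img >= t], img[img < t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w_fg, w_bg = fg.size / img.size, bg.size / img.size
    return w_fg * w_bg * (fg.mean() - bg.mean()) ** 2

def genetic_threshold(img, pop_size=20, generations=30, seed=0):
    """Evolve a population of candidate thresholds toward maximum fitness."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(img.min(), img.max(), pop_size)
    for _ in range(generations):
        fitness = np.array([between_class_variance(img, t) for t in pop])
        # Selection: keep the better half, then mutate copies of the survivors.
        survivors = pop[np.argsort(fitness)[-pop_size // 2:]]
        children = survivors + rng.normal(0, 2.0, survivors.size)
        pop = np.concatenate([survivors, children])
    fitness = np.array([between_class_variance(img, t) for t in pop])
    return float(pop[np.argmax(fitness)])

# Synthetic "thermal frame": cool background with a hot crack-like stripe.
frame = np.full((64, 64), 30.0)
frame[30:34, :] = 200.0
frame += np.random.default_rng(1).normal(0, 1.0, frame.shape)
t = genetic_threshold(frame)
mask = frame >= t          # segmented crack pixels
```

With the two temperature populations well separated, any threshold in the gap maximizes the fitness, so the evolved threshold recovers the crack stripe exactly.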
Inferring segmented dense motion layers using 5D tensor voting.
Min, Changki; Medioni, Gérard
2008-09-01
We present a novel local spatiotemporal approach to produce motion segmentation and dense temporal trajectories from an image sequence. A common representation of image sequences is a 3D spatiotemporal volume, (x,y,t), and its corresponding mathematical formalism is the fiber bundle. However, directly enforcing the spatiotemporal smoothness constraint is difficult in the fiber bundle representation. Thus, we convert the representation into a new 5D space (x,y,t,vx,vy) with an additional velocity domain, where each moving object produces a separate 3D smooth layer. The smoothness constraint is now enforced by extracting 3D layers using the tensor voting framework in a single step that solves both correspondence and segmentation simultaneously. Motion segmentation is achieved by identifying those layers, and the dense temporal trajectories are obtained by converting the layers back into the fiber bundle representation. We proceed to address three applications (tracking, mosaic, and 3D reconstruction) that are hard to solve from the video stream directly because of the segmentation and dense matching steps, but become straightforward with our framework. The approach does not make restrictive assumptions about the observed scene or camera motion and is therefore generally applicable. We present results on a number of data sets.
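The 5D lifting step described above is straightforward to sketch. The toy code below builds the (x, y, t, vx, vy) point cloud from precomputed flow fields and substitutes a simple 2-means clustering on velocity for the tensor voting layer extraction, which is considerably more involved; the function names and synthetic flow are assumptions for illustration:

```python
import numpy as np

def lift_to_5d(flows):
    """Lift per-frame optical flow into the paper's 5D space (x, y, t, vx, vy).

    flows: list of HxWx2 arrays (vx, vy), one per frame. Each moving object
    should form a separate smooth 3D layer inside the returned point cloud.
    """
    pts = []
    for t, flow in enumerate(flows):
        h, w = flow.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        pts.append(np.stack([xs.ravel(), ys.ravel(),
                             np.full(h * w, float(t)),
                             flow[..., 0].ravel(),
                             flow[..., 1].ravel()], axis=1))
    return np.concatenate(pts)

def segment_by_velocity(points, n_iters=5):
    """Toy layer extraction: 2-means on (vx, vy) standing in for tensor voting."""
    v = points[:, 3:5]
    # Deterministic init: the two most extreme horizontal velocities.
    centers = np.array([v[np.argmin(v[:, 0])], v[np.argmax(v[:, 0])]])
    for _ in range(n_iters):
        labels = np.argmin(((v[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([v[labels == j].mean(0) for j in range(2)])
    return labels

# Two synthetic layers: static background (v = 0) and a 10x10 patch moving right.
flow = np.zeros((32, 32, 2))
flow[10:20, 10:20, 0] = 3.0
pts = lift_to_5d([flow, flow])
labels = segment_by_velocity(pts)
```

In the full framework, the smooth 3D layers are extracted by tensor voting in the 5D space, which handles noisy and incomplete flow far more robustly than this clustering stand-in.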
An objective comparison of cell-tracking algorithms.
Ulman, Vladimír; Maška, Martin; Magnusson, Klas E G; Ronneberger, Olaf; Haubold, Carsten; Harder, Nathalie; Matula, Pavel; Matula, Petr; Svoboda, David; Radojevic, Miroslav; Smal, Ihor; Rohr, Karl; Jaldén, Joakim; Blau, Helen M; Dzyubachyk, Oleh; Lelieveldt, Boudewijn; Xiao, Pengdong; Li, Yuexiang; Cho, Siu-Yeung; Dufour, Alexandre C; Olivo-Marin, Jean-Christophe; Reyes-Aldasoro, Constantino C; Solis-Lemus, Jose A; Bensch, Robert; Brox, Thomas; Stegmaier, Johannes; Mikut, Ralf; Wolf, Steffen; Hamprecht, Fred A; Esteves, Tiago; Quelhas, Pedro; Demirel, Ömer; Malmström, Lars; Jug, Florian; Tomancak, Pavel; Meijering, Erik; Muñoz-Barrutia, Arrate; Kozubek, Michal; Ortiz-de-Solorzano, Carlos
2017-12-01
We present a combined report on the results of three editions of the Cell Tracking Challenge, an ongoing initiative aimed at promoting the development and objective evaluation of cell segmentation and tracking algorithms. With 21 participating algorithms and a data repository consisting of 13 data sets from various microscopy modalities, the challenge displays today's state-of-the-art methodology in the field. We analyzed the challenge results using performance measures for segmentation and tracking that rank all participating methods. We also analyzed the performance of all of the algorithms in terms of biological measures and practical usability. Although some methods scored high in all technical aspects, none obtained fully correct solutions. We found that methods that either take prior information into account using learning strategies or analyze cells in a global spatiotemporal video context performed better than other methods under the segmentation and tracking scenarios included in the challenge.
3D noise-resistant segmentation and tracking of unknown and occluded objects using integral imaging
NASA Astrophysics Data System (ADS)
Aloni, Doron; Jung, Jae-Hyun; Yitzhaky, Yitzhak
2017-10-01
Three-dimensional (3D) object segmentation and tracking can be useful in various computer vision applications, such as object surveillance for security purposes and robot navigation. We present a method for 3D multiple-object tracking using computational integral imaging, based on accurate 3D object segmentation. The method does not employ object detection by motion analysis in a video as conventionally performed (such as background subtraction or block matching). This means that the movement properties do not significantly affect the detection quality. The object detection is performed by analyzing static 3D image data obtained through computational integral imaging. In contrast to previous works that used integral imaging data in such a scenario, the proposed method performs 3D tracking of objects without prior information about the objects in the scene, and it is found to be efficient under severe noise conditions.
The tissue-selecting technique: segmental stapled hemorrhoidopexy.
Lin, Hong-Cheng; Lian, Lei; Xie, Shang-Kui; Peng, Hui; Tai, Jian-Dong; Ren, Dong-Lin
2013-11-01
We describe a technique for the management of prolapsing hemorrhoids, with the aim to minimize the risk of anal stricture and rectovaginal fistula and to reduce the impact of the stapling technique on rectal compliance. This modified procedure was successfully applied in China, and preliminary data showed promising outcomes (see Video, Supplemental Digital Content 1, http://links.lww.com/DCR/A117).
Lewis & Clark: The Journey of the Corps of Discovery. Teacher's Guide and Video Segment Index.
ERIC Educational Resources Information Center
Public Broadcasting Service, Washington, DC.
This teacher's guide accompanies the Public Broadcasting System (PBS) four-part videotape documentary about the journey of Meriwether Lewis and William Clark as they made their way from the Missouri River to the Pacific Ocean. The guide introduces the documentary's major themes through 4 lessons which focus on the geography and events that shaped…
Social Implications of Music Videos for Youth: An Analysis of the Content and Effects of MTV.
ERIC Educational Resources Information Center
Greeson, Larry E.; Williams, Rose Ann
1986-01-01
Seventh- and tenth-grade students were shown segments of Music Television (MTV), then asked to respond to a brief attitude survey on parental influence, premarital sex, violence, drug use, and the influence of MTV. Results suggest the potentially powerful influence of popular music and MTV, especially on attitudes towards violence and premarital…
Born To Read: How To Nurture a Baby's Love of Learning. [Videotape and Planner's Manual].
ERIC Educational Resources Information Center
Association for Library Service to Children, Chicago, IL.
The "Born To Read" project helps parents raise children with healthy bodies and minds. Public librarians and health care professionals form partnerships and reach out to at-risk expectant and new parents. The video provides techniques and tips to plan successful programs for babies, including a segment for libraries to use with the…
Lentle, Roger G.; Hulls, Corrin M.
2018-01-01
The uses and limitations of the various techniques of video spatiotemporal mapping based on change in diameter (D-type ST maps), change in longitudinal strain rate (L-type ST maps), change in area strain rate (A-type ST maps), and change in luminous intensity of reflected light (I-maps) are described, along with their use in quantifying motility of the wall of hollow smooth-muscle structures such as the gut. ST methods for determining the size, speed of propagation, and frequency of contractions in the wall of gut compartments of differing geometric configurations are discussed. We also discuss the shortcomings and problems that are inherent in the various methods and the use of techniques to avoid or minimize them. This discussion includes the inability of D-type ST maps to indicate the site of a contraction that does not reduce the diameter of a gut segment, the manipulation of the axis [the line of interest (LOI)] of L-maps to determine the true axis of propagation of a contraction, problems with anterior curvature of gut segments, and the use of adjunct image analysis techniques that enhance particular features of the maps. PMID:29686624
A Motion Detection Algorithm Using Local Phase Information
Lazar, Aurel A.; Ukani, Nikul H.; Zhou, Yiyin
2016-01-01
Previous research demonstrated that global phase alone can be used to faithfully represent visual scenes. Here we provide a reconstruction algorithm using only local phase information. We also demonstrate that local phase alone can be effectively used to detect local motion. The local phase-based motion detector is akin to models employed to detect motion in biological vision, for example, the Reichardt detector. The local phase-based motion detection algorithm introduced here consists of two building blocks. The first building block measures the temporal change of the local phase. The temporal derivative of the local phase is shown to exhibit the structure of a second-order Volterra kernel with two normalized inputs. We provide an efficient, FFT-based algorithm for computing the temporal change of the local phase. The second processing building block implements the detector; it compares the maximum of the Radon transform of the local phase derivative with a chosen threshold. We demonstrate examples of applying the local phase-based motion detection algorithm on several video sequences. We also show how the locally detected motion can be used for segmenting moving objects in video scenes and compare our local phase-based algorithm to segmentation achieved with a widely used optic flow algorithm. PMID:26880882
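A minimal sketch of the first building block: estimate a blockwise local phase with the FFT and flag blocks whose phase changes between consecutive frames. The paper's detector thresholds the Radon transform of the phase derivative; this toy version thresholds the phase change directly, and the block size, chosen FFT coefficient, and threshold are arbitrary assumptions:

```python
import numpy as np

def local_phase(img, block=8):
    """Local phase of each block: phase of the first horizontal FFT harmonic."""
    h, w = img.shape
    phases = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            patch = img[i*block:(i+1)*block, j*block:(j+1)*block]
            phases[i, j] = np.angle(np.fft.fft2(patch)[0, 1])
    return phases

def detect_motion(frame_a, frame_b, thresh=0.2):
    """Flag blocks whose (wrapped) local phase changed between frames."""
    dphi = local_phase(frame_b) - local_phase(frame_a)
    dphi = np.angle(np.exp(1j * dphi))   # wrap the difference to (-pi, pi]
    return np.abs(dphi) > thresh

# A bright vertical bar shifts right by 2 pixels between frames.
a = np.zeros((32, 32)); a[:, 8:12] = 1.0
b = np.zeros((32, 32)); b[:, 10:14] = 1.0
moving = detect_motion(a, b)
```

A 2-pixel shift within an 8-pixel block corresponds to a phase change of 2π·2/8 = π/2 in the first harmonic, so only the column of blocks containing the bar is flagged.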
Multipurpose Use of Explain Everything iPad App for Teaching Chemistry Courses
ERIC Educational Resources Information Center
Ranga, Jayashree S.
2018-01-01
Explain Everything is an interactive, user-friendly, and easily accessible app for mobile devices. The interactive app-based teaching methods discussed here can be adopted in any STEM or non-STEM course. This app allows instructors to take advantage of both the chalkboard and PowerPoint slides on a single platform, create videos for lecture…
Explaining Global Women's Empowerment Using Geographic Inquiry
ERIC Educational Resources Information Center
Grubbs, Melanie R.
2018-01-01
It is difficult for students who are just being introduced to major geographical concepts to understand how relatively free countries like India or Mali can have such high levels of human rights abuses as child brides, dowry deaths, and domestic violence. Textbooks explain it and video clips show examples, but it still seems surreal to teenagers…
Melorheostosis may originate as a type 2 segmental manifestation of osteopoikilosis.
Happle, Rudolf
2004-03-15
Melorheostosis is a non-hereditary disorder involving the bones in a segmental pattern, whereas osteopoikilosis is a rather mild disseminated bone disorder inherited as an autosomal dominant trait. Interestingly, melorheostosis and osteopoikilosis may sometimes occur together. In analogy to various autosomal dominant skin disorders for which a type 2 segmental manifestation has been postulated, melorheostosis may be best explained in such cases as a type 2 segmental osteopoikilosis, resulting from early loss of the corresponding wild type allele at the gene locus of this autosomal dominant bone disorder. Copyright 2003 Wiley-Liss, Inc.
Morphogenesis of the second pharyngeal arch cartilage (Reichert's cartilage) in human embryos
Rodríguez-Vázquez, J F; Mérida-Velasco, J R; Verdugo-López, S; Sánchez-Montesinos, I; Mérida-Velasco, J A
2006-01-01
This study was performed on 50 human embryos and fetuses between 7 and 17 weeks of development. Reichert's cartilage is formed in the second pharyngeal arch in two segments. The longer cranial or styloid segment is continuous with the otic capsule; its inferior end is angulated and is situated very close to the oropharynx. The smaller caudal segment is in contact with the body and greater horn of the hyoid cartilaginous structure. No cartilage forms between these segments. The persistent angulation of the inferior end of the cranial or styloid segment of Reichert's cartilage and its important neurovascular relationships may help explain the symptomatology of Eagle's syndrome. PMID:16441562
Development of MPEG standards for 3D and free viewpoint video
NASA Astrophysics Data System (ADS)
Smolic, Aljoscha; Kimata, Hideaki; Vetro, Anthony
2005-11-01
An overview of 3D and free viewpoint video is given in this paper, with special focus on related standardization activities in MPEG. Free viewpoint video allows the user to freely navigate within real-world visual scenes, as known from virtual worlds in computer graphics. Suitable 3D scene representation formats are classified and the processing chain is explained. Examples are shown for image-based and model-based free viewpoint video systems, highlighting standards-conformant realization using MPEG-4. Then the principles of 3D video are introduced, providing the user with a 3D depth impression of the observed scene. Example systems are described, again focusing on their realization based on MPEG-4. Finally, multi-view video coding is described as a key component for 3D and free viewpoint video systems. MPEG is currently working on a new standard for multi-view video coding. The conclusion is that the necessary technology, including standard media formats for 3D and free viewpoint video, is available or will be available in the near future, and that there is a clear demand from industry and users for such applications. 3DTV at home and free viewpoint video on DVD will be available soon, and will create huge new markets.
Bavelier, Daphne; Green, C. Shawn; Han, Doug Hyun; Renshaw, Perry F.; Merzenich, Michael M.; Gentile, Douglas A.
2015-01-01
The popular press is replete with stories about the effects of video and computer games on the brain. Sensationalist headlines claiming that video games ‘damage the brain’ or ‘boost brain power’ do not do justice to the complexities and limitations of the studies involved, and create a confusing overall picture about the effects of gaming on the brain. Here, six experts in the field shed light on our current understanding of the positive and negative ways in which playing video games can affect cognition and behaviour, and explain how this knowledge can be harnessed for educational and rehabilitation purposes. As research in this area is still in its early days, the contributors of this Viewpoint also discuss several issues and challenges that should be addressed to move the field forward. PMID:22095065
Teaching Caregivers to Administer Eye Drops, Transdermal Patches, and Suppositories.
Lindauer, Allison; Sexson, Kathryn; Harvath, Theresa A
2017-01-01
This article is the third in a series, Supporting Family Caregivers: No Longer Home Alone, published in collaboration with the AARP Public Policy Institute. Results of focus groups conducted as part of the AARP Public Policy Institute's No Longer Home Alone video project supported evidence that family caregivers aren't being given the information they need to manage the complex care regimens of their family members. This series of articles and accompanying videos aims to help nurses provide caregivers with the tools they need to manage their family member's medications. Each article explains the principles nurses should consider and reinforce with caregivers and is accompanied by a video for the caregiver to watch. The third video can be accessed at http://links.lww.com/AJN/A76.
Acoustic Investigations into the Later Acquisition of Syllabic "-es" Plurals
ERIC Educational Resources Information Center
Mealings, Kiri T.; Cox, Felicity; Demuth, Katherine
2013-01-01
Purpose: Children acquire /-ez/ syllabic plurals (e.g., buses) later than /-s, -z/ segmental plurals (e.g., cats, dogs). In this study, the authors explored whether increased syllable number or segmental factors best explains poorer performance with syllabic plurals. Method: An elicited imitation experiment was conducted with 14 two-year-olds…
The effects of video compression on acceptability of images for monitoring life sciences experiments
NASA Astrophysics Data System (ADS)
Haines, Richard F.; Chuang, Sherry L.
1992-07-01
Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Expert Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec.
Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters according to scientific discipline and experiment type is critical to the success of remote experiments.
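The trade-off the study measures, compression level versus fidelity, can be illustrated with a numpy-only sketch of the JPEG core: a blockwise DCT followed by uniform quantization, where coarser quantization sheds coefficients (smaller coded size) at the cost of reconstruction error. This is not the JPEG or ICT codec itself; the step sizes and gradient test block are assumptions for illustration:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal type-II DCT matrix (rows = frequencies, cols = samples)."""
    k = np.arange(n)
    C = np.sqrt(2 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C

def jpeg_like(block, step):
    """Quantize the block's DCT with step size `step`; coarser step = more compression."""
    C = dct_matrix(block.shape[0])
    q = np.round((C @ block @ C.T) / step)      # forward DCT + uniform quantization
    kept = int(np.count_nonzero(q))             # surviving coefficients ~ coded size
    rec = C.T @ (q * step) @ C                  # dequantize + inverse DCT
    return rec, kept

# One smooth 8x8 gradient block, encoded at three quantization step sizes.
block = np.outer(np.linspace(0, 255, 8), np.ones(8))
results = [jpeg_like(block, s) for s in (4, 16, 64)]
kepts = [kept for _, kept in results]
errs = [float(np.abs(rec - block).mean()) for rec, _ in results]
```

As the step size grows, fewer DCT coefficients survive quantization while the mean reconstruction error rises, mirroring the acceptability decline the study observed at aggressive compression settings.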
Context indexing of digital cardiac ultrasound records in PACS
NASA Astrophysics Data System (ADS)
Lobodzinski, S. Suave; Meszaros, Georg N.
1998-07-01
Recent wide adoption of the DICOM 3.0 standard by ultrasound equipment vendors created a need for practical clinical implementations of cardiac imaging study visualization, management, and archiving. DICOM 3.0 defines only a logical and physical format for exchanging image data (still images, video, patient and study demographics). All DICOM-compliant imaging studies must presently be archived on a 650 MB recordable compact disc. This is a severe limitation for ultrasound applications, where studies 3 to 10 minutes long are common practice. In addition, DICOM digital echocardiography objects require physiological signal indexing, content segmentation, and characterization. Since DICOM 3.0 is an interchange standard only, it does not define how to store composite video objects in a database. The goal of this research was therefore to address the issues of efficient storage, retrieval, and management of DICOM-compliant cardiac video studies in a distributed PACS environment. Our Web-based implementation has the advantage of accommodating both DICOM-defined entity-relation modules (equipment data, patient data, video format, etc.) in standard relational database tables and digital indexed video with its attributes in an object-relational database. The object-relational data model facilitates content indexing of full-motion cardiac imaging studies through bi-directional hyperlink generation that ties searchable video attributes and related objects to individual video frames in the temporal domain. Benefits realized from the use of bi-directionally hyperlinked data models in an object-relational database include: (1) real-time video indexing during image acquisition, (2) random access and frame-accurate instant playback of previously recorded full-motion imaging data, and (3) time savings from faster and more accurate access to data through multiple navigation mechanisms such as multidimensional queries on an index, queries on a hyperlink attribute, free search, and browsing.
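The bi-directional hyperlink idea, attributes pointing to frame ranges and frames resolving back to attributes, can be sketched with a single relational table. The schema, attribute names, and study values below are hypothetical illustrations, not DICOM's actual data model:

```python
import sqlite3

# In-memory stand-in for the object-relational store: one table ties searchable
# video attributes to temporal frame ranges, queryable in both directions.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE video_index (
    study_id TEXT, attribute TEXT, start_frame INTEGER, end_frame INTEGER)""")
db.executemany("INSERT INTO video_index VALUES (?, ?, ?, ?)", [
    ("study-001", "mitral-valve", 0, 120),
    ("study-001", "color-doppler", 60, 240),
    ("study-001", "left-ventricle", 200, 400),
])

# Attribute -> frames: where does the colour-Doppler segment start?
start = db.execute("SELECT start_frame FROM video_index "
                   "WHERE attribute = 'color-doppler'").fetchone()[0]

# Frame -> attributes: which annotated segments cover frame 100?
at_100 = [a for (a,) in db.execute(
    "SELECT attribute FROM video_index WHERE ? BETWEEN start_frame AND end_frame",
    (100,))]
```

Both query directions run against the same index rows, which is the essence of the frame-accurate navigation the paper describes.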
ERIC Educational Resources Information Center
Bahan, Sandranel
2000-01-01
Describes a lesson for students in grades 8-12 where they watch a video from the PBS "American Experience" series that used the diary of Martha Moore Ballard, a U.S. midwife, to create a docudrama. Explains that the video helps students understand the harshness of life in the United States during the eighteenth-century. (CMK)
Gonté, Frédéric; Dupuy, Christophe; Luong, Bruno; Frank, Christoph; Brast, Roland; Sedghi, Baback
2009-11-10
The primary mirror of the future European Extremely Large Telescope will be equipped with 984 hexagonal segments. The alignment of the segments in piston, tip, and tilt within a few nanometers requires an optical phasing sensor. A test bench has been designed to study four different optical phasing sensor technologies. The core element of the test bench is an active segmented mirror composed of 61 flat hexagonal segments with a size of 17 mm side to side. Each of them can be controlled in piston, tip, and tilt by three piezoactuators with a precision better than 1 nm. The context of this development, the requirements, the design, and the integration of this system are explained. The first results on the final precision obtained in closed-loop control are also presented.
Expedient range enhanced 3-D robot colour vision
NASA Astrophysics Data System (ADS)
Jarvis, R. A.
1983-01-01
Computer vision has been chosen, in many cases, as offering the richest form of sensory information which can be utilized for guiding robotic manipulation. The present investigation is concerned with the problem of three-dimensional (3D) visual interpretation of colored objects in support of robotic manipulation of those objects with a minimum of semantic guidance. The scene 'interpretations' are aimed at providing basic parameters to guide robotic manipulation rather than to provide humans with a detailed description of what the scene 'means'. Attention is given to overall system configuration, hue transforms, a connectivity analysis, plan/elevation segmentations, range scanners, elevation/range segmentation, higher level structure, eye in hand research, and aspects of array and video stream processing.
Parish, Sharon J; Weber, Catherine M; Steiner-Grossman, Penny; Milan, Felise B; Burton, William B; Marantz, Paul R
2006-01-01
Video review is a valuable educational tool for teaching communication skills. Many studies have demonstrated its efficacy with individual learners, but few studies have addressed its use in a group format. To assess the educational benefits of group versus individual video review of standardized patient encounters through the evaluations of 4th-year students at the Albert Einstein College of Medicine. Students (128) who participated in a 7-station, standardized patient, clinical competency exam were randomly assigned to an individual or small group video review of selected segments of these encounters in 2000-2001. Students filled out an anonymous 13-item questionnaire assessing the experience and provided open-ended responses. With both review formats, most students had a positive learning experience (80%), found it less stressful than they expected (67%), and would not have preferred to do the review the other way (84%). Students randomized to individual reviews had a significantly higher level of satisfaction with the amount of time for the session (91% vs. 78%, p < .05) and the amount of feedback they received (95% vs. 79%, p = .01) and were more likely to view the session as a positive learning experience (88% vs. 73%, p < .05). Students in the individual review format were more likely to choose self-assessed weak segments (63% vs. 49%, p = .01). Students' comments indicated that they appreciated the value of peer review in a group setting. Although both group reviews and individual reviews of videotaped standardized patient encounters were received well by the students, there were several statistical differences in favor of the individual format.
Informative frame detection from wireless capsule video endoscopic images
NASA Astrophysics Data System (ADS)
Bashar, Md. Khayrul; Mori, Kensaku; Suenaga, Yasuhito; Kitasaka, Takayuki; Mekada, Yoshito
2008-03-01
Wireless capsule endoscopy (WCE) is a new clinical technology permitting the visualization of the small bowel, the most difficult segment of the digestive tract. The major drawback of this technology is the large amount of time required for video diagnosis. In this study, we propose a method for informative frame detection by isolating useless frames that are substantially covered by turbid fluids or contaminated with other materials, e.g., faecal, semi-processed, or unabsorbed foods. Such materials and fluids present a wide range of colors, from brown to yellow, and/or bubble-like texture patterns. The detection scheme, therefore, consists of two stages: highly contaminated non-bubbled (HCN) frame detection and significantly bubbled (SB) frame detection. Local color moments in the Ohta color space are used to characterize HCN frames, which are isolated by a Support Vector Machine (SVM) classifier in Stage 1. The remaining frames go to Stage 2, where Laguerre-Gauss circular harmonic functions (LG-CHFs) extract the characteristics of the bubble structures in a multi-resolution framework. An automatic segmentation method is designed to extract the bubbled regions based on local absolute energies of the CHF responses, derived from the grayscale version of the original color image. Final detection of the informative frames is obtained by applying a threshold operation to the extracted regions. An experiment with 20,558 frames from three videos shows the excellent average detection accuracy (96.75%) of the proposed method, compared with Gabor-based (74.29%) and discrete wavelet-based (62.21%) features.
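The Stage-1 feature pipeline can be sketched as follows: convert RGB to the Ohta components, take low-order moments over a grid of local patches, and classify. A nearest-centroid classifier stands in for the paper's SVM, and the synthetic frame colors, grid size, and moment choice are invented for illustration:

```python
import numpy as np

def ohta_moments(rgb, grid=4):
    """Mean and std of the Ohta components over a grid of local patches."""
    r, g, b = (rgb[..., c].astype(float) for c in range(3))
    # Ohta color space: I1 = (R+G+B)/3, I2 = (R-B)/2, I3 = (2G-R-B)/4.
    ohta = np.stack([(r + g + b) / 3, (r - b) / 2, (2 * g - r - b) / 4], axis=-1)
    h, w = ohta.shape[:2]
    feats = []
    for i in range(grid):
        for j in range(grid):
            patch = ohta[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            feats += [patch.mean(axis=(0, 1)), patch.std(axis=(0, 1))]
    return np.concatenate(feats)

class NearestCentroid:
    """Stand-in for the paper's SVM: classify by distance to class means."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.centroids = np.array([X[y == c].mean(0) for c in self.classes])
        return self
    def predict(self, X):
        d = ((X[:, None] - self.centroids[None]) ** 2).sum(-1)
        return self.classes[np.argmin(d, axis=1)]

rng = np.random.default_rng(0)
def fake_frame(turbid):
    """Turbid frames skew yellow-brown; clean mucosa skews pink (invented colors)."""
    base = [180, 150, 60] if turbid else [200, 120, 130]
    return np.clip(rng.normal(base, 15, (64, 64, 3)), 0, 255)

X = np.array([ohta_moments(fake_frame(t)) for t in [1]*20 + [0]*20])
y = np.array([1]*20 + [0]*20)
clf = NearestCentroid().fit(X[:30], y[:30])
acc = float((clf.predict(X[30:]) == y[30:]).mean())
```

Because the color distributions of the two synthetic classes are well separated in the Ohta components, even this crude classifier separates them cleanly; the real task is harder and motivates the SVM.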
How do passion for video games and needs frustration explain time spent gaming?
Mills, Devin J; Milyavskaya, Marina; Mettler, Jessica; Heath, Nancy L; Derevensky, Jeffrey L
2018-04-01
Research applying self-determination theory and the dualistic model of passion (DMP) has shown video games may satisfy basic psychological needs (i.e., competence, autonomy, and relatedness) and be identified as a passion. The DMP distinguishes between healthy or harmonious passion and problematic or obsessive passion (OP), with the latter reflecting an overreliance on one's passion to obtain needs satisfaction. The experience of daily obstructions to needs satisfaction, or needs frustration (NF), may facilitate such an overreliance. This study explored how NF and both types of passion explain the amount of time that university students spend gaming. The overall association between NF and time spent gaming was not significant. However, for video game users with low levels of OP for gaming, there was a significant negative association between NF and time spent gaming. Additionally, evidence of a mutually reinforcing association between NF and OP for gaming indicates that a vicious cycle exists, whereby a strong OP for gaming predicts and is reinforced by greater NF. The theoretical implications are discussed. © 2018 The British Psychological Society.
NASA Astrophysics Data System (ADS)
Pimentel, Maria Da Graça C.; Cattelan, Renan G.; Melo, Erick L.; Freitas, Giliard B.; Teixeira, Cesar A.
In earlier work we proposed the Watch-and-Comment (WaC) paradigm as the seamless capture of multimodal comments made by one or more users while watching a video, resulting in the automatic generation of multimedia documents specifying annotated interactive videos. The aim is to allow services to be offered by applying document engineering techniques to the multimedia document generated automatically. The WaC paradigm was demonstrated with a WaCTool prototype application which supports multimodal annotation over video frames and segments, producing a corresponding interactive video. In this chapter, we extend the WaC paradigm to consider contexts in which several viewers may use their own mobile devices while watching and commenting on an interactive-TV program. We first review our previous work. Next, we discuss scenarios in which mobile users can collaborate via the WaC paradigm. We then present a new prototype application which allows users to employ their mobile devices to collaboratively annotate points of interest in video and interactive-TV programs. We also detail the current software infrastructure which supports our new prototype; the infrastructure extends the Ginga middleware for the Brazilian Digital TV with an implementation of the UPnP protocol - the aim is to provide the seamless integration of the users' mobile devices into the TV environment. As a result, the work reported in this chapter defines the WaC paradigm for the mobile-user as an approach to allow the collaborative annotation of the points of interest in video and interactive-TV programs.
MRI Segmentation of the Human Brain: Challenges, Methods, and Applications
Despotović, Ivana
2015-01-01
Image segmentation is one of the most important tasks in medical image analysis and is often the first and the most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain's anatomical structures, for analyzing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degree of complexity have been developed and reported in the literature. In this paper we review the most popular methods commonly used for brain MRI segmentation. We highlight differences between them and discuss their capabilities, advantages, and limitations. To address the complexity and challenges of the brain MRI segmentation problem, we first introduce the basic concepts of image segmentation. Then, we explain different MRI preprocessing steps including image registration, bias field correction, and removal of nonbrain tissue. Finally, after reviewing different brain MRI segmentation methods, we discuss the validation problem in brain MRI segmentation. PMID:25945121
Flexible methods for segmentation evaluation: results from CT-based luggage screening.
Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry
2014-01-01
Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms' behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. Our goal was to develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors; the methods must also measure feature recovery and allow us to prioritize segments. We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. Both methods selected the same best segmentation algorithm, and human evaluation confirmed the findings. The measurement of systematic errors and the prioritization helped in understanding the behavior of each segmentation algorithm. Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms.
Beaton, L.; Mazzaferri, J.; Lalonde, F.; Hidalgo-Aguirre, M.; Descovich, D.; Lesk, M. R.; Costantino, S.
2015-01-01
We have developed a novel optical approach to determine pulsatile ocular volume changes using automated segmentation of the choroid, which, together with Dynamic Contour Tonometry (DCT) measurements of intraocular pressure (IOP), allows estimation of the ocular rigidity (OR) coefficient. Spectral Domain Optical Coherence Tomography (OCT) videos of the fundus were acquired with Enhanced Depth Imaging (EDI) at 7 Hz for approximately 50 seconds. A novel segmentation algorithm based on graph search with an edge-probability weighting scheme was developed to measure choroidal thickness (CT) at each frame. Global ocular volume fluctuations were derived from frame-to-frame CT variations using an approximate eye model. Immediately after imaging, IOP and ocular pulse amplitude (OPA) were measured using DCT. OR was calculated from the peak pressure and volume changes. Our automated segmentation algorithm provides the first non-invasive method for determining ocular volume change due to pulsatile choroidal filling, and for estimating the OR coefficient. Future applications of this method offer an important avenue to understanding the biomechanical basis of ocular pathophysiology. PMID:26137373
The IXV Ground Segment design, implementation and operations
NASA Astrophysics Data System (ADS)
Martucci di Scarfizzi, Giovanni; Bellomo, Alessandro; Musso, Ivano; Bussi, Diego; Rabaioli, Massimo; Santoro, Gianfranco; Billig, Gerhard; Gallego Sanz, José María
2016-07-01
The Intermediate eXperimental Vehicle (IXV) is an ESA re-entry demonstrator that performed a successful re-entry demonstration mission on 11 February 2015. The project objectives were the design, development, manufacturing, and on-ground and in-flight verification of an autonomous European lifting and aerodynamically controlled re-entry system. For the IXV mission a dedicated Ground Segment was provided. The main subsystems of the IXV Ground Segment were: the IXV Mission Control Center (MCC), from where monitoring of the vehicle was performed, as well as support during pre-launch and recovery phases; the IXV Ground Stations, used to cover the IXV mission by receiving spacecraft telemetry and forwarding it toward the MCC; and the IXV Communication Network, deployed to support the operations of the IXV mission by interconnecting all remote sites with the MCC, supporting data, voice and video exchange. This paper describes the concept, architecture, development, implementation and operations of the ESA Intermediate eXperimental Vehicle (IXV) Ground Segment and outlines the main operations and lessons learned during the preparation and successful execution of the IXV mission.
Segmentation of the glottal space from laryngeal images using the watershed transform.
Osma-Ruiz, Víctor; Godino-Llorente, Juan I; Sáenz-Lechón, Nicolás; Fraile, Rubén
2008-04-01
The present work describes a new method for the automatic detection of the glottal space from laryngeal images obtained either with high-speed or with conventional video cameras attached to a laryngoscope. The detection is based on the combination of several relevant techniques in the field of digital image processing. The image is segmented with a watershed transform followed by a region merging, while the final decision is taken using a simple linear predictor. This scheme has successfully segmented the glottal space in all the test images used. The method presented can be considered a generalist approach to the segmentation of the glottal space because, in contrast with other methods found in the literature, this approach requires neither initialization nor strict environmental conditions in the images to be processed. Therefore, the main advantage is that the user does not have to outline the region of interest with a mouse click. Some a priori knowledge about the glottal space is still needed, but it can be considered weak compared to the environmental conditions fixed in former works.
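As an illustration of the marker-driven flooding idea behind the watershed transform (not the authors' exact implementation, which additionally applies region merging and a linear predictor), here is a minimal priority-queue watershed on a grayscale image:

```python
import heapq
import numpy as np

def marker_watershed(gray, markers):
    """Minimal marker-based watershed: flood the labeled markers outward,
    always expanding through the lowest-intensity unlabeled pixel next,
    so bright ridges are claimed last and act as region boundaries.
    gray: 2-D float array; markers: 2-D int array with 0 = unlabeled."""
    labels = markers.copy()
    h, w = gray.shape
    heap = []
    # seed the queue with all marker pixels, prioritized by intensity
    for y, x in zip(*np.nonzero(markers)):
        heapq.heappush(heap, (gray[y, x], y, x))
    while heap:
        _, y, x = heapq.heappop(heap)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0:
                labels[ny, nx] = labels[y, x]   # inherit the flooding label
                heapq.heappush(heap, (gray[ny, nx], ny, nx))
    return labels
```

In the paper's setting, the gray image would be a gradient magnitude of the laryngeal frame, so region boundaries settle on strong edges around the glottal space.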
Learning Electron Transport Chain Process in Photosynthesis Using Video and Serious Game
NASA Astrophysics Data System (ADS)
Espinoza Morales, Cecilia
This research investigates students' learning about the electron transport chain (ETC) process in photosynthesis by watching a video followed by playing a serious board game, Electron Chute, that models the ETC process. To accomplish this goal, several learning outcomes regarding the misconceptions students hold about photosynthesis and the ETC process in photosynthesis were defined. Middle school students need opportunities to develop cohesive models that explain the mechanistic processes of biological systems to support their learning. A six-week curriculum on photosynthesis included a one-day learning activity using an ETC video and the Electron Chute game to model the ETC process. The ETC model explained how sunlight energy is converted to chemical energy (ATP) at the molecular level through a flow of electrons. The learning outcomes and the experiences were developed based on the Indiana Academic Standards for biology and the Next Generation Science Standards (NGSS) for the life sciences. Participants were 120 eighth-grade science students from an urban public school. The participants were organized into six classes based on their level of academic readiness, regular and challenge, by the school corporation. Four classes were identified as regular classes and two as challenge classes. Students in challenge classes had the opportunity to be challenged with more difficult content knowledge and higher-level thinking skills; the regular classes were the mainstream at the school. A quasi-experimental design known as non-equivalent group design (NEGD) was used in this study. This design consisted of a pretest-posttest experiment in two initially similar groups: the video-only and video+game treatments. Intact classes were distributed into the treatments. The video-only treatment watched the ETC video; the video+game treatment watched the ETC video and also played the Electron Chute game.
The instrument (knowledge test) consisted of a multiple-choice section addressing general knowledge of photosynthesis and specific knowledge about the ETC, and an essay section where students were asked to interpret each part of a diagram of the ETC process. Considering only the effect of treatments on score gain, regular and challenge groups reached higher scores in the posttest than in the pretest after playing Electron Chute, in both sections of the test. However, the effect of treatments between the classes for each treatment was inconclusive. In the essay, the score gain was higher in the challenge class than in the regular class, but there was no significant difference between the classes in the multiple-choice section. In regard to the learning outcomes, the initial model provided by the ETC video was mostly effective in addressing the misconception related to oxygen production, which derives from the photolysis (splitting) of water molecules. Playing Electron Chute was effective in addressing most of the misconceptions targeted in the instructional design used for the study. Most of these misconceptions were related to ATP and NADPH production and the cell structures where the ETC process takes place. At the end of the video+game treatment, a survey was used to collect data about students' experiences while playing the game. The majority of students agreed that playing the game increased their ability to explain how plants use light energy, but only about a third of them felt they could explain how the ETC worked. Enjoyment and the need for more explanations differed between students who attended the regular and challenge classes. The majority of the students who attended a regular class indicated they liked the ETC video and playing Electron Chute, a percentage of agreement significantly higher than that of students who attended the challenge class.
Likewise, more students in the regular class indicated an interest in learning other science concepts like the ETC. Students who attended the regular class reported that clear rules about how to play the game were helpful for learning, while the challenge group indicated the video and the Electron Chute game could include more explanations. These results suggest the video-and-game learning experience has the potential to engage students' interest in science when they participate in a regular class. This study also demonstrates a principled approach to designing a video and game to illustrate important methods for creating content knowledge that supports students' ability to make sense of how complex systems work. With further refinement of the game, the combination could become a viable learning experience that accommodates the needs of a diverse population of students who may prefer different learning methods.
Dr. Peter Cavanaugh Explains the Need and Operation of the FOOT Experiment
NASA Technical Reports Server (NTRS)
2003-01-01
This video clip is an interview with Dr. Peter Cavanaugh, principal investigator for the FOOT experiment. He explains the reasoning behind the experiment and shows some video clips of the FOOT experiment being calibrated and conducted in orbit. The heart of the FOOT experiment is an instrumented suit called the Lower Extremity Monitoring Suit (LEMS). This customized garment is a pair of Lycra cycling tights incorporating 20 carefully placed sensors and the associated wiring, control units, and amplifiers. LEMS enables the electrical activity of the muscles, the angular motions of the hip, knee, and ankle joints, and the force under both feet to be measured continuously. Measurements are also made on the arm muscles. Information from the sensors can be recorded for up to 14 hours on a small, wearable computer.
TRECVID: the utility of a content-based video retrieval evaluation
NASA Astrophysics Data System (ADS)
Hauptmann, Alexander G.
2006-01-01
TRECVID, an annual retrieval evaluation benchmark organized by NIST, encourages research in information retrieval from digital video. TRECVID benchmarking covers both interactive and manual searching by end users, as well as the benchmarking of some supporting technologies including shot boundary detection, extraction of semantic features, and the automatic segmentation of TV news broadcasts. Evaluations done in the context of the TRECVID benchmarks show that generally, speech transcripts and annotations provide the single most important clue for successful retrieval. However, automatically finding the individual images is still a tremendous and unsolved challenge. The evaluations repeatedly found that none of the multimedia analysis and retrieval techniques provide a significant benefit over retrieval using only textual information such as from automatic speech recognition transcripts or closed captions. In interactive systems, we do find significant differences among the top systems, indicating that interfaces can make a huge difference for effective video/image search. For interactive tasks efficient interfaces require few key clicks, but display large numbers of images for visual inspection by the user. The text search finds the right context region in the video in general, but to select specific relevant images we need good interfaces to easily browse the storyboard pictures. In general, TRECVID has motivated the video retrieval community to be honest about what we don't know how to do well (sometimes through painful failures), and has focused us to work on the actual task of video retrieval, as opposed to flashy demos based on technological capabilities.
Infrared video based gas leak detection method using modified FAST features
NASA Astrophysics Data System (ADS)
Wang, Min; Hong, Hanyu; Huang, Likun
2018-03-01
In order to detect invisible leaking gas, which is dangerous and can easily lead to fire or explosion, many new technologies have arisen in recent years, among which infrared video based gas leak detection is widely recognized as a viable tool. However, existing infrared video based methods detect all the moving regions of a video frame as leaking gas regions, without discriminating the property of each detected region; e.g., a walking person in a video frame may also be detected as gas. To solve this problem, we propose a novel infrared video based gas leak detection method that is able to effectively suppress strong motion disturbances. First, a Gaussian mixture model (GMM) is used to establish the background model. Then, based on the observation that the shapes of gas regions differ from those of most rigid moving objects, we modify the Features from Accelerated Segment Test (FAST) algorithm and use the modified FAST (mFAST) features to describe each connected component. Because the statistical properties of the mFAST features extracted from gas regions differ from those of other motion regions, we propose the Pixel-Per-Points (PPP) condition to further select candidate connected components. Experimental results show that the algorithm effectively suppresses most strong motion disturbances and achieves real-time leaking gas detection.
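The first stage of the pipeline is background modeling. As a hedged sketch, the following uses a single running Gaussian per pixel, which is a simplification of the mixture model the paper actually uses; the learning rate and threshold below are illustrative, not the paper's values:

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel running Gaussian background model: a single-Gaussian
    simplification of the mixture-of-Gaussians (GMM) stage. A pixel is
    foreground when it falls more than k standard deviations from the
    background mean; background statistics adapt slowly over time."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 15.0 ** 2)  # initial variance guess
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        """Return a boolean foreground mask and update the model."""
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var     # pixels far from the model
        bg = ~fg
        # update statistics only where the pixel matches the background
        self.mean[bg] += self.alpha * (frame - self.mean)[bg]
        self.var[bg] += self.alpha * (d2 - self.var)[bg]
        return fg
```

The foreground mask would then be split into connected components, on which the mFAST features and the PPP condition operate.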
ERIC Educational Resources Information Center
Kearney, Matthew; Treagust, David F.; Yeo, Shelley; Zadnik, Marjan G.
2001-01-01
Discusses student and teacher perceptions of a new development in the use of the predict-observe-explain (POE) strategy. This development involves the incorporation of POE tasks into a multimedia computer program that uses real-life, digital video clips of difficult, expensive, time consuming, or dangerous scenarios as stimuli for these tasks.…
Kychakoff, George [Maple Valley, WA; Afromowitz, Martin A [Mercer Island, WA; Hogle, Richard E [Olympia, WA
2008-10-14
A system for detection and control of deposition on pendant tubes in recovery and power boilers includes one or more deposit monitoring sensors operating in infrared regions of about 4 or 8.7 microns. The sensors either directly produce images of the interior of the boiler or feed signals to a data processing system that supplies information to the distributed control system by which the boilers are operated, enabling more efficient operation. The data processing system includes an image pre-processing circuit in which a 2-D image formed from the video data input is captured, with a low-pass filter for noise filtering of the video input. It also includes an image compensation system for array compensation, correcting for pixel variation, dead cells, and the like, and for correcting geometric distortion. An image segmentation module receives the cleaned image from the image pre-processing circuit and separates the image of the recovery boiler interior into background, pendant tubes, and deposition; it performs thresholding/clustering on gray scale and texture, applies morphological transforms to smooth regions, and identifies regions by connected components. An image-understanding unit receives the segmented image from the image segmentation module and matches the derived regions to a 3-D model of the boiler. It derives a 3-D structure of the deposition on the pendant tubes and provides the information about deposits to the plant distributed control system for more efficient operation of the plant's pendant tube cleaning and operating systems.
Multi-frame super-resolution with quality self-assessment for retinal fundus videos.
Köhler, Thomas; Brost, Alexander; Mogalle, Katja; Zhang, Qianyi; Köhler, Christiane; Michelson, Georg; Hornegger, Joachim; Tornow, Ralf P
2014-01-01
This paper proposes a novel super-resolution framework to reconstruct high-resolution fundus images from multiple low-resolution video frames in retinal fundus imaging. Natural eye movements during an examination are used as a cue for super-resolution in a robust maximum a posteriori scheme. In order to compensate for heterogeneous illumination on the fundus, we integrate retrospective illumination correction for photometric registration into the underlying imaging model. Our method utilizes quality self-assessment to provide objective quality scores for reconstructed images as well as to select regularization parameters automatically. In our evaluation on real data acquired from six human subjects with a low-cost video camera, the proposed method achieved considerable enhancements of low-resolution frames and improved noise and sharpness characteristics by 74%. In terms of image analysis, we demonstrate the importance of our method for the improvement of automatic blood vessel segmentation as an example application, where the sensitivity was increased by 13% using super-resolution reconstruction.
Egocentric Temporal Action Proposals.
Shao Huang; Weiqiang Wang; Shengfeng He; Lau, Rynson W H
2018-02-01
We present an approach to localize generic actions in egocentric videos, called temporal action proposals (TAPs), for accelerating the action recognition step. An egocentric TAP refers to a sequence of frames that may contain a generic action performed by the wearer of a head-mounted camera, e.g., taking a knife, spreading jam, pouring milk, or cutting carrots. Inspired by object proposals, this paper aims at generating a small number of TAPs, thereby replacing the popular sliding window strategy, for localizing all action events in the input video. To this end, we first propose to temporally segment the input video into action atoms, which are the smallest units that may contain an action. We then apply a hierarchical clustering algorithm with several egocentric cues to generate TAPs. Finally, we propose two actionness networks to score the likelihood of each TAP containing an action. The top ranked candidates are returned as output TAPs. Experimental results show that the proposed TAP detection framework performs significantly better than relevant approaches for egocentric action detection.
STS-114: Discovery Return to Flight: Langley Engineers Analysis Briefing
NASA Technical Reports Server (NTRS)
2005-01-01
This video features a briefing on NASA Langley Research Center (LaRC) contributions to the Space Shuttle fleet's Return to Flight (RTF). The briefing is split into two sections, which LaRC Shuttle Project Manager Robert Barnes and Deputy Manager Harry Belvin deliver in the form of a viewgraph presentation. Barnes speaks about LaRC contributions to the STS-114 mission of Space Shuttle Discovery, and Belvin speaks about LaRC contributions to subsequent Shuttle missions. In both sections of the briefing, LaRC contributions are in the following areas: External Tank (ET), Orbiter, Systems Integration, and Corrosion/Aging. The managers discuss nondestructive and destructive tests performed on ET foam, wing leading edge reinforced carbon-carbon (RCC) composites, on-orbit tile repair, aerothermodynamic simulation of reentry effects, Mission Management Team (MMT) support, and landing gear tests. The managers briefly answer questions from reporters, and the video concludes with several short video segments about LaRC contributions to the RTF effort.
Dispatches from the Dirt Lab: The Art of Science Communication
NASA Astrophysics Data System (ADS)
Kutcha, Matt
2014-05-01
The variety of media currently available provides more opportunities to science communicators than ever before. However, this variety can also work against the goals of science communication by diluting an individual message with thousands of others, limiting the communicator's ability to focus on an effective method, and fragmenting an already distracted audience. In addition, the technology used for content delivery may not be accessible to everyone. "Dispatches from the Dirt Lab" is a series of short (ca. 6 minutes) Internet videos centered on earth and soil science concepts. The initial goal was to condense several topics' worth of classroom demonstrations into one video segment to serve as an example for educators to use in their own classrooms. As a method of science communication in their own right, the videos integrate best practices from classrooms and laboratories, science visualization, and even improvisational theater. This presentation will include a short example of the style and content found in the videos, and also discuss the rationale behind them.
Neural networks for sign language translation
NASA Astrophysics Data System (ADS)
Wilson, Beth J.; Anspach, Gretel
1993-09-01
A neural network is used to extract relevant features of sign language from video images of a person communicating in American Sign Language or Signed English. The key features are hand motion, hand location with respect to the body, and handshape. A modular hybrid design is under way to apply various techniques, including neural networks, in the development of a translation system that will facilitate communication between deaf and hearing people. One of the neural networks described here is used to classify video images of handshapes into their linguistic counterpart in American Sign Language. The video image is preprocessed to yield Fourier descriptors that encode the shape of the hand silhouette. These descriptors are then used as inputs to a neural network that classifies their shapes. The network is trained with various examples from different signers and is tested with new images from new signers. The results have shown that for coarse handshape classes, the network is invariant to the type of camera used to film the various signers and to the segmentation technique.
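The preprocessing step described above, Fourier descriptors encoding the hand silhouette, can be sketched as follows; the coefficient count and normalization choices are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

def fourier_descriptors(contour_xy, n_coeffs=8):
    """Translation- and scale-invariant Fourier descriptors of a closed
    contour. contour_xy: (N, 2) array of boundary points in order."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]  # boundary as a complex signal
    F = np.fft.fft(z)
    F[0] = 0.0                  # drop the DC term -> translation invariance
    F = F / np.abs(F[1])        # normalize by the fundamental -> scale invariance
    return np.abs(F[1:n_coeffs + 1])  # magnitudes -> start-point/rotation invariance
```

Because the descriptors are identical for translated and scaled copies of the same contour, they make suitable inputs for a handshape classifier that should not depend on where the hand sits in the frame.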
Optical Fabrication and Measurement: AR&C and NGST
NASA Technical Reports Server (NTRS)
Martin, Greg; Engelhaupt, Darell
1997-01-01
The need exists at MSFC for research and development within three major areas: (1) Automated Rendezvous and Capture (AR&C) including Video Guidance System (VGS); (2) Next Generation Space Telescope, (NGST); and (3) replicated optics. AR&C/VGS is a laser retroreflection guidance and tracking device which is used from the shuttle to provide video information regarding deployment and guidance of released satellites. NGST is the next large telescope for space to complement Hubble Space Telescope. This will be larger than HST and may be produced in segments to be assembled and aligned in space utilizing advanced mechanisms and materials. The replicated optics will involve a variety of advanced procedures and materials to produce x-ray collimating as well as imaging telescopes and optical components.
Identification and annotation of erotic film based on content analysis
NASA Astrophysics Data System (ADS)
Wang, Donghui; Zhu, Miaoliang; Yuan, Xin; Qian, Hui
2005-02-01
This paper presents a new method for identifying and annotating erotic films based on content analysis. First, the film is decomposed into video and audio streams. Then, the video stream is segmented into shots and key frames are extracted from each shot. We filter the shots that include potential erotic content by finding the nude human body in key frames. A Gaussian model in YCbCr color space for detecting skin regions is presented. An external polygon that covers the skin regions is used as an approximation of the human body. Finally, we estimate the degree of nudity by calculating the ratio of skin area to whole-body area with weighted parameters. Experimental results show the effectiveness of our method.
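A minimal sketch of the skin-detection step, assuming a BT.601 RGB-to-YCbCr conversion and a Gaussian over the (Cb, Cr) plane; the mean and covariance below are illustrative placeholders, since the paper fits its own model to training pixels:

```python
import numpy as np

# Illustrative skin-cluster parameters in (Cb, Cr) -- placeholders, not
# the paper's fitted values.
SKIN_MEAN = np.array([110.0, 150.0])
SKIN_COV = np.array([[100.0, 0.0],
                     [0.0, 100.0]])

def skin_probability(rgb):
    """Per-pixel Gaussian skin likelihood in the CbCr plane.
    rgb: (..., 3) array with channels in 0..255."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    # BT.601 chroma components (luma Y is discarded for illumination robustness)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    x = np.stack([cb, cr], axis=-1) - SKIN_MEAN
    inv = np.linalg.inv(SKIN_COV)
    m2 = np.einsum('...i,ij,...j->...', x, inv, x)  # squared Mahalanobis distance
    return np.exp(-0.5 * m2)                        # unnormalized Gaussian score
```

Thresholding this score yields the skin mask whose enclosing polygon approximates the body region.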
Complete Scene Recovery and Terrain Classification in Textured Terrain Meshes
Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae
2012-01-01
Terrain classification allows a mobile robot to create an annotated map of its local environment from the three-dimensional (3D) and two-dimensional (2D) datasets collected by its array of sensors, including a GPS receiver, gyroscope, video camera, and range sensor. However, parts of objects that are outside the measurement range of the range sensor will not be detected. To overcome this problem, this paper describes an edge estimation method for complete scene recovery and complete terrain reconstruction. Here, the Gibbs-Markov random field is used to segment the ground from 2D videos and 3D point clouds. Further, a masking method is proposed to classify buildings and trees in a terrain mesh. PMID:23112653
On continuous user authentication via typing behavior.
Roth, Joseph; Liu, Xiaoming; Metaxas, Dimitris
2014-10-01
We hypothesize that an individual computer user has a unique and consistent habitual pattern of hand movements, independent of the text, while typing on a keyboard. As a result, this paper proposes a novel biometric modality named typing behavior (TB) for continuous user authentication. Given a webcam pointing toward a keyboard, we develop real-time computer vision algorithms to automatically extract hand movement patterns from the video stream. Unlike typical continuous biometrics, such as keystroke dynamics (KD), TB provides reliable authentication with a short delay, while avoiding explicit key-logging. We collect a video database where 63 unique subjects type static text and free text for multiple sessions. For one typing video, the hands are segmented in each frame and a unique descriptor is extracted based on the shape and position of the hands, as well as their temporal dynamics in the video sequence. We propose a novel approach, named bag of multi-dimensional phrases, to match the cross-feature and cross-temporal pattern between a gallery sequence and a probe sequence. The experimental results demonstrate a superior performance of TB when compared with KD, which, together with our ultra-real-time demo system, warrants further investigation of this novel vision application and biometric modality.
Novel dynamic caching for hierarchically distributed video-on-demand systems
NASA Astrophysics Data System (ADS)
Ogo, Kenta; Matsuda, Chikashi; Nishimura, Kazutoshi
1998-02-01
It is difficult to simultaneously serve the millions of video streams that will be needed in the age of 'Mega-Media' networks by using only one high-performance server. To distribute the service load, caching servers should be located near users. However, in previously proposed caching mechanisms, the grade of service depends on whether the data is already cached at a caching server. To make the caching servers transparent to the users, the ability to randomly access the large volume of data stored in the central server should be supported, and the operational functions of the provided service should not be narrowly restricted. We propose a mechanism for constructing a video-stream-caching server that is transparent to the users and that always supports all special playback functions for all available programs, with a latency of only 1 or 2 seconds. This mechanism uses a variable-sized-quantum-segment caching technique derived from an analysis of the historical usage log data generated by a line-on-demand-type service experiment and based on the basic techniques used by a time-slot-based multiple-stream video-on-demand server.
Motion adaptive Kalman filter for super-resolution
NASA Astrophysics Data System (ADS)
Richter, Martin; Nasse, Fabian; Schröder, Hartmut
2011-01-01
Superresolution is a sophisticated strategy to enhance the image quality of both low- and high-resolution video, performing tasks like artifact reduction, scaling, and sharpness enhancement in one algorithm, all of which reconstruct high-frequency components (above the Nyquist frequency) in some way. Recursive superresolution algorithms in particular can fulfill high quality requirements because they control the video output using a feedback loop and adapt the result in the next iteration. In addition to excellent output quality, temporal recursive methods are very hardware efficient and therefore attractive even for real-time video processing. A very promising approach is the utilization of Kalman filters, as proposed by Farsiu et al. Reliable motion estimation is crucial for the performance of superresolution. Robust global motion models are therefore mainly used, but this also limits the applicability of superresolution algorithms; handling sequences with complex object motion is essential for a wider field of application. Hence, this paper proposes improvements that extend the Kalman filter approach using motion-adaptive variance estimation and segmentation techniques. Experiments confirm the potential of our proposal for ideal and real video sequences with complex motion and compare its performance to state-of-the-art methods such as trainable filters.
PeakVizor: Visual Analytics of Peaks in Video Clickstreams from Massive Open Online Courses.
Chen, Qing; Chen, Yuanzhe; Liu, Dongyu; Shi, Conglei; Wu, Yingcai; Qu, Huamin
2016-10-01
Massive open online courses (MOOCs) aim to facilitate open-access and massive-participation education. These courses have attracted millions of learners recently. At present, most MOOC platforms record the web log data of learner interactions with course videos. Such large amounts of multivariate data pose a new challenge in terms of analyzing online learning behaviors. Previous studies have mainly focused on the aggregate behaviors of learners from a summative view; however, few attempts have been made to conduct a detailed analysis of such behaviors. To determine complex learning patterns in MOOC video interactions, this paper introduces a comprehensive visualization system called PeakVizor. This system enables course instructors and education experts to analyze the "peaks" or the video segments that generate numerous clickstreams. The system features three views at different levels: the overview with glyphs to display valuable statistics regarding the peaks detected; the flow view to present spatio-temporal information regarding the peaks; and the correlation view to show the correlation between different learner groups and the peaks. Case studies and interviews conducted with domain experts have demonstrated the usefulness and effectiveness of PeakVizor, and new findings about learning behaviors in MOOC platforms have been reported.
NASA Technical Reports Server (NTRS)
Maney, Tucker; Hamburger, Henry
1993-01-01
VIS/ACT is a multi-media educational system for aircrew coordination training (ACT). Students view video segments, answer questions that are adjusted to individual performance, and engage in related activities. Although the system puts the student in a reactive critiquing role, it has proved effective in improving performance on targeted ACT skills in group simulation tasks. VIS/ACT itself is the product of coordination among three Navy agencies.
Tier-Adjacency Is Not a Necessary Condition for Learning Phonotactic Dependencies
ERIC Educational Resources Information Center
Koo, Hahn; Callahan, Lydia
2012-01-01
One hypothesis raised by Newport and Aslin to explain how speakers learn dependencies between nonadjacent phonemes is that speakers track bigram probabilities between two segments that are adjacent to each other within a tier of their own. The hypothesis predicts that a dependency between segments separated from each other at the tier level cannot…
Collective Behaviour in Video Viewing: A Thermodynamic Analysis of Gaze Position.
Burleson-Lesser, Kate; Morone, Flaviano; DeGuzman, Paul; Parra, Lucas C; Makse, Hernán A
2017-01-01
Videos and commercials produced for large audiences can elicit mixed opinions. We wondered whether this diversity is also reflected in the way individuals watch the videos. To answer this question, we presented 65 commercials with high production value to 25 individuals while recording their eye movements, and asked them to provide preference ratings for each video. We find that gaze positions for the most popular videos are highly correlated. To explain the correlations of eye movements, we model them as "interactions" between individuals. A thermodynamic analysis of these interactions shows that they approach a "critical" point such that any stronger interaction would put all viewers into lock-step and any weaker interaction would fully randomise patterns. At this critical point, groups with similar collective behaviour in viewing patterns emerge while maintaining diversity between groups. Our results suggest that popularity of videos is already evident in the way we look at them, and that we maintain diversity in viewing behaviour even as distinct patterns of groups emerge. Our results can be used to predict popularity of videos and commercials at the population level from the collective behaviour of the eye movements of a few viewers.
Review methods for image segmentation from computed tomography images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik
Image segmentation is a challenging process in terms of achieving accuracy, automation, and robustness, especially in medical images. Many segmentation methods can be applied to medical images, but not all are suitable. For medical purposes, the aims of image segmentation are to study anatomical structure, identify the region of interest, measure tissue volume to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for computed tomography (CT) images. CT images have their own characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of the methods, their strengths, and the problems incurred with them are defined and explained. It is necessary to know the suitable segmentation method in order to get an accurate segmentation. This paper can serve as a guide for researchers choosing a suitable method for segmenting images from CT scans.
Marcoux, J; Rossignol, S
2000-11-15
After an acute low thoracic spinal transection (T13), cats can be made to walk with the hindlimbs on a treadmill with clonidine, an alpha2-noradrenergic agonist. Because previous studies of neonatal rat spinal cord in vitro suggest that the most important lumbar segments for rhythmogenesis are L1-L2, we investigated the role of various lumbar segments in the initiation of walking movements on a treadmill in adult cats spinalized (T13) 5-6 d earlier. The locomotor activities were evaluated from electromyographic and video recordings. The results show that: (1) localized topical application of clonidine in restricted baths over either the L3-L4 or the L5-L7 segments was sufficient to induce walking movements. Yohimbine, an alpha2-noradrenergic antagonist, could block this locomotion when applied over L3-L4 or L5-L7; (2) microinjections of clonidine in one or two lumbar segments from L3 to L5 could also induce locomotion; (3) after an intravenous injection of clonidine, locomotion was blocked by microinjections of yohimbine in segments L3, L4, or L5 but not if the injection was in L6; (4) locomotion was also blocked in all cases by additional spinal transections at L3 or L4. These results show that it is possible to initiate walking in the adult spinal cat with a pharmacological stimulation of a restricted number of lumbar segments and also that the integrity of the L3-L4 segments is necessary to sustain the locomotor activity.
Methods for 2-D and 3-D Endobronchial Ultrasound Image Segmentation.
Zang, Xiaonan; Bascom, Rebecca; Gilbert, Christopher; Toth, Jennifer; Higgins, William
2016-07-01
Endobronchial ultrasound (EBUS) is now commonly used for cancer-staging bronchoscopy. Unfortunately, EBUS is challenging to use and interpreting EBUS video sequences is difficult. Other ultrasound imaging domains, hampered by related difficulties, have benefited from computer-based image-segmentation methods. Yet, so far, no such methods have been proposed for EBUS. We propose image-segmentation methods for 2-D EBUS frames and 3-D EBUS sequences. Our 2-D method adapts the fast-marching level-set process, anisotropic diffusion, and region growing to the problem of segmenting 2-D EBUS frames. Our 3-D method builds upon the 2-D method while also incorporating the geodesic level-set process for segmenting EBUS sequences. Tests with lung-cancer patient data showed that the methods ran fully automatically for nearly 80% of test cases. For the remaining cases, the only user interaction required was the selection of a seed point. When compared to ground-truth segmentations, the 2-D method achieved an overall Dice index of 90.0% ± 4.9%, while the 3-D method achieved an overall Dice index of 83.9% ± 6.0%. In addition, the computation time (2-D, 0.070 s/frame; 3-D, 0.088 s/frame) was two orders of magnitude faster than interactive contour definition. Finally, we demonstrate the potential of the methods for EBUS localization in a multimodal image-guided bronchoscopy system.
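The Dice index used above to score the EBUS segmentations can be computed directly from a pair of binary masks. A minimal sketch (the 8x8 toy masks are illustrative, not EBUS data):

```python
import numpy as np

def dice_index(seg, gt):
    """Dice similarity between two binary masks (True = structure, False = background)."""
    seg = np.asarray(seg, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(seg, gt).sum()
    denom = seg.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy masks standing in for an automatic segmentation and its ground truth.
auto = np.zeros((8, 8), dtype=bool); auto[2:6, 2:6] = True    # 16-pixel square
truth = np.zeros((8, 8), dtype=bool); truth[3:7, 2:6] = True  # same square, one row down
print(dice_index(auto, truth))  # 2*12 / (16+16) = 0.75
```

With a one-row offset the two 16-pixel squares share 12 pixels, so the index is 0.75; a perfect match would give 1.0.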
Interactive segmentation of tongue contours in ultrasound video sequences using quality maps
NASA Astrophysics Data System (ADS)
Ghrenassia, Sarah; Ménard, Lucie; Laporte, Catherine
2014-03-01
Ultrasound (US) imaging is an effective and non-invasive way of studying the tongue motions involved in normal and pathological speech, and the results of US studies are of interest for the development of new strategies in speech therapy. State-of-the-art tongue shape analysis techniques based on US images depend on semi-automated tongue segmentation and tracking techniques. Recent work has mostly focused on improving the accuracy of the tracking techniques themselves. However, occasional errors remain inevitable, regardless of the technique used, and the tongue tracking process must thus be supervised by a speech scientist who will correct these errors manually or semi-automatically. This paper proposes an interactive framework to facilitate this process. In this framework, the user is guided towards potentially problematic portions of the US image sequence by a segmentation quality map that is based on the normalized energy of an active contour model and automatically produced during tracking. When a problematic segmentation is identified, corrections to the segmented contour can be made on one image and propagated both forward and backward in the problematic subsequence, thereby improving the user experience. The interactive tools were tested in combination with two different tracking algorithms. Preliminary results illustrate the potential of the proposed framework, suggesting that it generally reduces user interaction time, with little change in segmentation repeatability.
Tolchinsky, Anatol; Jefferson, Stephen D
2011-09-01
Although numerous benefits have been uncovered related to moderate video game play, research suggests that problematic video game playing behaviors can cause problems in the lives of some video game players. To further our understanding of this phenomenon, we investigated how problematic video game playing symptoms are related to an assortment of variables, including time management skills and attention-deficit/hyperactivity disorder (ADHD) symptoms. Additionally, we tested several simple mediation/moderation models to better explain previous theories that posit simple correlations between these variables. As expected, the results from the present study indicated that time management skills appeared to mediate the relationship between ADHD symptoms and problematic play endorsement (though only for men). Unexpectedly, we found that ADHD symptoms appeared to mediate the relation between time management skills and problematic play behaviors; however, this was only found for women in our sample. Finally, future implications are discussed.
NASA Technical Reports Server (NTRS)
McCarty, Kaley Corinne
2013-01-01
One of the projects that I am completing this summer is a Launch Services Program intern informational video on how to set up a clean room. The purpose of this video is to accompany a clean room kit that can be checked out by employees at the Kennedy Space Center and taken to classrooms to help educate students and spark their interest in NASA. The video will explain how to set up and operate a clean room at NASA. This is a group project, so we will be acting as a team and contributing our own input and ideas. We will include various activities for children in classrooms to complete while learning and having fun. Activities that we will explain and film include: helping children understand the proper way to wear a bunny suit, a brief background on clean rooms, and the importance of maintaining the cleanliness of a spacecraft. This project will be shown to LSP management and co-workers; we will be presenting the video once it is completed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duberstein, Corey A.; Matzner, Shari; Cullinan, Valerie I.
Surveying wildlife at risk from offshore wind energy development is difficult and expensive. Infrared video can be used to record birds and bats that pass through the camera view, but it is also time consuming and expensive to review video and determine what was recorded. We proposed algorithm and software development to identify and differentiate thermally detected targets of interest, allowing automated processing of thermal image data to enumerate birds, bats, and insects. During FY2012 we developed computer code within MATLAB to identify objects recorded in video and extract attribute information that describes the objects recorded. We tested the efficiency of track identification using observer-based counts of tracks within segments of sample video. We examined object attributes, modeled the effects of random variability on attributes, and produced data smoothing techniques to limit random variation within attribute data. We also began drafting and testing methodology to identify objects recorded on video. We recorded approximately 10 hours of infrared video of various marine birds, passerine birds, and bats near the Pacific Northwest National Laboratory (PNNL) Marine Sciences Laboratory (MSL) at Sequim, Washington. A total of 6 hours of bird video was captured overlooking Sequim Bay over a series of weeks. An additional 2 hours of video of birds was captured during two weeks overlooking Dungeness Bay within the Strait of Juan de Fuca. Bats and passerine birds (swallows) were also recorded at dusk on the MSL campus during nine evenings. An observer noted the identity of objects viewed through the camera concurrently with recording. These video files will provide the information necessary to produce and test software developed during FY2013. The annotation will also form the basis for creation of a method to reliably identify recorded objects.
An Experiment with Public-Oriented Knowledge Transfer: A Video on Quebec's Bill 10
Lemoine, Marie-Ève; Laliberté, Maude
2016-01-01
When decision-makers are engaged in a polarized discourse and leaving aside evidence-based recommendations, is there a role for researchers in the dissemination of this scientific evidence to the general public as a means to counterbalance the debate? In response to the controversial Bill 10 in Quebec, we developed and posted a knowledge transfer video on YouTube to help stimulate critical public debate. This article explains our approach and methodology, and the impact of the video, which, in the space of two weeks, had more than 9,500 views, demonstrating the pertinence of such initiatives. We conclude with recommendations for other research groups to engage in public debates. PMID:27232235
Automated multiple target detection and tracking in UAV videos
NASA Astrophysics Data System (ADS)
Mao, Hongwei; Yang, Chenhui; Abousleman, Glen P.; Si, Jennie
2010-04-01
In this paper, a novel system is presented to detect and track multiple targets in Unmanned Air Vehicles (UAV) video sequences. Since the output of the system is based on target motion, we first segment foreground moving areas from the background in each video frame using background subtraction. To stabilize the video, a multi-point-descriptor-based image registration method is performed where a projective model is employed to describe the global transformation between frames. For each detected foreground blob, an object model is used to describe its appearance and motion information. Rather than immediately classifying the detected objects as targets, we track them for a certain period of time and only those with qualified motion patterns are labeled as targets. In the subsequent tracking process, a Kalman filter is assigned to each tracked target to dynamically estimate its position in each frame. Blobs detected at a later time are used as observations to update the state of the tracked targets to which they are associated. The proposed overlap-rate-based data association method considers the splitting and merging of the observations, and therefore is able to maintain tracks more consistently. Experimental results demonstrate that the system performs well on real-world UAV video sequences. Moreover, careful consideration given to each component in the system has made the proposed system feasible for real-time applications.
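The per-target Kalman filter described above can be sketched as a constant-velocity filter updated with blob centroids; the state layout, noise settings, and unit frame step below are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal constant-velocity Kalman filter for one tracked blob.
    State: [x, y, vx, vy]; observation: blob centroid [x, y]."""
    def __init__(self, x, y, q=1.0, r=4.0):
        self.s = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4) * 10.0                 # initial state uncertainty
        self.F = np.array([[1, 0, 1, 0],          # dt = 1 frame
                           [0, 1, 0, 1],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * q                    # process noise
        self.R = np.eye(2) * r                    # measurement noise

    def predict(self):
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]                         # predicted position

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.s        # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)          # Kalman gain
        self.s = self.s + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Track a blob moving ~2 px/frame to the right for five frames.
kf = ConstantVelocityKalman(0.0, 0.0)
for t in range(1, 6):
    kf.predict()
    kf.update([2.0 * t, 0.0])
print(np.round(kf.s[:2], 1))  # estimated position settles near [10, 0]
```

In a full tracker, the predicted position from `predict()` would feed the overlap-rate data association, and only associated blobs would be passed to `update()`.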
An introduction to scriptwriting for video and multimedia.
Guth, J
1995-06-01
The elements of audiovisual productions are explained and illustrated, including words, moving images, still images, graphics, narration, music, landscape sounds, pacing, titling, and font styles. Three different production styles are analysed, and examples of those styles are discussed. Rules for writing spoken words, composing blocks of information, and explaining technical information to a lay audience are also provided. Storyboard and scripting forms and examples are included.
Surgical repair of sciatic nerve traumatic rupture: technical considerations and approaches.
Abou-Al-Shaar, Hussam; Yoon, Nam; Mahan, Mark A
2018-01-01
Traumatic proximal sciatic nerve rupture poses surgical repair dilemmas. Disruption often causes a large nerve gap after proximal neuroma and distal scar removal. Also, autologous graft material to bridge the segmental defect may be insufficient, given the sciatic nerve diameter. The authors utilized knee flexion to allow single neurorrhaphy repair of a large sciatic nerve defect, bringing healthy proximal stump to healthy distal segment. To avoid aberrant regeneration, the authors split the sciatic nerve into common peroneal and tibial divisions. After 3 months, the patient can fully extend the knee and has evidence of distal regeneration and nerve continuity without substantial injury. The video can be found here: https://youtu.be/lsezRT5I8MU.
Klein, Johannes; Leupold, Stefan; Biegler, Ilona; Biedendieck, Rebekka; Münch, Richard; Jahn, Dieter
2012-09-01
Time-lapse imaging in combination with fluorescence microscopy enables the investigation of gene regulatory circuits and has uncovered phenomena like culture heterogeneity. In this context, computational image processing for the analysis of single-cell behaviour plays an increasing role in systems biology and mathematical modelling approaches. Consequently, we developed a software package with a graphical user interface for the analysis of single bacterial cell behaviour. The new software, TLM-Tracker, allows flexible and user-friendly segmentation, tracking, and lineage analysis of microbial cells in time-lapse movies. The software package, including manual, tutorial video, and examples, is available as Matlab code or executable binaries at http://www.tlmtracker.tu-bs.de.
Organ donation video messaging in motor vehicle offices: results of a randomized trial.
Rodrigue, James R; Fleishman, Aaron; Fitzpatrick, Sean; Boger, Matthew
2015-12-01
Since nearly all registered organ donors in the United States signed up via a driver's license transaction, motor vehicle (MV) offices represent an important venue for organ donation education. To evaluate the impact of organ donation video messaging in MV offices, we conducted a 2-group (usual care vs usual care + video messaging) randomized trial with baseline, intervention, and follow-up assessment phases at twenty-eight MV offices in Massachusetts. Usual care comprised education of MV clerks, display of organ donation print materials (ie, posters, brochures, signing mats), and a volunteer ambassador program. The intervention included video messaging with silent (subtitled) segments highlighting individuals affected by donation, playing on a recursive loop on monitors in MV waiting rooms. Outcomes were aggregate monthly donor designation rates at MV offices (primary) and the percentage of MV customers who registered as donors after viewing the video (secondary). Controlling for baseline donor designation rate, analysis of covariance showed a significant group effect for the intervention phase (F=7.3, P=.01): the usual-care group had a significantly higher aggregate monthly donor designation rate than the intervention group. In the logistic regression model of customer surveys (n=912), prior donor designation (β=-1.29, odds ratio [OR]=0.27 [95% CI=0.20-0.37], P<.001), white race (β=0.57, OR=1.77 [95% CI=1.23-2.54], P=.002), and viewing the intervention video (β=0.73, OR=1.54 [95% CI=1.24-2.60], P=.01) were statistically significant predictors of donor registration on the day of the survey. The relatively low uptake of the video intervention by customers most likely contributed to the negative trial finding.
Flexible methods for segmentation evaluation: Results from CT-based luggage screening
Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry
2017-01-01
BACKGROUND Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms’ behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. OBJECTIVE To develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors. The methods must measure feature recovery and allow us to prioritize segments. METHODS We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. RESULTS Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. CONCLUSIONS Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms. PMID:24699346
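The systematic errors named above can be counted directly from labeled segmentation maps: a ground-truth object overlapped by more than one predicted segment is oversegmented, and a predicted segment spanning more than one object undersegments. This label-overlap criterion is an illustrative simplification, not the paper's statistical or information-theoretic method:

```python
import numpy as np

def over_under_segmentation(pred, gt):
    """Count systematic errors on integer-labeled maps (0 = background).
    Returns (oversegmented ground-truth objects, undersegmenting predicted segments)."""
    pred = np.asarray(pred)
    gt = np.asarray(gt)
    # Set of (predicted label, ground-truth label) pairs that overlap somewhere.
    pairs = {(p, g) for p, g in zip(pred.ravel(), gt.ravel()) if p and g}
    over = sum(1 for g in set(gt.ravel()) - {0}
               if sum(1 for p_, g_ in pairs if g_ == g) > 1)
    under = sum(1 for p in set(pred.ravel()) - {0}
                if sum(1 for p_, g_ in pairs if p_ == p) > 1)
    return over, under

gt   = np.array([[1, 1, 2, 2]])           # two ground-truth objects
pred = np.array([[1, 3, 2, 2]])           # object 1 split into segments 1 and 3
print(over_under_segmentation(pred, gt))  # (1, 0): one oversegmented object
```

A threshold on overlap area (ignoring tiny spurious intersections) would make the counts robust in practice; it is omitted here for brevity.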
What problem are you working on?
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2013-11-21
Superconductors, supercomputers, new materials, clean energy, big science - ORNL researchers' work is multidisciplinary and world-leading. Hear them explain it in their own words in this video first shown at UT-Battelle's 2013 Awards Night.
Sines and Cosines. Part 2 of 3
NASA Technical Reports Server (NTRS)
Apostol, Tom M. (Editor)
1993-01-01
The Law of Sines and the Law of Cosines are introduced and demonstrated in this 'Project Mathematics' series video using both film footage and computer animation. This video deals primarily with the mathematical field of Trigonometry and explains how these laws were developed and their applications. One significant use is geographical and geological surveying. This includes both the triangulation method and the spirit leveling method. With these methods, it is shown how the height of the tallest mountain in the world, Mt. Everest, was determined.
Kilimanjaro through the keyhole.
Parkin, Amanda
2011-01-01
Amanda Parkin, marketing manager at OR Networks, an approved design-and-build consultant to the NHS for integrated theatres and digital video communications, explains how, with the help of the company and its specialist equipment, Northumbrian surgeons successfully established a two-way audio/video link to their counterparts at a hospital in the foothills of Mount Kilimanjaro, enabling them to train the Tanzanian surgeons in laparoscopic surgery. Alongside opening up many new teaching opportunities in both the UK and Tanzania, the link-up has already saved countless lives.
ePatients on YouTube: Analysis of Four Experiences From the Patients' Perspective
Gómez-Zúñiga, Beni; Pousada, Modesta; Hernández-Encuentra, Eulàlia; Armayones, Manuel
2012-01-01
Background Many patients share their personal experiences and opinions using online video platforms. These videos are watched by millions of health consumers and health care professionals. Although it has become a popular phenomenon, little is known about patients who share videos online and why they do so. Objective We aimed to explore the motivations and challenges faced by patients who share videos about their health and experiences on YouTube. As part of a conference discussion, we asked several patients actively engaged on YouTube to make a video explaining their motivations. This paper discusses these videos. Methods In this qualitative study, we performed an analysis of the videos created by 4 patients about their self-reported motivations and challenges they face as YouTube users. First, two judges compared the transcriptions and decided the exact wording when confusing content was found. Second, two judges categorized the content of the videos to identify the major themes. Results Four main categories emerged: (1) the origin or cause for making the first video, (2) the objectives that they achieve by continuing to make videos, (3) the perception of community, and (4) the negative consequences of the experience. Conclusions The main reason for making videos was to bridge the gap between traditional health information about their diseases and everyday life. The first consequence of sharing their life on YouTube was a loss of privacy. However, they also experienced the positive effects of expressing their feelings, being part of a large community of peers, and helping others to deal with a chronic condition. PMID:25075229
Military display performance parameters
NASA Astrophysics Data System (ADS)
Desjardins, Daniel D.; Meyer, Frederick
2012-06-01
The military display market is analyzed in terms of four of its segments: avionics, vetronics, dismounted soldier, and command and control. Requirements are summarized for a number of technology-driving parameters, to include luminance, night vision imaging system compatibility, gray levels, resolution, dimming range, viewing angle, video capability, altitude, temperature, shock and vibration, etc., for direct-view and virtual-view displays in cockpits and crew stations. Technical specifications are discussed for selected programs.
NASA Technical Reports Server (NTRS)
Kratochvil, D.; Bowyer, J.; Bhushan, C.; Steinnagel, K.; Kaushal, D.; Al-Kinani, G.
1983-01-01
Voice applications, data applications, video applications, impacted baseline forecasts, market distribution model, net long haul forecasts, trunking earth station definition and costs, trunking space segment cost, trunking entrance/exit links, trunking network costs and crossover distances with terrestrial tariffs, net addressable forecasts, capacity requirements, improving spectrum utilization, satellite system market development, and the 30/20 net accessible market are considered.
NASA Astrophysics Data System (ADS)
2002-09-01
Footage shows the crew of STS-112 (Jeffrey Ashby, Commander; Pamela Melroy, Pilot; David Wolf, Piers Sellers, Sandra Magnus, and Fyodor Yurchikhin, Mission Specialists) during several parts of their training. The video is arranged into short segments. In 'Topside Activities at the NBL', Wolf and Sellers are fitted with EVA suits for pool training. 'Pre-Launch Bailout Training in CCT II' shows all six crew members exiting from the hatch on a model of a shuttle orbiter cockpit. 'EVA Training in the VR Lab' shows a crew member training with a virtual reality simulator, interspersed with footage of Magnus, and Wolf with Melroy, at monitors. There is a 'Crew Photo Session', and 'Pam Melroy and Sandy Magnus at the SES Dome' also features a virtual reality simulator. The final two segments of the video involve hands-on training. 'Post Landing Egress at the FFT' shows the crew suiting up into their flight suits, and being raised on a harness, to practice rappelling from the cockpit hatch. 'EVA Prep and Post at the ISS Airlock' shows the crew assembling an empty EVA suit onboard a model of a module. The crew tests oxygen masks, and Sellers is shown on an exercise bicycle with an oxygen mask, with his heart rate monitored (not shown).
Vasconcelos, Francisco; Brandão, Patrick; Vercauteren, Tom; Ourselin, Sebastien; Deprest, Jan; Peebles, Donald; Stoyanov, Danail
2018-06-27
Intrauterine foetal surgery is the treatment option for several congenital malformations. For twin-to-twin transfusion syndrome (TTTS), interventions involve the use of a laser fibre to ablate vessels in a shared placenta. The procedure presents a number of challenges for the surgeon, and computer-assisted technologies can potentially provide significant support. Vision-based sensing is the primary source of information from the intrauterine environment, and vision methods are therefore an appealing way to extract higher-level information from the surgical site. In this paper, we propose a framework to detect one of the key steps during TTTS interventions: ablation. We adopt a deep learning approach, specifically the ResNet101 architecture, to classify the different surgical actions performed during laser ablation therapy. We perform two-fold cross-validation using almost 50,000 frames from five different TTTS ablation procedures. Our results show that deep learning methods are a promising approach for ablation detection. To our knowledge, this is the first attempt at automating photocoagulation detection using video, and our technique can be an important component of a larger assistive framework for enhanced foetal therapies. The current implementation does not include semantic segmentation or localisation of the ablation site; this would be a natural extension in future work.
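The evaluation protocol described above (two-fold cross-validation split by procedure, so the classifier is never tested on a procedure it trained on) can be sketched as follows. The nearest-centroid classifier and the synthetic "frame features" stand in for the ResNet101 pipeline and are purely illustrative assumptions, not the authors' data or code.

```python
import numpy as np

def two_fold_cv(features, labels, procedure_ids):
    """Two-fold cross-validation with folds defined by procedure.

    All frames from the same procedure stay in one fold, mimicking the
    paper's setup. A nearest-centroid classifier stands in for the deep model.
    """
    procs = np.unique(procedure_ids)
    folds = [procs[::2], procs[1::2]]             # alternate procedures per fold
    accs = []
    for held_out in folds:
        test = np.isin(procedure_ids, held_out)
        train = ~test
        # One centroid per class, computed on training frames only.
        centroids = {c: features[train & (labels == c)].mean(axis=0)
                     for c in np.unique(labels)}
        classes = sorted(centroids)
        dists = np.stack([np.linalg.norm(features[test] - centroids[c], axis=1)
                          for c in classes], axis=1)
        pred = np.array(classes)[dists.argmin(axis=1)]
        accs.append(float((pred == labels[test]).mean()))
    return accs

# Synthetic "frame features": ablation frames shifted away from other frames.
rng = np.random.default_rng(0)
n = 600
labels = rng.integers(0, 2, n)                    # 1 = ablation, 0 = other
procedure_ids = rng.integers(0, 5, n)             # five procedures
features = rng.normal(0, 1, (n, 8)) + 3.0 * labels[:, None]
accs = two_fold_cv(features, labels, procedure_ids)
print(accs)
```

Because each fold holds out whole procedures, the reported accuracies reflect generalisation to unseen surgeries rather than to unseen frames of a seen surgery.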
Colonoscopy tutorial software made with a cadaver's sectioned images.
Chung, Beom Sun; Chung, Min Suk; Park, Hyung Seon; Shin, Byeong-Seok; Kwon, Koojoo
2016-11-01
Novice doctors may watch tutorial videos in training for actual or computed tomographic (CT) colonoscopy. The conventional learning videos can be complemented by virtual colonoscopy software made with a cadaver's sectioned images (SIs). The objective of this study was to assist colonoscopy trainees with the new interactive software. Submucosal segmentation on the SIs was carried out through the whole length of the large intestine. With the SIs and segmented images, a three dimensional model was reconstructed. Six-hundred seventy-one proximal colonoscopic views (conventional views) and corresponding distal colonoscopic views (simulating the retroflexion of a colonoscope) were produced. Not only navigation views showing the current location of the colonoscope tip and its course, but also, supplementary description views were elaborated. The four corresponding views were put into convenient browsing software to be downloaded free from the homepage (anatomy.co.kr). The SI colonoscopy software with the realistic images and supportive tools was available to anybody. Users could readily notice the position and direction of the virtual colonoscope tip and recognize meaningful structures in colonoscopic views. The software is expected to be an auxiliary learning tool to improve technique and related knowledge in actual and CT colonoscopies. Hopefully, the software will be updated using raw images from the Visible Korean project. Copyright © 2016 Elsevier GmbH. All rights reserved.
Reconstructing the flight kinematics of swarming and mating in wild mosquitoes
Butail, Sachit; Manoukis, Nicholas; Diallo, Moussa; Ribeiro, José M.; Lehmann, Tovi; Paley, Derek A.
2012-01-01
We describe a novel tracking system for reconstructing three-dimensional tracks of individual mosquitoes in wild swarms and present the results of validating the system by filming swarms and mating events of the malaria mosquito Anopheles gambiae in Mali. The tracking system is designed to address noisy, low frame-rate (25 frames per second) video streams from a stereo camera system. Because flying A. gambiae move at 1–4 m s⁻¹, they appear as faded streaks in the images or sometimes do not appear at all. We provide an adaptive algorithm to search for missing streaks and a likelihood function that uses streak endpoints to extract velocity information. A modified multi-hypothesis tracker probabilistically addresses occlusions and a particle filter estimates the trajectories. The output of the tracking algorithm is a set of track segments with an average length of 0.6–1 s. The segments are verified and combined under human supervision to create individual tracks up to the duration of the video (90 s). We evaluate tracking performance using an established metric for multi-target tracking and validate the accuracy using independent stereo measurements of a single swarm. Three-dimensional reconstructions of A. gambiae swarming and mating events are presented. PMID:22628212
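The particle-filter stage of a tracker like this can be illustrated with a minimal bootstrap filter. The 1-D constant-velocity model, the noise levels, and the 25 fps timing below are illustrative simplifications, not the authors' implementation (which works in 3-D from stereo streaks).

```python
import numpy as np

def particle_filter(measurements, n_particles=2000, dt=0.04,
                    process_std=0.5, meas_std=0.05, rng=None):
    """Minimal bootstrap particle filter with a 1-D constant-velocity model.

    State per particle: [position, velocity]; dt = 0.04 s mimics 25 fps video.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    particles = np.empty((n_particles, 2))
    # Initialise around the first measurement with a broad velocity prior
    # (flying A. gambiae move at roughly 1-4 m/s).
    particles[:, 0] = measurements[0] + rng.normal(0, meas_std, n_particles)
    particles[:, 1] = rng.normal(0, 4.0, n_particles)
    estimates = []
    for z in measurements:
        # Predict: constant-velocity propagation plus process noise.
        particles[:, 0] += particles[:, 1] * dt
        particles[:, 1] += rng.normal(0, process_std, n_particles)
        # Update: Gaussian measurement likelihood, shifted to avoid underflow.
        d2 = 0.5 * ((z - particles[:, 0]) / meas_std) ** 2
        w = np.exp(-(d2 - d2.min()))
        w /= w.sum()
        estimates.append(float(w @ particles[:, 0]))
        # Resample to concentrate particles on likely states.
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return estimates

# Synthetic 1-D track: a 2 m/s flight observed with 5 cm noise at 25 fps.
truth = 2.0 * 0.04 * np.arange(25)
obs = truth + np.random.default_rng(1).normal(0, 0.05, truth.size)
est = particle_filter(obs, rng=np.random.default_rng(2))
print(abs(est[-1] - truth[-1]))   # final position error, metres
```

Resampling after every update is what lets the filter infer velocity indirectly: particles whose velocities are inconsistent with the observed positions are quickly eliminated.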
NASA Technical Reports Server (NTRS)
2002-01-01
Dactyl Alphabet Gesture Recognition in a Video Sequence Using Microsoft Kinect
NASA Astrophysics Data System (ADS)
Artyukhin, S. G.; Mestetskiy, L. M.
2015-05-01
This paper presents an efficient framework for static gesture recognition based on data obtained from web cameras and the Kinect depth sensor (RGB-D data). Each gesture is given by a pair of images: a color image and a depth map. The database stores a feature description for each gesture of the alphabet, generated frame by frame. The recognition algorithm takes a video sequence (a sequence of frames) as input and either matches each frame to a gesture from the database or decides that no suitable gesture exists. First, each frame of the video sequence is classified separately, without inter-frame information. Then, a run of consecutive frames labelled with the same gesture is grouped into a single static gesture. We propose a method for combined segmentation of a frame using both the depth map and the RGB image. The primary segmentation is based on the depth map: it gives positional information and a rough border of the hand. The border is then refined using the color image, and the shape of the hand is analysed. The continuous-skeleton method is used to generate features. We propose a method based on terminal skeleton branches, which makes it possible to determine the positions of the fingers and the wrist. The classification features for a gesture describe the positions of the fingers relative to the wrist. Experiments with the developed algorithm were carried out on the American Sign Language alphabet. An American Sign Language gesture has several components, including the shape of the hand, its orientation in space, and the type of movement. The accuracy of the proposed method is evaluated on a collected gesture database consisting of 2700 frames.
A Review on Segmentation of Positron Emission Tomography Images
Foster, Brent; Bagci, Ulas; Mansoor, Awais; Xu, Ziyue; Mollura, Daniel J.
2014-01-01
Positron Emission Tomography (PET), a non-invasive functional imaging method at the molecular level, images the distribution of biologically targeted radiotracers with high sensitivity. PET imaging provides detailed quantitative information about many diseases and is often used to evaluate inflammation, infection, and cancer by detecting emitted photons from a radiotracer localized to abnormal cells. In order to differentiate abnormal tissue from surrounding areas in PET images, image segmentation methods play a vital role; therefore, accurate image segmentation is often necessary for proper disease detection, diagnosis, treatment planning, and follow-ups. In this review paper, we present state-of-the-art PET image segmentation methods, as well as the recent advances in image segmentation techniques. In order to make this manuscript self-contained, we also briefly explain the fundamentals of PET imaging, the challenges of diagnostic PET image analysis, and the effects of these challenges on the segmentation results. PMID:24845019
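As a concrete illustration of the simplest family of methods such reviews cover, fixed-percentage thresholding keeps voxels whose uptake exceeds a fraction of the maximum. The 40% fraction and the synthetic uptake profile below are illustrative assumptions for the sketch, not a clinical recommendation.

```python
import numpy as np

def fixed_threshold_segment(pet, frac=0.40):
    """Segment a PET image by keeping voxels above frac * maximum uptake.

    Fixed-percentage thresholding (often 40-50% of the maximum standardized
    uptake value) is among the simplest PET segmentation methods.
    """
    return pet >= frac * pet.max()

# Synthetic 1-D "lesion" profile: Gaussian uptake on a low background.
x = np.linspace(-5, 5, 101)
uptake = 0.1 + np.exp(-x ** 2)            # peak value ~1.1 at x = 0
mask = fixed_threshold_segment(uptake, frac=0.40)
print(int(mask.sum()))                    # → 21 voxels kept around the peak
```

Threshold-based methods are sensitive to background uptake and lesion heterogeneity, which is one reason the literature has moved toward the more sophisticated approaches surveyed in the review.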
Astronomy4Kids: A new, online, STEM-focused, video education outreach program
NASA Astrophysics Data System (ADS)
Pearson, Richard L.; Pearson, Sarah R.
2017-06-01
Recent research indicates significant benefits of early childhood introductions to language, mathematics, and general science concepts. Specifically, a child that is introduced to a concept at a young age is more prepared to receive it in its entirety later. Astronomy4Kids was created to bring science, technology, engineering, and math (STEM) concepts to the youngest learners (those under the age of eight, or those from pre-school to about second-grade). The videos are presented in a succinct, one-on-one manner, and provide a creative learning environment for the viewers. Following the preschool education video principles established by Fred Rogers, we hope to give young children access to an expert astronomer who can explain things simply and sincerely. We believe presenting the material in this manner will make it engaging for even the youngest scholar and available to any interested party. The videos can be freely accessed at www.astronomy4kids.net.
Dash Cam videos on YouTube™ offer insights into factors related to moose-vehicle collisions.
Rea, Roy V; Johnson, Chris J; Aitken, Daniel A; Child, Kenneth N; Hesse, Gayle
2018-03-26
To gain a better understanding of the dynamics of moose-vehicle collisions, we analyzed 96 videos of moose-vehicle interactions recorded by vehicle dash-mounted cameras (Dash Cams) that had been posted to the video-sharing website YouTube™. Our objective was to determine the effects of road conditions, season and weather, moose behavior, and driver response to actual collisions compared to near misses when the collision was avoided. We identified 11 variables that were consistently observable in each video and that we hypothesized would help to explain a collision or near miss. The most parsimonious logistic regression model contained variables for number of moose, sight time, vehicle slows, and vehicle swerves (AICc w = 0.529). This model had good predictive accuracy (AUC = 0.860, SE = 0.041). The only statistically significant variable from this model that explained the difference between moose-vehicle collisions and near misses was 'Vehicle slows'. Our results provide no evidence that road surface conditions (dry, wet, ice or snow), roadside habitat type (forested or cleared), the extent to which roadside vegetation was cleared, natural light conditions (overcast, clear, twilight, dark), season (winter, spring and summer, fall), the presence of oncoming traffic, or the direction from which the moose entered the roadway had any influence on whether a motorist collided with a moose. Dash Cam videos posted to YouTube™ provide a unique source of data for road safety planners trying to understand what happens in the moments just before a moose-vehicle collision and how those factors may differ from moose-vehicle encounters that do not result in a collision. Copyright © 2018 Elsevier Ltd. All rights reserved.
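The model-selection step used in analyses like this (logistic regression candidates ranked by the small-sample corrected AIC) can be sketched on synthetic data. The variable names, effect sizes, and gradient-ascent fitter below are illustrative assumptions, not the authors' data or software.

```python
import numpy as np

def fit_logistic(X, y, iters=2000, lr=0.5):
    """Fit logistic regression by gradient ascent; returns (beta, log-likelihood)."""
    Xb = np.hstack([np.ones((len(y), 1)), X])        # prepend intercept column
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Xb @ beta))
        beta += lr * Xb.T @ (y - p) / len(y)         # average log-lik gradient
    p = np.clip(1 / (1 + np.exp(-Xb @ beta)), 1e-12, 1 - 1e-12)
    return beta, float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

def aicc(log_lik, k, n):
    """Akaike information criterion with the small-sample correction term."""
    return -2 * log_lik + 2 * k + 2 * k * (k + 1) / (n - k - 1)

# Synthetic encounters: collision odds drop sharply when the vehicle slows.
rng = np.random.default_rng(0)
n = 96                                               # matches the 96 videos
slows = rng.integers(0, 2, n).astype(float)          # 1 = vehicle slows
sight = rng.normal(2.0, 1.0, n)                      # sight time, seconds
true_logit = 1.0 - 2.5 * slows - 0.3 * sight
collision = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)
beta, ll = fit_logistic(np.column_stack([slows, sight]), collision)
print(aicc(ll, k=3, n=n))
```

Candidate models would each be fit this way and compared by their AICc values, with Akaike weights (the "w" reported in the abstract) computed from the AICc differences.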
NASA Astrophysics Data System (ADS)
Miller, Scott; Redman, S.
2009-01-01
Swan and Shih (2005) reported that the perceived presence of an instructor in an online course is influential in determining the satisfaction, if not the performance, of students in the course. To address this issue, we developed a series of 19 videos which not only demonstrate various astronomy concepts, but also provide the students with a voice, face and personality associated with the instructor and teaching assistant. To keep the students' attention throughout the videos, we also included humorous elements which involve the assistant (S. Redman) being injured in every video. These videos were first used during the Spring 2008 semester, when we taught an online course in introductory astronomy to almost 400 non-science majors at Penn State University. In order to assess the educational value of these videos, we presented identical questions to students of both the online course and a traditional face-to-face course which included Active-Collaborative Learning (ACL) taught by one of us (S. Miller), and received feedback from the online students via questionnaires. Students ranked our videos as moderately effective at explaining astronomical concepts as well as creating an instructor presence within the course. Compared to the ACL students, the online students performed equally well on questions related to topics covered in the videos. We also found a positive correlation between the effectiveness of the videos in creating an instructor presence and student attitudes towards the course. We discuss our approach to creating these videos, how they were used within an online course, students’ perception of the effectiveness of the videos, and their impact on student learning. You can find them by Googling "Astronomy 001" at video.google.com. We thank Digital Commons of Penn State for their assistance in producing the videos.
The Video Interaction Guidance approach applied to teaching communication skills in dentistry.
Quinn, S; Herron, D; Menzies, R; Scott, L; Black, R; Zhou, Y; Waller, A; Humphris, G; Freeman, R
2016-05-01
To examine dentists' views of a novel video review technique to improve communication skills in complex clinical situations. Dentists (n = 3) participated in a video review known as Video Interaction Guidance to encourage more attuned interactions with their patients (n = 4). Part of this process is to identify where dentists and patients reacted positively and effectively. Each dentist was presented with short segments of video footage taken during an appointment with a patient with intellectual disabilities and communication difficulties. Having observed their interactions with patients, dentists were asked to reflect on their communication strategies with the assistance of a trained VIG specialist. Dentists reflected that their VIG session had been insightful and considered the review process as beneficial to communication skills training in dentistry. They believed that this technique could significantly improve the way dentists interact and communicate with patients. The VIG sessions increased their awareness of the communication strategies they use with their patients and were perceived as neither uncomfortable nor threatening. The VIG session was beneficial in this exploratory investigation because the dentists could identify when their interactions were most effective. Awareness of their non-verbal communication strategies and the need to adopt these behaviours frequently were identified as key benefits of this training approach. One dentist suggested that the video review method was supportive because it was undertaken by a behavioural scientist rather than a professional counterpart. Some evidence supports the VIG approach in this specialist area of communication skills and dental training. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Robust real-time horizon detection in full-motion video
NASA Astrophysics Data System (ADS)
Young, Grace B.; Bagnall, Bryan; Lane, Corey; Parameswaran, Shibin
2014-06-01
The ability to detect the horizon in full-motion video in real time is an important capability that facilitates real-time video processing for purposes such as object detection, recognition, and other video/image segmentation applications. In this paper, we propose a method for real-time horizon detection that is designed to be used as a front-end processing unit for a real-time marine object detection system that carries out object detection and tracking on full-motion videos captured by ship/harbor-mounted cameras, Unmanned Aerial Vehicles (UAVs) or any other method of surveillance for Maritime Domain Awareness (MDA). Unlike existing horizon detection work, we cannot assume a priori the angle or nature (e.g., a straight line) of the horizon, due to the nature of the application domain and the data. Therefore, the proposed real-time algorithm is designed to identify the horizon at any angle and irrespective of objects appearing close to and/or occluding the horizon line (e.g., trees or vehicles at a distance) by accounting for its non-linear nature. We use a simple two-stage hierarchical methodology, leveraging color-based features, to quickly isolate the region of the image containing the horizon and then perform a more fine-grained horizon detection operation. In this paper, we present real-time horizon detection results using our algorithm on real-world full-motion video data from a variety of surveillance sensors, such as UAVs and ship-mounted cameras, confirming the real-time applicability of this method and its ability to detect the horizon with no a priori assumptions.
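A drastically simplified version of the coarse-to-fine, color-based idea can be sketched for the special case of a horizontal horizon. The row-splitting heuristic and the synthetic frame below are illustrative assumptions; they do not reproduce the paper's angle- and occlusion-robust algorithm.

```python
import numpy as np

def detect_horizon_row(img, coarse_step=8):
    """Two-stage horizon search on an RGB image (H x W x 3, floats in [0, 1]).

    Stage 1 scans every coarse_step-th row for the split maximising the
    colour contrast between the regions above and below it; stage 2 refines
    at full resolution inside that band. Assumes a single horizontal horizon,
    a simplification of the paper's arbitrary-angle method.
    """
    h = img.shape[0]

    def split_score(r):
        # Colour contrast between mean "sky" (above r) and mean "sea" (below r).
        return np.linalg.norm(img[:r].mean(axis=(0, 1)) - img[r:].mean(axis=(0, 1)))

    coarse = max(range(coarse_step, h - coarse_step, coarse_step), key=split_score)
    lo, hi = max(1, coarse - coarse_step), min(h - 1, coarse + coarse_step)
    return max(range(lo, hi), key=split_score)

# Synthetic frame: blue "sky" over grey "sea", horizon at row 40 of 100.
img = np.zeros((100, 160, 3))
img[:40] = [0.3, 0.5, 0.9]
img[40:] = [0.4, 0.4, 0.45]
row = detect_horizon_row(img)
print(row)   # → 40
```

The coarse pass touches only 1/coarse_step of the rows, which is what makes this kind of hierarchical search cheap enough for a real-time front end.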
NASA Technical Reports Server (NTRS)
1994-01-01
This video contains two segments: one a 0:01:50 spot and the other a 0:08:21 feature. Dante 2, an eight-legged walking machine, is shown during field trials as it explores the inner depths of an active volcano at Mount Spurr, Alaska. A NASA sponsored team at Carnegie Mellon University built Dante to withstand earth's harshest conditions, to deliver a science payload to the interior of a volcano, and to report on its journey to the floor of a volcano. Remotely controlled from 80-miles away, the robot explored the inner depths of the volcano and information from onboard video cameras and sensors was relayed via satellite to scientists in Anchorage. There, using a computer generated image, controllers tracked the robot's movement. Ultimately the robot team hopes to apply the technology to future planetary missions.
Shi, Xiaoping; Wu, Yuehua; Rao, Calyampudi Radhakrishna
2018-06-05
Change-point detection has been carried out using the Euclidean minimum spanning tree (MST) and the shortest Hamiltonian path (SHP), with successful applications in determining the authorship of a classic novel, detecting change in a network over time, detecting cell divisions, etc. However, these Euclidean graph-based tests may fail if a dataset contains random interferences. To solve this problem, we present a powerful non-Euclidean SHP-based test, which is consistent and distribution-free. Simulation shows that the test is more powerful than both the Euclidean MST- and SHP-based tests and the non-Euclidean MST-based test. Its applicability in detecting both landing and departure times in video data of bees' flower visits is illustrated.
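The Euclidean-MST idea behind such tests can be sketched in a few lines: build the MST of the observations, and for each candidate time score how far the number of edges crossing that split falls below its expectation under a random permutation. The centred (not variance-scaled) statistic and synthetic clusters below are illustrative simplifications; the paper's standardised, non-Euclidean SHP test is not reproduced.

```python
import numpy as np

def mst_edges(points):
    """Kruskal's algorithm: edge list of the Euclidean minimum spanning tree."""
    n = len(points)
    pairs = sorted((np.linalg.norm(points[i] - points[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))

    def find(a):                       # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    edges = []
    for _, i, j in pairs:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            edges.append((i, j))
    return edges

def mst_changepoint(points):
    """Estimate a change point via the centred MST edge-count statistic.

    R(t) counts MST edges joining the first t observations to the rest; under
    a random permutation E[R(t)] = 2*t*(n-t)/n, so a large shortfall
    E[R(t)] - R(t) signals a distribution change at t.
    """
    pts = np.asarray(points, dtype=float)
    edges = mst_edges(pts)
    n = len(pts)

    def score(t):
        r = sum((i < t) != (j < t) for i, j in edges)
        return 2.0 * t * (n - t) / n - r

    return max(range(2, n - 1), key=score)

# Synthetic sequence: 15 points near the origin, then 15 shifted by (8, 8).
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (15, 2)), rng.normal(8, 1, (15, 2))])
cp = mst_changepoint(data)
print(cp)
```

Because well-separated clusters are joined by a single MST bridge, the crossing count collapses at the true split, which is exactly the signal the graph-based tests exploit.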
View of STS-129 MS3 Foreman during EVA2
2009-11-21
S129-E-007789 (21 Nov. 2009) --- Astronaut Mike Foreman, STS-129 mission specialist, participates in the mission's second session of extravehicular activity (EVA) as construction and maintenance continue on the International Space Station. During the six-hour, eight-minute spacewalk, Foreman and astronaut Randy Bresnik (out of frame), mission specialist, installed a Grappling Adaptor to On-Orbit Railing Assembly, or GATOR, on the Columbus laboratory. GATOR contains a ship-tracking antenna system and a HAM radio antenna. They relocated a floating potential measurement unit that gauges electric charges that build up on the station, deployed a Payload Attach System on the space-facing side of the Starboard 3 truss segment and installed a wireless video system that allows spacewalkers to transmit video to the station and relay it to Earth.
Event segmentation ability uniquely predicts event memory.
Sargent, Jesse Q; Zacks, Jeffrey M; Hambrick, David Z; Zacks, Rose T; Kurby, Christopher A; Bailey, Heather R; Eisenberg, Michelle L; Beck, Taylor M
2013-11-01
Memory for everyday events plays a central role in tasks of daily living, autobiographical memory, and planning. Event memory depends in part on segmenting ongoing activity into meaningful units. This study examined the relationship between event segmentation and memory in a lifespan sample to answer the following question: Is the ability to segment activity into meaningful events a unique predictor of subsequent memory, or is the relationship between event perception and memory accounted for by general cognitive abilities? Two hundred and eight adults ranging from 20 to 79 years old segmented movies of everyday events and attempted to remember the events afterwards. They also completed psychometric ability tests and tests measuring script knowledge for everyday events. Event segmentation and script knowledge both explained unique variance in event memory above and beyond the psychometric measures, and did so as strongly in older as in younger adults. These results suggest that event segmentation is a basic cognitive mechanism, important for memory across the lifespan. Copyright © 2013 Elsevier B.V. All rights reserved.