Sample records for real-time video processing

  1. Practical, Real-Time, and Robust Watermarking on the Spatial Domain for High-Definition Video Contents

    NASA Astrophysics Data System (ADS)

    Kim, Kyung-Su; Lee, Hae-Yeoun; Im, Dong-Hyuck; Lee, Heung-Kyu

    Commercial markets employ digital rights management (DRM) systems to protect valuable high-definition (HD) quality videos. DRM systems use watermarking to provide copyright protection and ownership authentication of multimedia contents. We propose a real-time video watermarking scheme for HD video in the uncompressed domain. Our approach is designed from a practical perspective to satisfy perceptual quality, real-time processing, and robustness requirements. We simplify and optimize a human visual system (HVS) mask for real-time performance and also apply a dithering technique for invisibility. Extensive experiments show that the proposed scheme satisfies the invisibility, real-time processing, and robustness requirements against video processing attacks. We concentrate on video processing attacks that commonly occur when HD-quality videos are prepared for display on portable devices. These attacks include not only scaling and low bit-rate encoding, but also malicious attacks such as format conversion and frame rate change.
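    The abstract names the key ingredients (additive spatial-domain embedding, a simplified HVS mask, a secret key); below is a minimal numpy sketch of that pattern, in which a gradient-magnitude mask is an illustrative stand-in for the authors' optimized HVS model and the key, strength, and mask parameters are assumptions.

```python
import numpy as np

def embed_watermark(luma, wm_bits, key=42, strength=2.0):
    """Additive spatial-domain watermarking of a grayscale frame (uint8).

    A pseudo-random +/-1 pattern keyed by `key` is modulated by the payload
    bits and scaled by a crude local-activity mask, so the mark hides in
    textured regions (a cheap stand-in for a full HVS model).
    """
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=luma.shape)
    # Tile the payload bits (0/1 mapped to -1/+1) over the whole frame.
    bits = np.resize(np.where(np.asarray(wm_bits) > 0, 1.0, -1.0), luma.shape)
    # Local activity: gradient magnitude as a rough visibility mask.
    gy, gx = np.gradient(luma.astype(np.float64))
    mask = np.clip(np.hypot(gx, gy) / 32.0, 0.25, 2.0)
    marked = luma + strength * mask * pattern * bits
    return np.clip(marked, 0, 255).astype(np.uint8)
```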

  2. Research of real-time video processing system based on 6678 multi-core DSP

    NASA Astrophysics Data System (ADS)

    Li, Xiangzhen; Xie, Xiaodan; Yin, Xiaoqiang

    2017-10-01

    In the information age, video processing is developing rapidly toward intelligent applications, and complex algorithms pose a serious challenge to processor performance. In this article, an FPGA + TMS320C6678 architecture integrates image defogging, image stabilization, and image enhancement into one organic whole, with good real-time behavior and superior performance. It breaks through the limitations of traditional video processing systems, whose functions are simple and whose products are single-purpose, and addresses video applications in security monitoring, video surveillance, and related fields. It can give full play to the effectiveness of video monitoring and improve enterprise economic benefits.

  3. Towards a Video Passive Content Fingerprinting Method for Partial-Copy Detection Robust against Non-Simulated Attacks

    PubMed Central

    2016-01-01

    Passive content fingerprinting is widely used for video content identification and monitoring. However, many challenges remain unsolved, especially for partial-copy detection. The main challenge is to find the right balance between the computational cost of fingerprint extraction and fingerprint dimension, without compromising detection performance against various attacks (robustness). Fast video detection performance is desirable in several modern applications, for instance, those where video detection involves the use of large video databases or those requiring real-time detection of partial copies, a process whose difficulty increases when videos suffer severe transformations. In this context, conventional fingerprinting methods are not fully suitable to cope with the attacks and transformations mentioned before, either because their robustness is insufficient or because their execution time is very high, with the bottleneck commonly found in the fingerprint extraction and matching operations. Motivated by these issues, in this work we propose a content fingerprinting method based on the extraction of a set of independent binary global and local fingerprints. Although these features are robust against common video transformations, their combination is more discriminant against severe video transformations such as signal processing attacks, geometric transformations, and temporal and spatial desynchronization. Additionally, we use an efficient multilevel filtering system that accelerates fingerprint extraction and matching. This multilevel filtering system rapidly identifies potentially similar video copies, on which alone the full fingerprint process is then carried out, thus saving computational time. We tested the method on datasets of real copied videos, and the results show that it outperforms state-of-the-art methods in detection scores. Furthermore, the granularity of our method makes it suitable for partial-copy detection by processing segments of only 1 second in length. PMID:27861492
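    As a concrete illustration of a binary global fingerprint and its matching cost, here is a small numpy sketch (grayscale frames assumed); the block-mean feature is a simplified stand-in for the paper's feature set, and matching reduces to a Hamming distance, which is what makes multilevel filtering fast.

```python
import numpy as np

def global_fingerprint(frame, grid=(4, 4)):
    """Binary global fingerprint of one grayscale frame: each grid block's
    mean intensity is compared against the frame mean."""
    h, w = frame.shape
    gh, gw = grid
    blocks = frame[:h - h % gh, :w - w % gw].reshape(
        gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    return (blocks > blocks.mean()).astype(np.uint8).ravel()

def hamming(fp_a, fp_b):
    """Matching cost between two fingerprints: count of differing bits."""
    return int(np.count_nonzero(fp_a != fp_b))
```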

  4. Using Digital Time-Lapse Videos to Teach Geomorphic Processes to Undergraduates

    NASA Astrophysics Data System (ADS)

    Clark, D. H.; Linneman, S. R.; Fuller, J.

    2004-12-01

    We demonstrate the use of relatively low-cost, computer-based digital imagery to create time-lapse videos of two distinct geomorphic processes in order to help students grasp the significance of the rates, styles, and temporal dependence of geologic phenomena. Student interviews indicate that such videos help them to understand the relationship between processes and landform development. Time-lapse videos have been used extensively in some sciences (e.g., biology - http://sbcf.iu.edu/goodpract/hangarter.html, meteorology - http://www.apple.com/education/hed/aua0101s/meteor/, chemistry - http://www.chem.yorku.ca/profs/hempsted/chemed/home.html) to demonstrate gradual processes that are difficult for many students to visualize. Most geologic processes are slower still, and are consequently even more difficult for students to grasp, yet time-lapse videos are rarely used in earth science classrooms. The advent of inexpensive web-cams and computers provides a new means to explore the temporal dimension of earth surface processes. To test the use of time-lapse videos in geoscience education, we are developing time-lapse movies that record the evolution of two landforms: a stream-table delta and a large, natural, active landslide. The former involves well-known processes in a controlled, repeatable laboratory experiment, whereas the latter tracks the developing dynamics of an otherwise poorly understood slope failure. The stream-table delta is small and grows in ca. 2 days; we capture a frame on an overhead web-cam every 3 minutes. Before seeing the video, students are asked to hypothesize how the delta will grow through time. The final time-lapse video, ca. 20-80 MB, elegantly shows channel migration, progradation rates, and formation of major geomorphic elements (topset, foreset, bottomset beds). The web-cam can also be "zoomed-in" to show smaller-scale processes, such as bedload transfer, and foreset slumping. Post-lab tests and interviews with students indicate that these time-lapse videos significantly improve student interest in the material, and comprehension of the processes. In contrast, the natural landslide is relatively unconstrained, and its processes of movement, both gradual and catastrophic, are essentially impossible to observe directly without the aid of time-lapse imagery. We are constructing a remote digital camera, mounted in a tree, which will capture 1-2 photos/day of the toe. The toe is extremely active geomorphically, and the time-lapse movie should help us (and the students) to constrain the style, frequency, and rates of movement, surface slumping, and debris-flow generation. Because we have also installed a remote weather station on the landslide, we will be able to test the links between these processes and local climate conditions.

  5. The design of red-blue 3D video fusion system based on DM642

    NASA Astrophysics Data System (ADS)

    Fu, Rongguo; Luo, Hao; Lv, Jin; Feng, Shu; Wei, Yifang; Zhang, Hao

    2016-10-01

    Aiming at the uncertainty of traditional 3D video capture, including camera focal lengths and the distance and angle between the two cameras, a red-blue 3D video fusion system based on the DM642 hardware processing platform is designed using a parallel optical axis arrangement. To counter the brightness reduction typical of traditional 3D video, a brightness enhancement algorithm based on human visual characteristics is proposed, along with a luminance-component processing method based on the YCbCr color space. The DSP/BIOS real-time operating system is used to improve real-time performance. The video processing circuit built around the DM642 enhances image brightness, converts the video signals from YCbCr to RGB, extracts the R component from one camera and the G and B components from the other synchronously, and finally outputs the fused 3D images. Real-time adjustments of the two color components, such as translation and scaling, are realized through serial communication between the VC software and BIOS. By adding red and blue components, the system reduces the loss of chrominance and keeps the picture color saturation above 95% of the original. An optimized enhancement algorithm reduces the amount of data fused during video processing, shortening fusion time and improving the viewing experience. Experimental results show that the system can capture images at near distance, output red-blue 3D video, and provide a pleasant experience to audiences wearing red-blue glasses.
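    The component-extraction step (R from one camera, G and B from the other) is easy to state precisely; a toy numpy version, assuming two aligned RGB uint8 frames, is shown below. The actual system works in YCbCr on the DM642 with brightness enhancement first; this sketch covers only the fusion step.

```python
import numpy as np

def fuse_red_blue(left_rgb, right_rgb):
    """Red-blue 3D fusion: take the R channel from the left view and the
    G and B channels from the right view, as described in the abstract."""
    fused = right_rgb.copy()          # keeps G and B from the right camera
    fused[..., 0] = left_rgb[..., 0]  # red component from the left camera
    return fused
```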

  6. HEVC real-time decoding

    NASA Astrophysics Data System (ADS)

    Bross, Benjamin; Alvarez-Mesa, Mauricio; George, Valeri; Chi, Chi Ching; Mayer, Tobias; Juurlink, Ben; Schierl, Thomas

    2013-09-01

    The new High Efficiency Video Coding Standard (HEVC) was finalized in January 2013. Compared to its predecessor H.264 / MPEG4-AVC, this new international standard is able to reduce the bitrate by 50% for the same subjective video quality. This paper investigates decoder optimizations that are needed to achieve HEVC real-time software decoding on a mobile processor. It is shown that HEVC real-time decoding up to high definition video is feasible using instruction extensions of the processor while decoding 4K ultra high definition video in real-time requires additional parallel processing. For parallel processing, a picture-level parallel approach has been chosen because it is generic and does not require bitstreams with special indication.

  7. System on a chip with MPEG-4 capability

    NASA Astrophysics Data System (ADS)

    Yassa, Fathy; Schonfeld, Dan

    2002-12-01

    Current products supporting video communication applications rely on existing computer architectures. RISC processors have been used successfully in numerous applications over several decades. DSP processors have become ubiquitous in signal processing and communication applications. Real-time applications such as speech processing in cellular telephony rely extensively on the computational power of these processors. Video processors designed to implement the computationally intensive codec operations have also been used to address the high demands of video communication applications (e.g., cable set-top boxes and DVDs). This paper presents an overview of a system-on-chip (SOC) architecture used for real-time video in wireless communication applications. The SOC specification answers the system requirements imposed by the application environment. A CAM-based video processor is used to accelerate data-intensive video compression tasks such as motion estimation and filtering. Other components are dedicated to system-level data processing and audio processing. A rich set of I/Os allows the SOC to communicate with other system components such as baseband and memory subsystems.

  8. A complexity-scalable software-based MPEG-2 video encoder.

    PubMed

    Chen, Guo-bin; Lu, Xin-ning; Wang, Xing-guo; Liu, Ji-lin

    2004-05-01

    With the development of general-purpose processors (GPP) and video signal processing algorithms, it is possible to implement a software-based real-time video encoder on a GPP, and its low cost and easy upgrade attract developers' interest in moving video encoding from specialized hardware to more flexible software. In this paper, the encoding structure is first set up to support complexity scalability; then high-performance algorithms are applied to the key time-consuming modules in the coding process; finally, at the programming level, processor characteristics are exploited to improve data-access efficiency and processing parallelism. Other programming techniques, such as lookup tables, are adopted to reduce computational complexity. Simulation results showed that these ideas not only improve the overall performance of video coding but also provide great flexibility in complexity regulation.

  9. Objective assessment of MPEG-2 video quality

    NASA Astrophysics Data System (ADS)

    Gastaldo, Paolo; Zunino, Rodolfo; Rovetta, Stefano

    2002-07-01

    The increasing use of video compression standards in broadcast television systems has required, in recent years, the development of video quality measurements that take into account artifacts specifically caused by digital compression techniques. In this paper we present a methodology for the objective quality assessment of MPEG video streams using circular back-propagation feedforward neural networks. Mapping neural networks can render nonlinear relationships between objective features and subjective judgments, thus avoiding any simplifying assumption on the complexity of the model. The neural network processes an instantaneous set of input values and yields an associated estimate of perceived quality. The neural-network approach therefore turns objective quality assessment into adaptive modeling of subjective perception. The objective features used for the estimate are chosen according to their assessed relevance to perceived quality and are continuously extracted in real time from compressed video streams. The overall system mimics perception but does not require any analytical model of the underlying physical phenomenon. The capability to process compressed video streams represents an important advantage over existing approaches, since avoiding the stream-decoding process greatly enhances real-time performance. Experimental results confirm that the system provides satisfactory, continuous-time approximations of actual scoring curves for real test videos.
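    A rough sketch of the feature-to-score mapping the abstract describes, using scikit-learn's standard MLP as a stand-in for the paper's circular back-propagation network; the feature set and the synthetic training data are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stream-level objective features (e.g., bit rate, quantizer scale,
# motion-vector statistics) mapped to a subjective quality score.
rng = np.random.default_rng(0)
X = rng.random((200, 6))                            # 6 features per sample
y = 5.0 - 3.0 * X[:, 0] + rng.normal(0, 0.1, 200)   # toy quality scores

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict(X[:3]))                         # estimated perceived quality
```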

  10. Writing/Thinking in Real Time: Digital Video and Corpus Query Analysis

    ERIC Educational Resources Information Center

    Park, Kwanghyun; Kinginger, Celeste

    2010-01-01

    The advance of digital video technology in the past two decades facilitates empirical investigation of learning in real time. The focus of this paper is the combined use of real-time digital video and a networked linguistic corpus for exploring the ways in which these technologies enhance our capability to investigate the cognitive process of…

  11. Real-time unmanned aircraft systems surveillance video mosaicking using GPU

    NASA Astrophysics Data System (ADS)

    Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.

    2010-04-01

    Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, and monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames, resulting in a seamless video mosaic. All this processing takes a great deal of CPU resources, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft, one from an infrared (IR) camera and one from an electro-optical (EO) camera. Our results show a speed-up of more than 50 times using GPU technology, making real-time operation at a video capture rate of 30 frames per second feasible.
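    The per-frame-pair registration step (SIFT features, matching, homography estimation) maps directly onto OpenCV; a CPU sketch under that assumption follows (cv2.SIFT_create requires OpenCV >= 4.4), with blending and the paper's GPU acceleration left out.

```python
import cv2
import numpy as np

def register_pair(prev_gray, cur_gray):
    """One mosaicking step: SIFT keypoints, ratio-test matching, and a
    RANSAC homography between two consecutive frames."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(prev_gray, None)
    k2, d2 = sift.detectAndCompute(cur_gray, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H   # warp prev_gray into cur_gray's frame via cv2.warpPerspective
```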

  12. Low-SWaP coincidence processing for Geiger-mode LIDAR video

    NASA Astrophysics Data System (ADS)

    Schultz, Steven E.; Cervino, Noel P.; Kurtz, Zachary D.; Brown, Myron Z.

    2015-05-01

    Photon-counting Geiger-mode lidar detector arrays provide a promising approach for producing three-dimensional (3D) video at full motion video (FMV) data rates, resolution, and image size from long ranges. However, coincidence processing required to filter raw photon counts is computationally expensive, generally requiring significant size, weight, and power (SWaP) and also time. In this paper, we describe a laboratory test-bed developed to assess the feasibility of low-SWaP, real-time processing for 3D FMV based on Geiger-mode lidar. First, we examine a design based on field programmable gate arrays (FPGA) and demonstrate proof-of-concept results. Then we examine a design based on a first-of-its-kind embedded graphical processing unit (GPU) and compare performance with the FPGA. Results indicate feasibility of real-time Geiger-mode lidar processing for 3D FMV and also suggest utility for real-time onboard processing for mapping lidar systems.

  13. Two-dimensional thermal video analysis of offshore bird and bat flight

    DOE PAGES

    Matzner, Shari; Cullinan, Valerie I.; Duberstein, Corey A.

    2015-09-11

    Thermal infrared video can provide essential information about bird and bat presence and activity for risk assessment studies, but the analysis of recorded video can be time-consuming and may not extract all of the available information. Automated processing makes continuous monitoring over extended periods of time feasible, and maximizes the information provided by video. This is especially important for collecting data in remote locations that are difficult for human observers to access, such as proposed offshore wind turbine sites. We present guidelines for selecting an appropriate thermal camera based on environmental conditions and the physical characteristics of the target animals. We developed new video image processing algorithms that automate the extraction of bird and bat flight tracks from thermal video, and that characterize the extracted tracks to support animal identification and behavior inference. The algorithms use a video peak store process followed by background masking and perceptual grouping to extract flight tracks. The extracted tracks are automatically quantified in terms that could then be used to infer animal type and possibly behavior. The developed automated processing generates results that are reproducible and verifiable, and reduces the total amount of video data that must be retained and reviewed by human experts. Finally, we suggest models for interpreting thermal imaging information.
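    The core of the track-extraction pipeline (video peak store followed by background masking) can be stated in a few lines of numpy; the sketch below assumes a (T, H, W) stack of thermal frames, omits the perceptual-grouping stage, and uses an illustrative threshold.

```python
import numpy as np

def peak_store_track_mask(frames, thresh=20.0):
    """Video peak store: the per-pixel maximum over a clip collapses a warm
    moving animal into a bright flight track; subtracting the per-pixel
    median (background) and thresholding masks the track out."""
    stack = np.asarray(frames, dtype=np.float32)
    peak = stack.max(axis=0)              # brightest value each pixel saw
    background = np.median(stack, axis=0)
    return (peak - background) > thresh   # boolean flight-track mask
```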

  14. Two-dimensional thermal video analysis of offshore bird and bat flight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzner, Shari; Cullinan, Valerie I.; Duberstein, Corey A.

    Thermal infrared video can provide essential information about bird and bat presence and activity for risk assessment studies, but the analysis of recorded video can be time-consuming and may not extract all of the available information. Automated processing makes continuous monitoring over extended periods of time feasible, and maximizes the information provided by video. This is especially important for collecting data in remote locations that are difficult for human observers to access, such as proposed offshore wind turbine sites. We present guidelines for selecting an appropriate thermal camera based on environmental conditions and the physical characteristics of the target animals. We developed new video image processing algorithms that automate the extraction of bird and bat flight tracks from thermal video, and that characterize the extracted tracks to support animal identification and behavior inference. The algorithms use a video peak store process followed by background masking and perceptual grouping to extract flight tracks. The extracted tracks are automatically quantified in terms that could then be used to infer animal type and possibly behavior. The developed automated processing generates results that are reproducible and verifiable, and reduces the total amount of video data that must be retained and reviewed by human experts. Finally, we suggest models for interpreting thermal imaging information.

  15. Incremental principal component pursuit for video background modeling

    DOEpatents

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    An incremental Principal Component Pursuit (PCP) algorithm for video background modeling that is able to process one frame at a time while adapting to changes in the background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter.

  16. Simulation and Real-Time Verification of Video Algorithms on the TI C6400 Using Simulink

    DTIC Science & Technology

    2004-08-20

    [Record text unavailable; only report-form fields and presentation-slide fragments survive. Recoverable content: approved for public release, distribution unlimited; a Simulink-built surveillance-recording video demo whose GUI monitors video capture, plots estimates over time (scrolling data), and adjusts the detection threshold by clicking on a graph.]

  17. Video enhancement workbench: an operational real-time video image processing system

    NASA Astrophysics Data System (ADS)

    Yool, Stephen R.; Van Vactor, David L.; Smedley, Kirk G.

    1993-01-01

    Video image sequences can be exploited in real time, giving analysts rapid access to information for military or criminal investigations. Video-rate dynamic range adjustment subdues fluctuations in image intensity, thereby assisting discrimination of small or low-contrast objects. Contrast-regulated unsharp masking enhances differentially shadowed or otherwise low-contrast image regions. Real-time removal of localized hotspots, when combined with automatic histogram equalization, may enhance resolution of directly adjacent objects. In video imagery corrupted by zero-mean noise, real-time frame averaging can assist resolution and location of small or low-contrast objects. To maximize analyst efficiency, lengthy video sequences can be screened automatically for low-frequency, high-magnitude events. Combined zoom, roam, and automatic dynamic range adjustment permit rapid analysis of facial features captured by video cameras recording crimes in progress. When trying to resolve small objects in murky seawater, stereo video places the moving imagery in an optimal setting for human interpretation.
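    Two of the operations named above, unsharp masking and zero-mean-noise frame averaging, are shown in a brief OpenCV sketch; parameter values are illustrative, not those of the operational system.

```python
import cv2
import numpy as np

def unsharp_mask(frame, amount=1.0, sigma=3.0):
    """Sharpen low-contrast detail: frame + amount * (frame - blurred)."""
    blurred = cv2.GaussianBlur(frame, (0, 0), sigma)
    return cv2.addWeighted(frame, 1.0 + amount, blurred, -amount, 0)

def average_frames(frames):
    """Temporal averaging of N aligned frames suppresses zero-mean noise
    (standard deviation drops roughly as 1/sqrt(N))."""
    return np.mean(np.asarray(frames, dtype=np.float32), axis=0).astype(np.uint8)
```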

  18. Towards real-time remote processing of laparoscopic video

    NASA Astrophysics Data System (ADS)

    Ronaghi, Zahra; Duffy, Edward B.; Kwartowitz, David M.

    2015-03-01

    Laparoscopic surgery is a minimally invasive surgical technique in which surgeons insert a small video camera into the patient's body to visualize internal organs and use small tools to perform surgical procedures. However, the benefit of small incisions comes with the drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in the delivery of therapy. Image-guided surgery (IGS) uses images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic camera system of interest is the vision system of the daVinci-Si robotic surgical system (Intuitive Surgical, Sunnyvale, CA, USA). Its video streams generate approximately 360 megabytes of data per second, demonstrating a trend towards increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this data on a bedside PC has become challenging, and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical rate of 30 frames per second (fps), each 11.9 MB video frame must be processed by a server and returned within 1/30th of a second. The ability to acquire, process, and visualize data in real time is essential for the performance of complex tasks as well as for minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will enable real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We aim to develop a medical video processing system using an OpenFlow software-defined network that is capable of connecting to multiple remote medical facilities and HPC servers.
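    The bandwidth and latency budget quoted above can be checked with two lines of arithmetic:

```python
# Sanity check of the figures quoted in the abstract.
frame_mb, fps = 11.9, 30
print(frame_mb * fps)    # 357.0 MB/s, matching the ~360 MB/s figure
print(1000.0 / fps)      # ~33.3 ms round-trip budget per frame
```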

  19. Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary JO; Whyte, Wayne A., Jr.

    1989-01-01

    Advances in very large-scale integration and recent work in the field of bandwidth efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the CODEC are described, and performance results are provided.

  20. Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary JO; Whyte, Wayne A.

    1991-01-01

    Advances in very large scale integration and recent work in the field of bandwidth-efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM (differential pulse code modulation)-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the codec are described, and performance results are provided.

  1. Exploring inter-frame correlation analysis and wavelet-domain modeling for real-time caption detection in streaming video

    NASA Astrophysics Data System (ADS)

    Li, Jia; Tian, Yonghong; Gao, Wen

    2008-01-01

    In recent years, the amount of streaming video on the Web has grown rapidly. Retrieving these streaming videos often poses the challenge of indexing and analyzing the media in real time, because the streams must be treated as effectively infinite in length, precluding offline processing. Generally speaking, captions are important semantic clues for video indexing and retrieval. However, existing caption detection methods often have difficulty performing real-time detection on streaming video, and few of them address differentiating captions from scene text and scrolling text, even though these kinds of text play different roles in streaming video retrieval. To overcome these difficulties, this paper proposes a novel approach that explores inter-frame correlation analysis and wavelet-domain modeling for real-time caption detection in streaming video. In our approach, inter-frame correlation information is used to distinguish caption text from scene text and scrolling text. Moreover, wavelet-domain Generalized Gaussian Models (GGMs) are utilized to automatically remove non-text regions from each frame, keeping only caption regions for further processing. Experimental results show that our approach offers real-time caption detection with high recall and a low false alarm rate, and can effectively discern caption text from other text even at low resolutions.
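    The inter-frame correlation cue is simple to demonstrate: a true caption region stays nearly identical across consecutive frames, while scrolling text shifts every frame and scene text moves with the camera. A small numpy sketch under assumed conditions (grayscale frames, a fixed candidate box, an illustrative threshold):

```python
import numpy as np

def is_static_caption(frames, y0, y1, x0, x1, thresh=0.9):
    """Average correlation of a candidate text region across consecutive
    frames; persistent captions score high, moving text scores low."""
    regions = [f[y0:y1, x0:x1].astype(np.float64).ravel() for f in frames]
    corrs = [np.corrcoef(regions[i], regions[i + 1])[0, 1]
             for i in range(len(regions) - 1)]
    return float(np.mean(corrs)) > thresh
```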

  2. Real-time CT-video registration for continuous endoscopic guidance

    NASA Astrophysics Data System (ADS)

    Merritt, Scott A.; Rai, Lav; Higgins, William E.

    2006-03-01

    Previous research has shown that CT-image-based guidance could be useful for the bronchoscopic assessment of lung cancer. This research drew upon the registration of bronchoscopic video images to CT-based endoluminal renderings of the airway tree. The proposed methods either were restricted to discrete single-frame registration, which took several seconds to complete, or required non-real-time buffering and processing of video sequences. We have devised a fast 2D/3D image registration method that performs single-frame CT-video registration in under 1/15th of a second. This allows the method to be used for real-time registration at full video frame rates without significantly altering the physician's behavior. The method achieves its speed through a gradient-based optimization method that allows most of the computation to be performed off-line. During live registration, the optimization iteratively steps toward the locally optimal viewpoint at which a CT-based endoluminal view is most similar to the current bronchoscopic video frame. After an initial registration to begin the process (generally done in the trachea for bronchoscopy), subsequent registrations are performed in real time on each incoming video frame. As each new bronchoscopic video frame becomes available, the current optimization is initialized using the previous frame's optimization result, allowing continuous guidance to proceed without manual re-initialization. Tests were performed using both synthetic and pre-recorded bronchoscopic video. The results show that the method is robust to initialization errors, that registration accuracy is high, and that continuous registration can proceed on real-time video at more than 15 frames per second with minimal user intervention.

  3. Researching on the process of remote sensing video imagery

    NASA Astrophysics Data System (ADS)

    Wang, He-rao; Zheng, Xin-qi; Sun, Yi-bo; Jia, Zong-ren; Wang, He-zhan

    Low-altitude remotely sensed imagery from unmanned air vehicles has the advantages of high resolution, easy acquisition, and real-time access, and has been widely used in mapping, target identification, and other fields in recent years. However, owing to practical limitations, the video images are unstable, the targets move fast, and the shooting background is complex, all of which make such video difficult to process. Other fields, especially computer vision, have researched video processing far more extensively, and that work is very helpful for processing low-altitude remotely sensed imagery. On this basis, this paper analyzes and summarizes a large body of video image processing work from different fields, covering research purposes, data sources, and the pros and cons of the technology. It then explores the technical methods best suited to low-altitude remote sensing video processing.

  4. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    PubMed Central

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
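    Below is a minimal sketch of the block-matching step the paper builds on: an exhaustive SAD search over a small window for one block (block size and search radius are illustrative assumptions, and the paper's multi-frame machinery is omitted).

```python
import numpy as np

def block_match(ref, cur, top, left, bsize=16, radius=8):
    """Find the motion vector minimizing the sum of absolute differences
    (SAD) between one block of `cur` and candidate blocks of `ref`."""
    block = cur[top:top + bsize, left:left + bsize].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue
            sad = int(np.abs(ref[y:y + bsize, x:x + bsize].astype(np.int32) - block).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv
```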

  5. A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer

    NASA Astrophysics Data System (ADS)

    Luckman, Adrian J.; Allinson, Nigel M.

    1989-03-01

    A low cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn Risc Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system comprises real-time video digitising hardware which interfaces directly to the Archimedes memory, and software providing an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen, and program control is directed mostly by pop-up menus.

  6. Using dynamic mode decomposition for real-time background/foreground separation in video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutz, Jose Nathan; Grosek, Jacob; Brunton, Steven

    The technique of dynamic mode decomposition (DMD) is disclosed herein for the purpose of robustly separating video frames into background (low-rank) and foreground (sparse) components in real-time. Foreground/background separation is achieved at the computational cost of just one singular value decomposition (SVD) and one linear equation solve, thus producing results orders of magnitude faster than robust principal component analysis (RPCA). Additional techniques, including techniques for analyzing the video for multi-resolution time-scale components, and techniques for reusing computations to allow processing of streaming video in real time, are also described herein.
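    A minimal batch sketch of DMD-based separation, under the assumptions that the frames fit in memory and a small truncation rank suffices; the patent's incremental, multi-resolution, and streaming machinery is omitted. The mode whose eigenvalue lies closest to 1 is nearly static in time and is taken as the background.

```python
import numpy as np

def dmd_separate(frames, rank=4):
    """Split vectorized frames (columns of X) into a low-rank background
    (the near-static DMD mode) and a sparse foreground (the residual)."""
    X = np.asarray(frames, np.float64).reshape(len(frames), -1).T
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)     # the one SVD
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    Atilde = U.T @ X2 @ Vh.T @ np.diag(1.0 / s)           # reduced operator
    evals, evecs = np.linalg.eig(Atilde)
    modes = X2 @ Vh.T @ np.diag(1.0 / s) @ evecs
    k = int(np.argmin(np.abs(evals - 1.0)))               # static mode
    b = np.linalg.lstsq(modes, X[:, 0].astype(complex), rcond=None)[0]
    t = np.arange(X.shape[1])
    background = np.outer(modes[:, k], b[k] * evals[k] ** t).real
    return background, X - background                     # low-rank, sparse
```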

  7. Nonchronological video synopsis and indexing.

    PubMed

    Pritch, Yael; Rav-Acha, Alex; Peleg, Shmuel

    2008-11-01

    The amount of captured video is growing with the increasing number of video cameras, especially the millions of surveillance cameras that operate 24 hours a day. Since video browsing and retrieval are time consuming, most captured video is never watched or examined. Video synopsis is an effective tool for browsing and indexing such video. It provides a short video representation while preserving the essential activities of the original video. The activity in the video is condensed into a shorter period by simultaneously showing multiple activities, even when they originally occurred at different times. The synopsis video is also an index into the original video, pointing to the original time of each activity. Video synopsis can be applied to create a synopsis of endless video streams, as generated by webcams and surveillance cameras. It can address queries like "Show in one minute the synopsis of this camera broadcast during the past day". This process includes two major phases: (i) an online conversion of the endless video stream into a database of objects and activities (rather than frames), and (ii) a response phase, generating the video synopsis as a response to the user's query.

  8. Design of video interface conversion system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

    This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller serves as the information-interaction control unit between the FPGA and the PC, enabling the system to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion, and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from a CCD camera into Low Voltage Differential Signaling (LVDS), which is collected by a video processing unit with a Camera Link interface. The processed video signals are then fed to the system output board and displayed on the monitor. The current experiment shows that the system achieves high-quality video conversion with minimal board size.

  9. A design of real time image capturing and processing system using Texas Instrument's processor

    NASA Astrophysics Data System (ADS)

    Wee, Toon-Joo; Chaisorn, Lekha; Rahardja, Susanto; Gan, Woon-Seng

    2007-09-01

    In this work, we developed and implemented an image capturing and processing system equipped with the capability to capture images from an input video in real time. The input video can come from a PC, a video camcorder, or a DVD player. We developed two modes of operation. In the first mode, an input image from the PC is processed on the processing board (a development platform with a digital signal processor) and displayed on the PC. In the second mode, the current captured image from the video camcorder (or DVD player) is processed on the board but displayed on the LCD monitor. The major difference between our system and existing conventional systems is that the image-processing functions are performed on the board instead of the PC, so that the functions can be used for further development on the board. The user controls the operations of the board through the Graphic User Interface (GUI) provided on the PC. For smooth image data transfer between the PC and the board, we employed Real Time Data Transfer (RTDX TM) technology to create a link between them. For image processing we developed three main groups of functions: (1) point processing; (2) filtering; and (3) 'others'. Point processing includes rotation, negation, and mirroring. The filter category provides median, adaptive, smooth, and sharpen filtering in the time domain. The 'others' category provides auto-contrast adjustment, edge detection, segmentation, and sepia color; these functions either add an effect to the image or enhance it. We developed and implemented our system using the C/C# programming languages on a TMS320DM642 (DM642) board from Texas Instruments (TI). The system was showcased at the College of Engineering (CoE) exhibition 2006 at Nanyang Technological University (NTU), where more than 40 users tried it. The demonstration showed that our system is adequate for real-time image capturing, and it can be applied to applications such as medical imaging and video surveillance.

  10. Vehicle counting system using real-time video processing

    NASA Astrophysics Data System (ADS)

    Crisóstomo-Romero, Pedro M.

    2006-02-01

    Transit studies are important for planning a road network with optimal vehicular flow, and a vehicular count is essential. This article presents a vehicle counting system based on video processing. An advantage of such a system is the greater detail it can obtain, such as the shape, size, and speed of vehicles. The system uses a video camera placed above the street to image traffic in real time. The camera must be placed at least 6 meters above street level to achieve proper acquisition quality. Fast image processing algorithms and small image dimensions are used to allow real-time processing. Digital filters, mathematical morphology, segmentation, and other techniques allow identifying and counting all vehicles in the image sequences. The system was implemented under Linux on a 1.8 GHz Pentium 4 computer. A successful count was obtained at frame rates of 15 frames per second for images of size 240x180 pixels and 24 frames per second for images of size 180x120 pixels, thus being able to count vehicles whose speeds do not exceed 150 km/h.
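    The pipeline named in the abstract (background model, morphology, segmentation, counting) has a direct OpenCV analogue; the sketch below is a rough per-frame version under assumed parameters, with no line-crossing or tracking logic, so a vehicle is re-counted in every frame it appears in.

```python
import cv2

def count_blobs_per_frame(video_path, min_area=500):
    """Background subtraction + morphological opening + contour counting."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2()
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                    # moving pixels
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        vehicles = [c for c in contours if cv2.contourArea(c) > min_area]
        print(f"candidate vehicles in frame: {len(vehicles)}")
    cap.release()
```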

  11. The impact of video technology on learning: A cooking skills experiment.

    PubMed

    Surgenor, Dawn; Hollywood, Lynsey; Furey, Sinéad; Lavelle, Fiona; McGowan, Laura; Spence, Michelle; Raats, Monique; McCloat, Amanda; Mooney, Elaine; Caraher, Martin; Dean, Moira

    2017-07-01

    This study examines the role of video technology in the development of cooking skills. The study explored the views of 141 female participants on whether video technology can promote confidence in learning new cooking skills to assist in meal preparation. Prior to each focus group participants took part in a cooking experiment to assess the most effective method of learning for low-skilled cooks across four experimental conditions (recipe card only; recipe card plus video demonstration; recipe card plus video demonstration conducted in segmented stages; and recipe card plus video demonstration whereby participants freely accessed video demonstrations as and when needed). Focus group findings revealed that video technology was perceived to assist learning in the cooking process in the following ways: (1) improved comprehension of the cooking process; (2) real-time reassurance in the cooking process; (3) assisting the acquisition of new cooking skills; and (4) enhancing the enjoyment of the cooking process. These findings display the potential for video technology to promote motivation and confidence as well as enhancing cooking skills among low-skilled individuals wishing to cook from scratch using fresh ingredients.

  12. Real time mitigation of atmospheric turbulence in long distance imaging using the lucky region fusion algorithm with FPGA and GPU hardware acceleration

    NASA Astrophysics Data System (ADS)

    Jackson, Christopher Robert

    "Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm selects sharp regions of an image obtained from a series of short exposure frames, and fuses the sharp regions into a final, improved image. In previous research, the LRF algorithm had been implemented on a PC using the C programming language. However, the PC did not have sufficient sequential processing power to handle real-time extraction, processing and reduction required when the LRF algorithm was applied to real-time video from fast, high-resolution image sensors. This thesis describes two hardware implementations of the LRF algorithm to achieve real-time image processing. The first was created with a VIRTEX-7 field programmable gate array (FPGA). The other developed using the graphics processing unit (GPU) of a NVIDIA GeForce GTX 690 video card. The novelty in the FPGA approach is the creation of a "black box" LRF video processing system with a general camera link input, a user controller interface, and a camera link video output. We also describe a custom hardware simulation environment we have built to test the FPGA LRF implementation. The advantage of the GPU approach is significantly improved development time, integration of image stabilization into the system, and comparable atmospheric turbulence mitigation.

  13. Design and implementation of H.264 based embedded video coding technology

    NASA Astrophysics Data System (ADS)

    Mao, Jian; Liu, Jinming; Zhang, Jiemin

    2016-03-01

    In this paper, an embedded system for remote online video monitoring was designed and developed to capture and record real-time conditions in an elevator. To improve the efficiency of video acquisition and processing, the system uses the Samsung S5PV210 chip, which integrates a graphics processing unit, as its core processor, and the video is encoded in H.264 format for efficient storage and transmission. Based on the S5PV210 chip, hardware video coding technology was investigated and found to be more efficient than software coding. Running tests proved that hardware video coding can significantly reduce system cost and produce smoother video display. It can be widely applied to security supervision [1].

  14. Real-time video compressing under DSP/BIOS

    NASA Astrophysics Data System (ADS)

    Chen, Qiu-ping; Li, Gui-ju

    2009-10-01

    This paper presents real-time MPEG-4 Simple Profile video compression based on a DSP processor. The video compression framework is built from a TMS320C6416 microprocessor, a TDS510 emulator, and a PC. It uses the embedded real-time operating system DSP/BIOS and its API functions to build periodic functions, tasks, and interrupts, realizing real-time video compression. To address data transfer within the system, and based on the architecture of the C64x DSP, double buffering and the EDMA data-transfer controller are used to move data from external to internal memory, overlapping data transfer with processing; architecture-level optimizations improve the software pipeline. The system uses DSP/BIOS for multi-thread scheduling and achieves high-speed transfer of large volumes of data. Experimental results show the encoder can encode 768×576, 25 frame/s video images in real time.

  15. SSME propellant path leak detection real-time

    NASA Technical Reports Server (NTRS)

    Crawford, R. A.; Smith, L. M.

    1994-01-01

    Included are four documents that outline the technical aspects of the research performed on NASA Grant NAG8-140: 'A System for Sequential Step Detection with Application to Video Image Processing'; 'Leak Detection from the SSME Using Sequential Image Processing'; 'Digital Image Processor Specifications for Real-Time SSME Leak Detection'; and 'A Color Change Detection System for Video Signals with Applications to Spectral Analysis of Rocket Engine Plumes'.

  16. Digital codec for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary JO; Whyte, Wayne A., Jr.

    1989-01-01

    The authors present the hardware implementation of a digital television bandwidth compression algorithm which processes standard NTSC (National Television Systems Committee) composite color television signals and produces broadcast-quality video in real time at an average of 1.8 b/pixel. The sampling rate used with this algorithm results in 768 samples over the active portion of each video line by 512 active video lines per video frame. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a nonadaptive predictor, nonuniform quantizer, and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The nonadaptive predictor and multilevel Huffman coder combine to set this technique apart from prior-art DPCM encoding algorithms. The authors describe the data compression algorithm and the hardware implementation of the codec and provide performance results.
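    A toy scalar DPCM loop showing the roles of the (nonadaptive) predictor and the nonuniform quantizer described above; the quantizer levels are illustrative assumptions, and the Huffman stage is left as the list of level indices it would consume.

```python
import numpy as np

def dpcm_encode_line(line, levels=(-24, -8, -2, 0, 2, 8, 24)):
    """DPCM along one video line: predict each sample from the previous
    reconstructed sample, quantize the prediction error to the nearest
    of a few nonuniformly spaced levels, and emit the level indices."""
    levels = np.asarray(levels, dtype=np.float64)
    indices, recon = [], float(line[0])      # first sample sent verbatim
    for sample in line[1:]:
        err = float(sample) - recon          # prediction error
        idx = int(np.abs(levels - err).argmin())
        indices.append(idx)
        recon = float(np.clip(recon + levels[idx], 0, 255))  # decoder state
    return indices
```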

  17. Ubiquitous UAVs: a cloud based framework for storing, accessing and processing huge amount of video footage in an efficient way

    NASA Astrophysics Data System (ADS)

    Efstathiou, Nectarios; Skitsas, Michael; Psaroudakis, Chrysostomos; Koutras, Nikolaos

    2017-09-01

    Nowadays, video surveillance cameras are used for the protection and monitoring of a huge number of facilities worldwide. An important element in such surveillance systems is the use of aerial video streams originating from onboard sensors located on Unmanned Aerial Vehicles (UAVs). Video surveillance using UAVs represents a vast amount of video to be transmitted, stored, analyzed, and visualized in real time. As a result, the introduction and development of systems able to handle huge amounts of data become a necessity. In this paper, a new approach for the collection, transmission, and storage of aerial videos and metadata is introduced. The objective of this work is twofold: first, the integration of the appropriate equipment to capture and transmit real-time video, including metadata (i.e., position coordinates, target), from the UAV to the ground; and second, the utilization of the ADITESS Versatile Media Content Management System (VMCMS-GE) for storing the video stream and the appropriate metadata. Beyond storage, VMCMS-GE provides other efficient management capabilities, such as searching and processing of videos, along with video transcoding. For the evaluation and demonstration of the proposed framework, we execute a use case in which critical infrastructure is surveilled and suspicious activities are detected. Transcoding of the collected video is evaluated as well.

  18. An efficient interpolation filter VLSI architecture for HEVC standard

    NASA Astrophysics Data System (ADS)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

    The next-generation video coding standard High-Efficiency Video Coding (HEVC) is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40% of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. First, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed; it saves 19.7% of processing time on average with acceptable coding-quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the hardware implementation area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, reducing the area cost by about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture can support real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.
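    For reference, half-sample luma interpolation in HEVC is an 8-tap FIR filter, which is why an 8-pixel interpolation unit is a natural granularity; a scalar sketch follows (the coefficients are the H.265 half-pel luma filter; the row-sliding arrangement is for illustration only).

```python
import numpy as np

# HEVC 8-tap half-sample luma interpolation filter (sums to 64).
HALF_PEL = np.array([-1, 4, -11, 40, 40, -11, 4, -1], dtype=np.int32)

def half_pel_row(row):
    """Half-pixel samples for one row of integer pixels: each output value
    interpolates between row[i+3] and row[i+4] from an 8-pixel window."""
    row = np.asarray(row, dtype=np.int32)
    acc = np.convolve(row, HALF_PEL[::-1], mode="valid")  # correlation
    return np.clip((acc + 32) >> 6, 0, 255)               # /64, rounded
```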

  19. Task-technology fit of video telehealth for nurses in an outpatient clinic setting.

    PubMed

    Cady, Rhonda G; Finkelstein, Stanley M

    2014-07-01

    Incorporating telehealth into outpatient care delivery supports management of consumer health between clinic visits. Task-technology fit is a framework for understanding how technology helps and/or hinders a person during work processes. Evaluating the task-technology fit of video telehealth for personnel working in a pediatric outpatient clinic and providing care between clinic visits ensures the information provided matches the information needed to support work processes. The workflow of advanced practice registered nurse (APRN) care coordination provided via telephone and video telehealth was described and measured using a mixed-methods workflow analysis protocol that incorporated cognitive ethnography and time-motion study. Qualitative and quantitative results were merged and analyzed within the task-technology fit framework to determine the workflow fit of video telehealth for APRN care coordination. Incorporating video telehealth into APRN care coordination workflow provided visual information unavailable during telephone interactions. Despite additional tasks and interactions needed to obtain the visual information, APRN workflow efficiency, as measured by time, was not significantly changed. Analyzed within the task-technology fit framework, the increased visual information afforded by video telehealth supported the assessment and diagnostic information needs of the APRN. Telehealth must provide the right information to the right clinician at the right time. Evaluating task-technology fit using a mixed-methods protocol ensured rigorous analysis of fit within work processes and identified workflows that benefit most from the technology.

  20. A flexible software architecture for scalable real-time image and video processing applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty for reuse and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. A topic-based filtering in which messages are published to topics is used to route the messages from the publishers to the subscribers interested in a particular type of messages. The application layer provides a repository for reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
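    The messaging layer's topic-based publish/subscribe pattern is easy to sketch; the class below is a single-process toy under assumed semantics (synchronous dispatch, no threading), not the paper's implementation.

```python
from collections import defaultdict
from typing import Any, Callable

class MessageBus:
    """Topic-based publish/subscribe: messages published to a topic are
    routed only to the handlers subscribed to that topic."""

    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, message: Any) -> None:
        for handler in self._subs[topic]:
            handler(message)

bus = MessageBus()
bus.subscribe("frames/acquired", lambda msg: print("processing", msg))
bus.publish("frames/acquired", {"frame_id": 1})   # routed to the subscriber
```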

  1. Acquisition and Analysis of Dynamic Responses of a Historic Pedestrian Bridge using Video Image Processing

    NASA Astrophysics Data System (ADS)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; O'Donnell, Deirdre; Wright, Robert; Pakrashi, Vikram

    2015-07-01

    Video based tracking is capable of analysing bridge vibrations that are characterised by large amplitudes and low frequencies. This paper presents the use of video images and associated image processing techniques to obtain the dynamic response of a pedestrian suspension bridge in Cork, Ireland. This historic structure is one of the four suspension bridges in Ireland and is notable for its dynamic nature. A video camera was mounted on the river bank and the dynamic responses of the bridge were measured from the video images. The dynamic response is assessed without the need for a reflector on the bridge and in the presence of various forms of luminous complexity in the video image scenes. Vertical deformations of the bridge were measured in this regard. The video image tracking for the measurement of dynamic responses of the bridge was based on correlating patches in time-lagged scenes in video images and utilising a zero mean normalised cross correlation (ZNCC) metric. The bridge was excited by designed pedestrian movement and by individual cyclists traversing the bridge. The time series data of dynamic displacement responses of the bridge were analysed to obtain the frequency domain response. Frequencies obtained from video analysis were checked against accelerometer data from the bridge obtained while carrying out the same set of experiments used for video image based recognition.
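    The ZNCC similarity used for the patch tracking above has a compact definition; a direct numpy version for two equal-size patches:

```python
import numpy as np

def zncc(patch, candidate):
    """Zero mean normalised cross correlation: +1 for a perfect match, so
    the tracker picks the candidate location maximizing this score."""
    a = patch.astype(np.float64) - patch.mean()
    b = candidate.astype(np.float64) - candidate.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0
```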

  2. Robust real-time horizon detection in full-motion video

    NASA Astrophysics Data System (ADS)

    Young, Grace B.; Bagnall, Bryan; Lane, Corey; Parameswaran, Shibin

    2014-06-01

    The ability to detect the horizon on a real-time basis in full-motion video is an important capability to aid and facilitate real-time processing of full-motion videos for purposes such as object detection, recognition and other video/image segmentation applications. In this paper, we propose a method for real-time horizon detection that is designed to be used as a front-end processing unit for a real-time marine object detection system that carries out object detection and tracking on full-motion videos captured by ship/harbor-mounted cameras, Unmanned Aerial Vehicles (UAVs) or any other method of surveillance for Maritime Domain Awareness (MDA). Unlike existing horizon detection work, we cannot assume a priori the angle or nature (e.g. a straight line) of the horizon, due to the nature of the application domain and the data. Therefore, the proposed real-time algorithm is designed to identify the horizon at any angle and irrespective of objects appearing close to and/or occluding the horizon line (e.g. trees, vehicles at a distance) by accounting for its non-linear nature. We use a simple two-stage hierarchical methodology, leveraging color-based features, to quickly isolate the region of the image containing the horizon and then perform a more fine-grained horizon detection operation. In this paper, we present our real-time horizon detection results using our algorithm on real-world full-motion video data from a variety of surveillance sensors like UAVs and ship-mounted cameras, confirming the real-time applicability of this method and its ability to detect the horizon with no a priori assumptions.
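
    The coarse-to-fine idea can be sketched as follows. This toy version substitutes a simple vertical-gradient cue for the paper's color-based features and is not the authors' algorithm; the band width is an arbitrary assumption.

```python
import cv2
import numpy as np

def detect_horizon(frame_bgr: np.ndarray, band: int = 40) -> np.ndarray:
    """Two-stage sketch: coarsely locate the horizon band, then refine per column."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    grad = np.abs(cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=5))
    # Stage 1: the image row with the strongest mean vertical gradient.
    coarse_row = int(np.argmax(grad.mean(axis=1)))
    top = max(coarse_row - band, 0)
    bottom = min(coarse_row + band, gray.shape[0])
    # Stage 2: per-column refinement inside the band, allowing a non-linear horizon.
    return top + np.argmax(grad[top:bottom], axis=0)  # one row index per column
```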

  3. Variable disparity-motion estimation based fast three-view video coding

    NASA Astrophysics Data System (ADS)

    Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo

    2009-02-01

    In this paper, variable disparity-motion estimation (VDME) based 3-view video coding is proposed. In the encoder, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are performed for fast and effective three-view video encoding. These proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of the accuracy of disparity estimation and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm achieves PSNRs of 37.66 and 40.55 dB and processing times of 0.139 and 0.124 sec/frame, respectively.

  4. Obstacles encountered in the development of the low vision enhancement system.

    PubMed

    Massof, R W; Rickman, D L

    1992-01-01

    The Johns Hopkins Wilmer Eye Institute and the NASA Stennis Space Center are collaborating on the development of a new high technology low vision aid called the Low Vision Enhancement System (LVES). The LVES consists of a binocular head-mounted video display system, video cameras mounted on the head-mounted display, and real-time video image processing in a system package that is battery powered and portable. Through a phased development approach, several generations of the LVES can be made available to the patient in a timely fashion. This paper describes the LVES project with major emphasis on technical problems encountered or anticipated during the development process.

  5. Using Image Analysis to Explore Changes In Bacterial Mat Coverage at the Base of a Hydrothermal Vent within the Caldera of Axial Seamount

    NASA Astrophysics Data System (ADS)

    Knuth, F.; Crone, T. J.; Marburg, A.

    2017-12-01

    The Ocean Observatories Initiative's (OOI) Cabled Array is delivering real-time high-definition video data from an HD video camera (CAMHD), installed at the Mushroom hydrothermal vent in the ASHES hydrothermal vent field within the caldera of Axial Seamount, an active submarine volcano located approximately 450 kilometers off the coast of Washington at a depth of 1,542 m. Every three hours the camera pans, zooms and focuses in on nine distinct scenes of scientific interest across the vent, producing 14-minute-long videos during each run. This standardized video sampling routine enables scientists to programmatically analyze the content of the video using automated image analysis techniques. Each scene-specific time series dataset can service a wide range of scientific investigations, including the estimation of bacterial flux into the system by quantifying chemosynthetic bacterial clusters (floc) present in the water column, relating periodicity in hydrothermal vent fluid flow to earth tides, measuring vent chimney growth in response to changing hydrothermal fluid flow rates, or mapping the patterns of fauna colonization, distribution and composition across the vent over time. We are currently investigating the seventh scene in the sampling routine, focused on the bacterial mat covering the seafloor at the base of the vent. We quantify the change in bacterial mat coverage over time using image analysis techniques, and examine the relationship between mat coverage, fluid flow processes, episodic chimney collapse events, and other processes observed by Cabled Array instrumentation. This analysis is being conducted using cloud-enabled computer vision processing techniques, programmatic image analysis, and time-lapse video data collected over the course of the first CAMHD deployment, from November 2015 to July 2016.
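
    As a toy illustration of quantifying mat coverage from a video frame, the sketch below classifies bright pixels as bacterial mat and reports the covered fraction. The threshold and the single-channel cue are placeholder assumptions, far simpler than the cloud-based computer vision pipeline described above.

```python
import cv2

def mat_coverage_fraction(frame_bgr, bright_thresh: int = 200) -> float:
    """Fraction of pixels classified as (white) bacterial mat in one frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return float((gray > bright_thresh).mean())

# Applied to the mat-facing scene of each camera run, this yields a coverage
# time series that can be compared against fluid flow and collapse events.
```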

  6. Compression Algorithm Analysis of In-Situ (S)TEM Video: Towards Automatic Event Detection and Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teuton, Jeremy R.; Griswold, Richard L.; Mehdi, Beata L.

    Precise analysis of (S)TEM images and video is a time- and labor-intensive process. As an example, determining when crystal growth and shrinkage occur during the dynamic process of Li dendrite deposition and stripping involves manually scanning through each frame in the video to extract a specific set of frames/images. For large numbers of images, this process can be very time consuming, so a fast and accurate automated method is desirable. Given this need, we developed software that uses analysis of video compression statistics for detecting and characterizing events in large data sets. This software works by converting the data into a series of images which it compresses into an MPEG-2 video using the open source “avconv” utility [1]. The software does not use the video itself, but rather analyzes the video statistics from the first pass of the video encoding that avconv records in the log file. This file contains statistics for each frame of the video including the frame quality, intra-texture and predicted texture bits, and forward and backward motion vector resolution, among others. In all, avconv records 15 statistics for each frame. By combining different statistics, we have been able to detect events in various types of data. We have developed an interactive tool for exploring the data and the statistics that aids the analyst in selecting useful statistics for each analysis. Going forward, an algorithm for detecting and possibly describing events automatically can be written based on statistic(s) for each data type.
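
    A sketch of the log-mining idea follows. The key/value pattern and the field name used are assumptions for illustration; the actual first-pass log format should be checked against the avconv version in use.

```python
import re

STAT = re.compile(r"([\w-]+)[:=]([-+]?\d*\.?\d+)")

def parse_first_pass_log(path: str) -> list[dict]:
    """Collect per-frame encoder statistics from a two-pass log file."""
    frames = []
    with open(path) as log:
        for line in log:
            stats = {k: float(v) for k, v in STAT.findall(line)}
            if stats:
                frames.append(stats)
    return frames

def flag_events(frames: list[dict], key: str = "itex", k: float = 3.0) -> list[int]:
    """Flag frames whose chosen statistic jumps k sigmas above its mean."""
    values = [f[key] for f in frames if key in f]
    mean = sum(values) / len(values)
    sigma = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [i for i, v in enumerate(values) if v > mean + k * sigma]
```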

  7. Adaptive compressed sensing of multi-view videos based on the sparsity estimation

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-11-01

    Conventional compressive sensing for video is based on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, video reconstruction quality suffers. First, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the sparsity of multi-view videos is proposed, based on the two-dimensional discrete wavelet transform (2D DWT). Given an energy threshold, the DWT coefficients are energy-normalized and sorted in descending order, and the sparsity of the multi-view video is obtained as the proportion of dominant coefficients. Finally, simulation results show that the method estimates the sparsity of video frames effectively and provides a practical basis for selecting the number of compressive observations. They also show that, since the number of observations is selected according to the sparsity estimated under the given energy threshold, the proposed method can ensure the reconstruction quality of multi-view videos.
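
    A minimal sketch of the described sparsity estimate, assuming the PyWavelets package; the wavelet ('db4'), decomposition level, and default threshold are illustrative choices, not the paper's.

```python
import numpy as np
import pywt

def estimate_sparsity(frame: np.ndarray, energy_threshold: float = 0.99) -> float:
    """Proportion of 2D DWT coefficients needed to retain the given energy share."""
    coeffs = pywt.wavedec2(frame.astype(np.float64), "db4", level=3)
    details = [d for level in coeffs[1:] for d in level]
    flat = np.concatenate([c.ravel() for c in [coeffs[0]] + details])
    energy = np.sort(flat ** 2)[::-1]          # coefficient energies, descending
    energy /= energy.sum()                     # energy normalization
    k = int(np.searchsorted(np.cumsum(energy), energy_threshold)) + 1
    return k / flat.size                       # fraction of dominant coefficients

# The estimated fraction can then drive the number of BCS measurements per block.
```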

  8. 20 CFR 404.936 - Time and place for a hearing before an administrative law judge.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Time and place for a hearing before an...-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Determinations, Administrative Review Process, and... video teleconferencing if video teleconferencing technology is available to conduct the appearance, use...

  9. Advanced Video Analysis Needs for Human Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Campbell, Paul D.

    1994-01-01

    Evaluators of human task performance in space missions make use of video as a primary source of data. Extraction of relevant human performance information from video is often a labor-intensive process requiring a large amount of time on the part of the evaluator. Based on the experiences of several human performance evaluators, needs were defined for advanced tools which could aid in the analysis of video data from space missions. Such tools should increase the efficiency with which useful information is retrieved from large quantities of raw video. They should also provide the evaluator with new analytical functions which are not present in currently used methods. Video analysis tools based on the needs defined by this study would also have uses in U.S. industry and education. Evaluation of human performance from video data can be a valuable technique in many industrial and institutional settings where humans are involved in operational systems and processes.

  10. Replacing Non-Active Video Gaming by Active Video Gaming to Prevent Excessive Weight Gain in Adolescents.

    PubMed

    Simons, Monique; Brug, Johannes; Chinapaw, Mai J M; de Boer, Michiel; Seidell, Jaap; de Vet, Emely

    2015-01-01

    The aim of the current study was to evaluate the effects of and adherence to an active video game promotion intervention on anthropometrics, sedentary screen time and consumption of sugar-sweetened beverages and snacks among non-active video gaming adolescents who primarily were of healthy weight. We assigned 270 gaming (i.e. ≥ 2 hours/week non-active video game time) adolescents randomly to an intervention group (n = 140) (receiving active video games and encouragement to play) or a waiting-list control group (n = 130). BMI-SDS (SDS = adjusted for mean standard deviation score), waist circumference-SDS, hip circumference and sum of skinfolds were measured at baseline, at four and ten months follow-up (primary outcomes). Sedentary screen time, physical activity, consumption of sugar-sweetened beverages and snacks, and process measures (not at baseline) were assessed with self-reports at baseline, one, four and ten months follow-up. Multilevel intention-to-treat regression analyses were conducted. The control group decreased significantly more than the intervention group on BMI-SDS (β = 0.074, 95%CI: 0.008;0.14), and sum of skinfolds (β = 3.22, 95%CI: 0.27;6.17) (overall effects). The intervention group had a significantly higher decrease in self-reported non-active video game time (β = -1.76, 95%CI: -3.20;-0.32) and total sedentary screen time (Exp(β) = 0.81, 95%CI: 0.74;0.88) than the control group (overall effects). The process evaluation showed that 14% of the adolescents played the Move video games every week ≥ 1 hour/week during the whole intervention period. The active video game intervention did not result in lower values on anthropometrics in a group of 'excessive' non-active video gamers (mean ~ 14 hours/week) who primarily were of healthy weight compared to a control group throughout a ten-month period. Even some effects in the unexpected direction were found, with the control group showing lower BMI-SDS and skinfolds than the intervention group. The intervention did result in less self-reported sedentary screen time, although these results are likely biased by social desirability. Dutch Trial Register NTR3228.

  11. Time-lapse videos for physics education: specific examples

    NASA Astrophysics Data System (ADS)

    Vollmer, Michael; Möllmann, Klaus-Peter

    2018-05-01

    There are many physics experiments with time scales so long that they are usually shown neither in the physics classroom nor in student labs. However, they can easily be recorded with time-lapse cameras, and the respective time-lapse videos allow qualitative and/or quantitative analysis of the underlying physics. Here, we present some examples from thermal physics (melting, evaporation, cooling) as well as diffusion processes.

  12. Process for producing laser-formed video calibration markers.

    PubMed

    Franck, J B; Keller, P N; Swing, R A; Silberberg, G G

    1983-08-15

    A process for producing calibration markers directly on the photoconductive surface of video camera tubes has been developed. This process includes the use of a Nd:YAG laser operating at 1.06 µm with a 9.5-nsec pulse width (full width at half-maximum). The laser was constrained to operate in the TEM(00) spatial mode by intracavity aperturing. The use of this technology has produced an increase of up to 50 times in the accuracy of geometric measurement. This is accomplished by a decrease in geometric distortion and an increase in geometric scaling. The process by which these laser-formed video calibration markers are made will be discussed.

  13. Seeing Change in Time: Video Games to Teach about Temporal Change in Scientific Phenomena

    NASA Astrophysics Data System (ADS)

    Corredor, Javier; Gaydos, Matthew; Squire, Kurt

    2014-06-01

    This article explores how learning biological concepts can be facilitated by playing a video game that depicts interactions and processes at the subcellular level. Particularly, this article reviews the effects of a real-time strategy game that requires players to control the behavior of a virus and interact with cell structures in a way that resembles the actual behavior of biological agents. The evaluation of the video game presented here aims at showing that video games have representational advantages that facilitate the construction of dynamic mental models. Ultimately, the article shows that when a video game's characteristics come in contact with expert knowledge during game design, the game becomes an excellent medium for supporting the learning of disciplinary content related to dynamic processes. In particular, results show that students who participated in a game-based intervention aimed at teaching biology described a higher number of temporal-dependent interactions, as measured by the coding of verbal protocols and drawings, than students who used texts and diagrams to learn the same topic.

  14. Video image processor on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/

    NASA Technical Reports Server (NTRS)

    Lindgren, R. W.; Tarbell, T. D.

    1981-01-01

    The SOUP instrument is designed to obtain diffraction-limited digital images of the sun with high photometric accuracy. The Video Processor originated from the requirement to provide onboard real-time image processing, both to reduce the telemetry rate and to provide meaningful video displays of scientific data to the payload crew. This original concept has evolved into a versatile digital processing system with a multitude of other uses in the SOUP program. The central element in the Video Processor design is a 16-bit central processing unit based on 2900 family bipolar bit-slice devices. All arithmetic, logical and I/O operations are under control of microprograms, stored in programmable read-only memory and initiated by commands from the LSI-11. Several functions of the Video Processor are described, including interface to the High Rate Multiplexer downlink, cosmetic and scientific data processing, scan conversion for crew displays, focus and exposure testing, and use as ground support equipment.

  15. Characterization, adaptive traffic shaping, and multiplexing of real-time MPEG II video

    NASA Astrophysics Data System (ADS)

    Agrawal, Sanjay; Barry, Charles F.; Binnai, Vinay; Kazovsky, Leonid G.

    1997-01-01

    We obtain a network traffic model for real-time MPEG-II encoded digital video by analyzing video stream samples from real-time encoders from NUKO Information Systems. The MPEG-II sample streams include a resolution-intensive movie, City of Joy, an action-intensive movie, Aliens, a luminance-intensive (black and white) movie, Road To Utopia, and a chrominance-intensive (color) movie, Dick Tracy. From our analysis we obtain a heuristic model for the encoded video traffic which uses a 15-stage Markov process to model the I, B, P frame sequences within a group of pictures (GOP). A jointly correlated Gaussian process is used to model the individual frame sizes. Scene change arrivals are modeled according to a gamma process. Simulations show that our MPEG-II traffic model generates I, B, P frame sequences and frame sizes that closely match the sample MPEG-II stream traffic characteristics as they relate to latency and buffer occupancy in network queues. To achieve high multiplexing efficiency we propose a traffic shaping scheme which sets preferred I-frame generation times among a group of encoders so as to minimize the overall variation in total offered traffic while still allowing the individual encoders to react to scene changes. Simulations show that our scheme results in multiplexing gains of up to 10%, enabling us to multiplex twenty 6 Mbps MPEG-II video streams instead of 18 streams over an ATM/SONET OC3 link without latency or cell loss penalty. A patent is pending for this scheme.
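
    The model's components can be combined into a simple frame-size synthesizer. The GOP pattern below matches the 15-stage structure described above, but the mean/deviation values are invented placeholders, and an AR(1) process stands in for the paper's jointly correlated Gaussian model (scene-change modulation via the gamma process is omitted).

```python
import numpy as np

rng = np.random.default_rng(0)
GOP = "IBBPBBPBBPBBPBB"  # 15-frame group of pictures: one Markov stage per frame
MEAN = {"I": 120_000, "P": 60_000, "B": 30_000}  # illustrative mean sizes (bits)
STD = {"I": 15_000, "P": 10_000, "B": 6_000}

def synthesize_frame_sizes(n_frames: int, rho: float = 0.8) -> np.ndarray:
    """Frame-size trace with AR(1)-correlated Gaussian deviations."""
    sizes, deviation = [], 0.0
    for i in range(n_frames):
        ftype = GOP[i % len(GOP)]
        deviation = rho * deviation + np.sqrt(1 - rho ** 2) * rng.standard_normal()
        sizes.append(max(MEAN[ftype] + STD[ftype] * deviation, 0.0))
    return np.asarray(sizes)
```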

  16. Intergraph video and images exploitation capabilities

    NASA Astrophysics Data System (ADS)

    Colla, Simone; Manesis, Charalampos

    2013-08-01

    The current paper focuses on the capture, fusion and processing of aerial imagery in order to leverage full-motion video, giving analysts the ability to collect, analyze, and maximize the value of video assets. Unmanned aerial vehicles (UAVs) have provided critical real-time surveillance and operational support to military organizations, and are a key source of intelligence, particularly when integrated with other geospatial data. In the current workflow, the UAV operators first plan the flight using flight planning software. During the flight the UAV sends a live video stream directly to the field, where it is processed by Intergraph software to generate and disseminate georeferenced images through a service-oriented architecture based on the ERDAS Apollo suite. The raw video-based data sources provide the most recent view of a situation and can augment other forms of geospatial intelligence - such as satellite imagery and aerial photos - to provide a richer, more detailed view of the area of interest. To effectively use video as a source of intelligence, however, the analyst needs to seamlessly fuse the video with these other types of intelligence, such as map features and annotations. Intergraph has developed an application that automatically generates mosaicked, georeferenced images and tags along the video route, which can then be seamlessly integrated with other forms of static data, such as aerial photos, satellite imagery, or geospatial layers and features. Consumers will finally have the ability to use a single, streamlined system to complete the entire geospatial information lifecycle: capturing geospatial data using sensor technology; processing vector, raster, and terrain data into actionable information; managing, fusing, and sharing geospatial data and video together; and finally, rapidly and securely delivering integrated information products, ensuring individuals can make timely decisions.

  17. Increasing Speed of Processing With Action Video Games

    PubMed Central

    Dye, Matthew W.G.; Green, C. Shawn; Bavelier, Daphne

    2010-01-01

    In many everyday situations, speed is of the essence. However, fast decisions typically mean more mistakes. To this day, it remains unknown whether reaction times can be reduced with appropriate training, within one individual, across a range of tasks, and without compromising accuracy. Here we review evidence that the very act of playing action video games significantly reduces reaction times without sacrificing accuracy. Critically, this increase in speed is observed across various tasks beyond game situations. Video gaming may therefore provide an efficient training regimen to induce a general speeding of perceptual reaction times without decreases in accuracy of performance. PMID:20485453

  18. Video-assisted palatopharyngeal surgery: a model for improved education and training.

    PubMed

    Allori, Alexander C; Marcus, Jeffrey R; Daluvoy, Sanjay; Bond, Jennifer

    2014-09-01

    Objective: The learning process for intraoral procedures is arguably more difficult than for other surgical procedures because of the assistant's severely limited visibility. Consequently, trainees may not be able to adequately see and follow all steps of the procedure, and attending surgeons may be less willing to entrust trainees with critical portions of the procedure. In this report, we propose a video-assisted approach to intraoral procedures that improves lighting, visibility, and the potential for effective education and training. Design: Technical report (idea/innovation). Setting: Tertiary referral hospital. Patients: Children with cleft palate and velopharyngeal insufficiency requiring surgery. Interventions: Video-assisted palatoplasty, sphincteroplasty, and pharyngoplasty. Main Outcome Measures: Qualitative and semiquantitative educational outcomes, including learner perception regarding "real-time" (video-assisted surgery) and "non-real-time" (video-library-based) surgical education. Results: Trainees were strongly in favor of the video-assisted modality in "real-time" surgical training. Senior trainees identified more opportunities in which they had been safely entrusted to perform critical portions of the procedure, corresponding with satisfaction with the learning process scores, and they showed greater comfort/confidence scores related to performing the procedure under supervision and alone. Conclusions: Adoption of the video-assisted approach can be expected to markedly improve the learning curve for surgeons in training. This is now standard practice at our institution. We are presently conducting a full educational technology assessment to better characterize the effect on knowledge acquisition and technical improvement.

  19. Recognising safety critical events: can automatic video processing improve naturalistic data analyses?

    PubMed

    Dozza, Marco; González, Nieves Pañeda

    2013-11-01

    New trends in research on traffic accidents include Naturalistic Driving Studies (NDS). NDS are based on large-scale data collection of driver, vehicle, and environment information in the real world. NDS data sets have proven to be extremely valuable for the analysis of safety critical events such as crashes and near crashes. However, finding safety critical events in NDS data is often difficult and time consuming. Safety critical events are currently identified using kinematic triggers, for instance searching for deceleration below a certain threshold signifying harsh braking. Due to the low sensitivity and specificity of this filtering procedure, manual review of video data is currently necessary to decide whether the events identified by the triggers are actually safety critical. Such a reviewing procedure is based on subjective decisions, is expensive and time consuming, and is often tedious for the analysts. Furthermore, since NDS data is growing exponentially over time, this reviewing procedure may no longer be viable in the very near future. This study tested the hypothesis that automatic processing of driver video information could increase the correct classification of safety critical events from kinematic triggers in naturalistic driving data. Review of about 400 video sequences recorded from the events, collected by 100 Volvo cars in the euroFOT project, suggested that drivers' individual reactions may be the key to recognizing safety critical events. In fact, whether an event is safety critical or not often depends on the individual driver. A few algorithms, able to automatically classify driver reaction from video data, have been compared. The results presented in this paper show that the state-of-the-art subjective review procedures to identify safety critical events from NDS can benefit from automated objective video processing. In addition, this paper discusses the major challenges in making such video analysis viable for future NDS and new potential applications for NDS video processing. As new NDS such as SHRP2 are now providing the equivalent of five years of one-vehicle data each day, the development of new methods, such as the one proposed in this paper, seems necessary to guarantee that these data can actually be analysed.
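
    A kinematic trigger of the kind described, flagging harsh braking from a recorded speed trace, can be sketched in a few lines; the -4 m/s² threshold is an arbitrary illustrative value.

```python
import numpy as np

def harsh_braking_indices(speed_mps: np.ndarray, sample_rate_hz: float,
                          threshold: float = -4.0) -> np.ndarray:
    """Sample indices where longitudinal acceleration falls below the threshold."""
    accel = np.gradient(speed_mps) * sample_rate_hz  # finite-difference accel, m/s^2
    return np.flatnonzero(accel < threshold)

# Each flagged index would then be confirmed (today manually, prospectively via
# automated video processing of the driver's reaction) as safety critical or not.
```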

  1. Video techniques and data compared with observation in emergency trauma care

    PubMed Central

    Mackenzie, C; Xiao, Y

    2003-01-01

    Video recording is underused in improving patient safety and understanding performance shaping factors in patient care. We report our experience of using video recording techniques in a trauma centre, including how to gain cooperation of clinicians for video recording of their workplace performance, identify strengths of video compared with observation, and suggest processes for consent and maintenance of confidentiality of video records. Video records are a rich source of data for documenting clinician performance which reveal safety and systems issues not identified by observation. Emergency procedures and video records of critical events identified patient safety, clinical, quality assurance, systems failures, and ergonomic issues. Video recording is a powerful feedback and training tool and provides a reusable record of events that can be repeatedly reviewed and used as research data. It allows expanded analyses of time critical events, trauma resuscitation, anaesthesia, and surgical tasks. To overcome some of the key obstacles in deploying video recording techniques, researchers should (1) develop trust with video recorded subjects, (2) obtain clinician participation for introduction of a new protocol or line of investigation, (3) report aggregated video recorded data and use clinician reviews for feedback on covert processes and cognitive analyses, and (4) involve multidisciplinary experts in medicine and nursing. PMID:14645896

  2. Enhanced visual short-term memory in action video game players.

    PubMed

    Blacker, Kara J; Curby, Kim M

    2013-08-01

    Visual short-term memory (VSTM) is critical for acquiring visual knowledge and shows marked individual variability. Previous work has illustrated a VSTM advantage among action video game players (Boot et al. Acta Psychologica 129:387-398, 2008). A growing body of literature has suggested that action video game playing can bolster visual cognitive abilities in a domain-general manner, including abilities related to visual attention and the speed of processing, providing some potential bases for this VSTM advantage. In the present study, we investigated the VSTM advantage among video game players and assessed whether enhanced processing speed can account for this advantage. Experiment 1, using simple colored stimuli, revealed that action video game players demonstrate a similar VSTM advantage over nongamers, regardless of whether they are given limited or ample time to encode items into memory. Experiment 2, using complex shapes as the stimuli to increase the processing demands of the task, replicated this VSTM advantage, irrespective of encoding duration. These findings are inconsistent with a speed-of-processing account of this advantage. An alternative, attentional account, grounded in the existing literature on the visuo-cognitive consequences of video game play, is discussed.

  3. Low-complexity image processing for real-time detection of neonatal clonic seizures.

    PubMed

    Ntonfo, Guy Mathurin Kouamou; Ferrari, Gianluigi; Raheli, Riccardo; Pisani, Francesco

    2012-05-01

    In this paper, we consider a novel low-complexity real-time image-processing-based approach to the detection of neonatal clonic seizures. Our approach is based on the extraction, from a video of a newborn, of an average luminance signal representative of the body movements. Since clonic seizures are characterized by periodic movements of parts of the body (e.g., the limbs), by evaluating the periodicity of the extracted average luminance signal it is possible to detect the presence of a clonic seizure. The periodicity is investigated, through a hybrid autocorrelation-Yin estimation technique, on a per-window basis, where a time window is defined as a sequence of consecutive video frames. While processing is first carried out on a single window basis, we extend our approach to interlaced windows. The performance of the proposed detection algorithm is investigated, in terms of sensitivity and specificity, through receiver operating characteristic curves, considering video recordings of newborns affected by neonatal seizures.
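
    The two core steps, extracting the average luminance signal and scoring its periodicity per window, can be sketched as follows. The plain autocorrelation score below is a simplification of the paper's hybrid autocorrelation-Yin estimator, and the minimum lag is an illustrative parameter.

```python
import cv2
import numpy as np

def mean_luminance_signal(video_path: str) -> np.ndarray:
    """Per-frame average luminance: a 1-D proxy for body movement."""
    cap = cv2.VideoCapture(video_path)
    values = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        values.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).mean())
    cap.release()
    return np.asarray(values)

def periodicity_score(window: np.ndarray, min_lag: int = 5) -> float:
    """Strength of the dominant autocorrelation peak beyond small lags (0..1)."""
    x = window - window.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]
    if ac[0] <= 0 or ac.size <= min_lag:
        return 0.0
    return float((ac / ac[0])[min_lag:].max())  # high values suggest clonic rhythm
```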

  4. Replacing Non-Active Video Gaming by Active Video Gaming to Prevent Excessive Weight Gain in Adolescents

    PubMed Central

    Simons, Monique; Brug, Johannes; Chinapaw, Mai J. M.; de Boer, Michiel; Seidell, Jaap; de Vet, Emely

    2015-01-01

    Objective The aim of the current study was to evaluate the effects of and adherence to an active video game promotion intervention on anthropometrics, sedentary screen time and consumption of sugar-sweetened beverages and snacks among non-active video gaming adolescents who primarily were of healthy weight. Methods We assigned 270 gaming (i.e. ≥2 hours/week non-active video game time) adolescents randomly to an intervention group (n = 140) (receiving active video games and encouragement to play) or a waiting-list control group (n = 130). BMI-SDS (SDS = adjusted for mean standard deviation score), waist circumference-SDS, hip circumference and sum of skinfolds were measured at baseline, at four and ten months follow-up (primary outcomes). Sedentary screen time, physical activity, consumption of sugar-sweetened beverages and snacks, and process measures (not at baseline) were assessed with self-reports at baseline, one, four and ten months follow-up. Multilevel intention-to-treat regression analyses were conducted. Results The control group decreased significantly more than the intervention group on BMI-SDS (β = 0.074, 95%CI: 0.008;0.14), and sum of skinfolds (β = 3.22, 95%CI: 0.27;6.17) (overall effects). The intervention group had a significantly higher decrease in self-reported non-active video game time (β = -1.76, 95%CI: -3.20;-0.32) and total sedentary screen time (Exp(β) = 0.81, 95%CI: 0.74;0.88) than the control group (overall effects). The process evaluation showed that 14% of the adolescents played the Move video games every week ≥1 hour/week during the whole intervention period. Conclusions The active video game intervention did not result in lower values on anthropometrics in a group of 'excessive' non-active video gamers (mean ~ 14 hours/week) who primarily were of healthy weight compared to a control group throughout a ten-month period. Even some effects in the unexpected direction were found, with the control group showing lower BMI-SDS and skinfolds than the intervention group. The intervention did result in less self-reported sedentary screen time, although these results are likely biased by social desirability. Trial Registration Dutch Trial Register NTR3228 PMID:26153884

  5. Do Video Reviews of Therapy Sessions Help People with Mild Intellectual Disabilities Describe Their Perceptions of Cognitive Behaviour Therapy?

    ERIC Educational Resources Information Center

    Burford, B.; Jahoda, A.

    2012-01-01

    Background: This study examined the potential of a retrospective video reviewing process [Burford Reviewing Process (BRP)] for enabling people with intellectual disabilities to describe their experiences of cognitive behaviour therapy (CBT). It is the first time that the BRP, described in this paper, has been used with people with intellectual…

  6. Data streaming in telepresence environments.

    PubMed

    Lamboray, Edouard; Würmlin, Stephan; Gross, Markus

    2005-01-01

    In this paper, we discuss data transmission in telepresence environments for collaborative virtual reality applications. We analyze data streams in the context of networked virtual environments and classify them according to their traffic characteristics. Special emphasis is put on geometry-enhanced (3D) video. We review architectures for real-time 3D video pipelines and derive theoretical bounds on the minimal system latency as a function of the transmission and processing delays. Furthermore, we discuss bandwidth issues of differential update coding for 3D video. In our telepresence system, the blue-c, we use a point-based 3D video technology which allows for differentially encoded 3D representations of human users. While we discuss the considerations which led to the design of our three-stage 3D video pipeline, we also elucidate some critical implementation details regarding the decoupling of acquisition, processing and rendering frame rates, and audio/video synchronization. Finally, we demonstrate the communication and networking features of the blue-c system in its full deployment. We show how the system can be controlled to cope with processing or networking bottlenecks by adapting the multiple system components, such as audio, application data, and 3D video.

  7. Heterogeneous CPU-GPU moving targets detection for UAV video

    NASA Astrophysics Data System (ADS)

    Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan

    2017-07-01

    Moving target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras mounted on UAVs. The pixels belonging to moving targets in HD video taken by a UAV are always in the minority, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of detection algorithms prevents running them at the full resolution of the frame. Hence, to solve the problem of moving target detection in UAV video, we propose a heterogeneous CPU-GPU moving target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. In order to achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, and the average processing time is 52.16 ms per frame, which is fast enough to solve the problem.
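
    The registration-plus-differencing idea can be sketched on the CPU with OpenCV. The GPU offload that makes it real-time in the paper is omitted, and the corner-tracking registration below is a generic stand-in for the authors' background registration step.

```python
import cv2
import numpy as np

def moving_target_mask(prev_bgr: np.ndarray, curr_bgr: np.ndarray,
                       diff_thresh: int = 25) -> np.ndarray:
    g0 = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    # Estimate global (camera/background) motion from tracked corners.
    p0 = cv2.goodFeaturesToTrack(g0, maxCorners=400, qualityLevel=0.01, minDistance=8)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(g0, g1, p0, None)
    good = status.ravel() == 1
    warp, _ = cv2.estimateAffine2D(p0[good], p1[good], method=cv2.RANSAC)
    # Register the previous frame onto the current one, cancelling camera motion.
    registered = cv2.warpAffine(g0, warp, (g1.shape[1], g1.shape[0]))
    # The residual frame difference highlights small moving targets.
    _, mask = cv2.threshold(cv2.absdiff(g1, registered), diff_thresh, 255,
                            cv2.THRESH_BINARY)
    return mask
```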

  8. Fast and predictable video compression in software design and implementation of an H.261 codec

    NASA Astrophysics Data System (ADS)

    Geske, Dagmar; Hess, Robert

    1998-09-01

    The use of software codecs for video compression is becoming commonplace in several videoconferencing applications. In order to reduce conflicts with other applications used at the same time, mechanisms for resource reservation on end systems need to determine an upper bound for the computing time used by the codec. This leads to the demand for predictable execution times of compression/decompression. Since compression schemes such as H.261 inherently depend on the motion contained in the video, an adaptive admission control is required. This paper presents a data-driven approach based on dynamically reducing the number of processed macroblocks in peak situations. Beyond predictability, absolute speed is also a point of interest. The question whether, and how, software compression of high-quality video is feasible on today's desktop computers is examined.

  9. Real-time demonstration hardware for enhanced DPCM video compression algorithm

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.

    1992-01-01

    The lack of available wideband digital links as well as the complexity of implementation of bandwidth efficient digital video CODECs (encoder/decoder) has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high quality video from space and therefore, has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, non-uniform quantizer and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development along with implementation of a buffer control algorithm to accommodate the variable data rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals where high quality reconstruction is desirable (e.g., Space Station video transmission, transmission direct-to-the-home via direct broadcast satellite systems or cable television distribution to system headends and direct-to-the-home).
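
    The encoder core, a fixed previous-pixel predictor followed by a non-uniform quantiser, can be sketched as follows. The quantiser levels are illustrative, not the ones used in the NASA hardware, and the downstream multilevel Huffman stage is omitted.

```python
import numpy as np

# Non-uniform quantiser: fine levels near zero, coarse levels for large errors.
LEVELS = np.array([-80.0, -40.0, -16.0, -4.0, 0.0, 4.0, 16.0, 40.0, 80.0])

def dpcm_encode_row(row: np.ndarray) -> list[int]:
    """DPCM of one scan line with a fixed (non-adaptive) previous-pixel predictor."""
    prediction, indices = 0.0, []
    for pixel in row.astype(np.float64):
        error = pixel - prediction                      # prediction error
        idx = int(np.argmin(np.abs(LEVELS - error)))    # quantise the error
        indices.append(idx)
        # Track the decoder's reconstruction so both sides stay in sync.
        prediction = float(np.clip(prediction + LEVELS[idx], 0.0, 255.0))
    return indices  # these symbols would feed the multilevel Huffman coder
```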

  10. Real-time high-level video understanding using data warehouse

    NASA Astrophysics Data System (ADS)

    Lienard, Bruno; Desurmont, Xavier; Barrie, Bertrand; Delaigle, Jean-Francois

    2006-02-01

    High-level video content analysis, such as video surveillance, is often limited by the computational aspects of automatic image understanding; i.e., it requires huge computing resources for reasoning processes like categorization and huge amounts of data to represent knowledge of objects, scenarios and other models. This article explains how to design and develop a "near real-time adaptive image datamart", used first as a decisional support system for vision algorithms and then as a mass storage system. Using the RDF specification as the storage format for vision algorithms' meta-data, we can optimise the data warehouse concepts for video analysis, add processes able to adapt the current model, and pre-process data to speed up queries. In this way, when new data is sent from a sensor to the data warehouse for long-term storage, using remote procedure calls embedded in object-oriented interfaces to simplify queries, the data are processed and the in-memory data model is updated. After some processing, possible interpretations of this data can be returned to the sensor. To demonstrate this new approach, we present typical scenarios applied to this architecture, such as people tracking and event detection in a multi-camera network. Finally we show how this system becomes a high-semantic data container for external data mining.

  11. Extraction and analysis of neuron firing signals from deep cortical video microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerekes, Ryan A; Blundon, Jay

    We introduce a method for extracting and analyzing neuronal activity time signals from video of the cortex of a live animal. The signals correspond to the firing activity of individual cortical neurons. Activity signals are based on the changing fluorescence of calcium indicators in the cells over time. We propose a cell segmentation method that relies on a user-specified center point, from which the signal extraction method proceeds. A stabilization approach is used to reduce tissue motion in the video. The extracted signal is then processed to flatten the baseline and detect action potentials. We show results from applying the method to a cortical video of a live mouse.
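
    The trace post-processing, baseline flattening followed by thresholded event detection, can be sketched as below; the percentile window and the 3-sigma rule are illustrative choices, not the authors' exact parameters.

```python
import numpy as np

def flatten_baseline(trace: np.ndarray, window: int = 101) -> np.ndarray:
    """Remove slow baseline drift with a running low-percentile estimate."""
    half = window // 2
    baseline = np.array([np.percentile(trace[max(i - half, 0):i + half + 1], 10)
                         for i in range(trace.size)])
    return trace - baseline

def detect_events(trace: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Samples exceeding k robust standard deviations: candidate action potentials."""
    flat = flatten_baseline(trace)
    sigma = 1.4826 * np.median(np.abs(flat - np.median(flat)))  # MAD-based sigma
    return np.flatnonzero(flat > k * sigma)
```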

  12. Seeing Change in Time: Video Games to Teach about Temporal Change in Scientific Phenomena

    ERIC Educational Resources Information Center

    Corredor, Javier; Gaydos, Matthew; Squire, Kurt

    2014-01-01

    This article explores how learning biological concepts can be facilitated by playing a video game that depicts interactions and processes at the subcellular level. Particularly, this article reviews the effects of a real-time strategy game that requires players to control the behavior of a virus and interact with cell structures in a way that…

  13. A low delay transmission method of multi-channel video based on FPGA

    NASA Astrophysics Data System (ADS)

    Fu, Weijian; Wei, Baozhi; Li, Xiaobin; Wang, Quan; Hu, Xiaofei

    2018-03-01

    In order to guarantee the fluency of multi-channel video transmission in video monitoring scenarios, we designed an FPGA-based video format conversion method and a DMA scheduling scheme for the video data, which together reduce the overall video transmission delay. In order to save time in the conversion process, the parallelism of the FPGA is exploited for video format conversion. In order to improve the direct memory access (DMA) write transmission rate of the PCIe bus, a DMA scheduling method based on an asynchronous command buffer is proposed. The experimental results show that the proposed FPGA-based low-delay transmission method increases the DMA write transmission rate by 34% compared with the existing method, reducing the overall video delay to 23.6 ms.

  14. Design and Implementation of a Video-Zoom Driven Digital Audio-Zoom System for Portable Digital Imaging Devices

    NASA Astrophysics Data System (ADS)

    Park, Nam In; Kim, Seon Man; Kim, Hong Kook; Kim, Ji Woon; Kim, Myeong Bo; Yun, Su Won

    In this paper, we propose a video-zoom driven audio-zoom algorithm in order to provide audio zooming effects in accordance with the degree of video zoom. The proposed algorithm is designed around a super-directive beamformer operating with a 4-channel microphone system, in conjunction with a soft masking process that considers the phase differences between microphones. The audio-zoom processed signal is obtained by multiplying an audio gain derived from the video-zoom level by the masked signal. A real-time audio-zoom system is then implemented on an ARM Cortex-A8 with a clock speed of 600 MHz, after several levels of optimization are performed, including algorithmic, C-code, and memory optimizations. To evaluate the complexity of the proposed real-time audio-zoom system, test data 21.3 seconds in length is sampled at 48 kHz. The experiments show that the processing time for the proposed audio-zoom system occupies 14.6% or less of the ARM clock cycles. Experimental results obtained in a semi-anechoic chamber also show that the signal from the front direction can be amplified by approximately 10 dB relative to the other directions.
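
    The gain step described above, multiplying the masked beamformer output by a gain derived from the video-zoom level, reduces to a few lines. The linear zoom-to-dB mapping and the 10 dB ceiling below are assumptions consistent with the reported amplification, not the paper's exact rule.

```python
import numpy as np

def audio_zoom_gain(zoom_level: float, max_gain_db: float = 10.0) -> float:
    """Map a normalised video-zoom level in [0, 1] to a linear amplitude gain."""
    return 10.0 ** (max_gain_db * zoom_level / 20.0)

def apply_audio_zoom(beamformed: np.ndarray, soft_mask: np.ndarray,
                     zoom_level: float) -> np.ndarray:
    """Audio-zoom output: zoom-derived gain times the soft-masked signal."""
    return audio_zoom_gain(zoom_level) * soft_mask * beamformed
```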

  15. Hyperspectral processing in graphical processing units

    NASA Astrophysics Data System (ADS)

    Winter, Michael E.; Winter, Edwin M.

    2011-06-01

    With the advent of the commercial 3D video card in the mid-1990s, we have seen an order-of-magnitude performance increase with each generation of new video cards. While these cards were designed primarily for visualization and video games, it became apparent after a short while that they could be used for scientific purposes. These Graphical Processing Units (GPUs) are rapidly being incorporated into data processing tasks usually reserved for general-purpose computers. It has been found that many image processing problems scale well to modern GPU systems. We have implemented four popular hyperspectral processing algorithms (N-FINDR, linear unmixing, Principal Components, and the RX anomaly detection algorithm). These algorithms show an across-the-board speedup of at least a factor of 10, with some special cases showing extreme speedups of a hundred times or more.
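
    Of the four algorithms, the RX anomaly detector is the easiest to show compactly: it scores each pixel by the Mahalanobis distance of its spectrum from the scene background. The NumPy sketch below is a CPU reference; a GPU port (e.g. swapping numpy for cupy) is where speedups of the kind reported would come from.

```python
import numpy as np

def rx_scores(cube: np.ndarray) -> np.ndarray:
    """Global RX detector for a hyperspectral cube of shape (rows, cols, bands)."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(np.float64)
    mu = pixels.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(pixels, rowvar=False))
    centered = pixels - mu
    # Mahalanobis distance of every pixel spectrum from the background mean.
    scores = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    return scores.reshape(rows, cols)
```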

  16. Collaborative web-based annotation of video footage of deep-sea life, ecosystems and geological processes

    NASA Astrophysics Data System (ADS)

    Kottmann, R.; Ratmeyer, V.; Pop Ristov, A.; Boetius, A.

    2012-04-01

    More and more seagoing scientific expeditions use video-controlled research platforms such as Remotely Operated Vehicles (ROV), Autonomous Underwater Vehicles (AUV), and towed camera systems. These produce many hours of video material which contains detailed and scientifically highly valuable footage of the biological, chemical, geological, and physical aspects of the oceans. Many of the videos contain unique observations of unknown life-forms which are rare and cannot be sampled and studied otherwise. To make such video material accessible online and to create a collaborative annotation environment, the "Video Annotation and processing platform" (V-App) was developed. A first, solely web-based installation for ROV videos has been set up at the German Center for Marine Environmental Sciences (available at http://videolib.marum.de). It allows users to search and watch videos with a standard web browser based on the HTML5 standard. Moreover, V-App implements social web technologies allowing a distributed, world-wide scientific community to collaboratively annotate videos anywhere at any time. Fully implemented features include:
    • User login system for fine-grained permission and access control
    • Video watching
    • Video search using keywords, geographic position, depth and time range, and any combination thereof
    • Video annotation organised in themes (tracks), such as biology and geology, in standard or full-screen mode
    • Annotation keyword management: administrative users can add, delete, and update single keywords for annotation or upload sets of keywords from Excel sheets
    • Download of products for scientific use
    This unique web application system helps make costly ROV videos available online (estimated costs range between 5,000 and 10,000 Euros per hour, depending on the combination of ship and ROV). Moreover, with this system each expert annotation instantly adds valuable knowledge to otherwise uncharted material.

  17. Video-guided calibration of an augmented reality mobile C-arm.

    PubMed

    Chen, Xin; Naik, Hemal; Wang, Lejing; Navab, Nassir; Fallavollita, Pascal

    2014-11-01

    The augmented reality (AR) fluoroscope augments an X-ray image by video and provides the surgeon with a real-time in situ overlay of the anatomy. The overlay alignment is crucial for diagnostic and intra-operative guidance, so precise calibration of the AR fluoroscope is required. The first and most complex step of the calibration procedure is the determination of the X-ray source position. Currently, this is achieved using a biplane phantom with movable metallic rings on its top layer and fixed X-ray opaque markers on its bottom layer. The metallic rings must be moved to positions where at least two pairs of rings and markers are isocentric in the X-ray image. The "trial and error" calibration process currently requires acquisition of many X-ray images, a task that is both time-consuming and radiation-intensive. An improved process was developed and tested for C-arm calibration. Video guidance was used to drive the calibration procedure to minimize both X-ray exposure and the time involved. For this, a homography between X-ray and video images is estimated. This homography is valid for the plane at which the metallic rings are positioned and is employed to guide the calibration procedure. Eight users having varying calibration experience (i.e., 2 experts, 2 semi-experts, 4 novices) were asked to participate in the evaluation. The video-guided technique reduced the number of intra-operative X-ray calibration images by 89% and decreased the total time required by 59%. A video-based C-arm calibration method has been developed that improves the usability of the AR fluoroscope with a friendlier interface, reduced calibration time and clinically acceptable radiation doses.
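
    Once point correspondences on the ring plane are known, estimating and applying the homography is a standard operation. The coordinates below are illustrative placeholders, not calibration data.

```python
import cv2
import numpy as np

# Four corresponding points of the ring plane in the X-ray and video images.
xray_pts = np.array([[120, 80], [400, 90], [390, 300], [110, 310]], np.float32)
video_pts = np.array([[100, 60], [420, 75], [405, 330], [95, 340]], np.float32)

H, _ = cv2.findHomography(xray_pts, video_pts)  # use RANSAC with noisier point sets

def xray_to_video(point_xy):
    """Map an X-ray image point into the video image through the homography."""
    q = H @ np.array([point_xy[0], point_xy[1], 1.0])
    return q[:2] / q[2]

print(xray_to_video((250, 190)))  # guide the ring placement from the video view
```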

  18. An integrated multispectral video and environmental monitoring system for the study of coastal processes and the support of beach management operations

    NASA Astrophysics Data System (ADS)

    Ghionis, George; Trygonis, Vassilis; Karydis, Antonis; Vousdoukas, Michalis; Alexandrakis, George; Drakopoulos, Panos; Amdreadis, Olympos; Psarros, Fotis; Velegrakis, Antonis; Poulos, Serafim

    2016-04-01

    Effective beach management requires environmental assessments that are based on sound science, are cost-effective and are available to beach users and managers in an accessible, timely and transparent manner. The most common problems are: 1) The available field data are scarce and of sub-optimal spatio-temporal resolution and coverage, 2) our understanding of local beach processes needs to be improved in order to accurately model/forecast beach dynamics under a changing climate, and 3) the information provided by coastal scientists/engineers in the form of data, models and scientific interpretation is often too complicated to be of direct use by coastal managers/decision makers. A multispectral video system has been developed, consisting of one or more video cameras operating in the visible part of the spectrum, a passive near-infrared (NIR) camera, an active NIR camera system, a thermal infrared camera and a spherical video camera, coupled with innovative image processing algorithms and a telemetric system for the monitoring of coastal environmental parameters. The complete system has the capability to record, process and communicate (in quasi-real time) high frequency information on shoreline position, wave breaking zones, wave run-up, erosion hot spots along the shoreline, nearshore wave height, turbidity, underwater visibility, wind speed and direction, air and sea temperature, solar radiation, UV radiation, relative humidity, barometric pressure and rainfall. An innovative, remotely-controlled interactive visual monitoring system, based on the spherical video camera (with 360° field of view), combines the video streams from all cameras and can be used by beach managers to monitor (in real time) beach user numbers, flow activities and safety at beaches of high touristic value. The high resolution near infrared cameras permit 24-hour monitoring of beach processes, while the thermal camera provides information on beach sediment temperature and moisture, can detect upwelling in the nearshore zone, and enhances the safety of beach users. All data can be presented in real- or quasi-real time and are stored for future analysis and training/validation of coastal processes models. Acknowledgements: This work was supported by the project BEACHTOUR (11SYN-8-1466) of the Operational Program "Cooperation 2011, Competitiveness and Entrepreneurship", co-funded by the European Regional Development Fund and the Greek Ministry of Education and Religious Affairs.

  19. Task–Technology Fit of Video Telehealth for Nurses in an Outpatient Clinic Setting

    PubMed Central

    Finkelstein, Stanley M.

    2014-01-01

    Abstract Background: Incorporating telehealth into outpatient care delivery supports management of consumer health between clinic visits. Task–technology fit is a framework for understanding how technology helps and/or hinders a person during work processes. Evaluating the task–technology fit of video telehealth for personnel working in a pediatric outpatient clinic and providing care between clinic visits ensures the information provided matches the information needed to support work processes. Materials and Methods: The workflow of advanced practice registered nurse (APRN) care coordination provided via telephone and video telehealth was described and measured using a mixed-methods workflow analysis protocol that incorporated cognitive ethnography and time–motion study. Qualitative and quantitative results were merged and analyzed within the task–technology fit framework to determine the workflow fit of video telehealth for APRN care coordination. Results: Incorporating video telehealth into APRN care coordination workflow provided visual information unavailable during telephone interactions. Despite additional tasks and interactions needed to obtain the visual information, APRN workflow efficiency, as measured by time, was not significantly changed. Analyzed within the task–technology fit framework, the increased visual information afforded by video telehealth supported the assessment and diagnostic information needs of the APRN. Conclusions: Telehealth must provide the right information to the right clinician at the right time. Evaluating task–technology fit using a mixed-methods protocol ensured rigorous analysis of fit within work processes and identified workflows that benefit most from the technology. PMID:24841219

  20. Automated Visual Event Detection, Tracking, and Data Management System for Cabled- Observatory Video

    NASA Astrophysics Data System (ADS)

    Edgington, D. R.; Cline, D. E.; Schlining, B.; Raymond, E.

    2008-12-01

    Ocean observatories and underwater video surveys have the potential to unlock important discoveries with new and existing camera systems. Yet the burden of video management and analysis often requires reducing the amount of video recorded through time-lapse video or similar methods. It's unknown how many digitized video data sets exist in the oceanographic community, but we suspect that many remain under-analyzed due to a lack of good tools or human resources to analyze the video. To help address this problem, the Automated Visual Event Detection (AVED) software and the Video Annotation and Reference System (VARS) have been under development at MBARI. For detecting interesting events in the video, the AVED software has been developed over the last 5 years. AVED is based on a neuromorphic-selective attention algorithm, modeled on the human vision system. Frames are decomposed into specific feature maps that are combined into a unique saliency map. This saliency map is then scanned to determine the most salient locations. The candidate salient locations are then segmented from the scene using algorithms suitable for the low, non-uniform light and marine snow typical of deep underwater video. For managing the AVED descriptions of the video, the VARS system provides an interface and database for describing, viewing, and cataloging the video. VARS was developed by MBARI for annotating deep-sea video data and is currently being used to describe over 3000 dives by our remotely operated vehicles (ROVs), making it well suited to this deepwater observatory application with only a few modifications. To meet the compute- and data-intensive job of video processing, a distributed heterogeneous network of computers is managed using the Condor workload management system. This system manages data storage, video transcoding, and AVED processing. Looking to the future, we see high-speed networks and Grid technology as an important element in addressing the problem of processing and accessing large video data sets.
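
    The neuromorphic center-surround step described above can be illustrated with a much-simplified Python/OpenCV sketch. This is not the AVED code: the two feature maps, the pyramid scales and the segmentation threshold below are illustrative assumptions, and real deep-sea footage would additionally need the low-light and marine-snow handling the abstract mentions.

```python
import cv2
import numpy as np

def saliency_map(frame):
    """Crude center-surround saliency from intensity and red-green opponency.

    Each feature map is the difference between a fine ('center') and a
    coarse ('surround') Gaussian blur; maps are normalized and averaged.
    """
    img = frame.astype(np.float32) / 255.0
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    intensity = np.abs(cv2.GaussianBlur(gray, (5, 5), 1.0)
                       - cv2.GaussianBlur(gray, (31, 31), 8.0))
    rg = np.abs(img[:, :, 2] - img[:, :, 1])          # red-green opponency (BGR)
    rg = np.abs(cv2.GaussianBlur(rg, (5, 5), 1.0)
                - cv2.GaussianBlur(rg, (31, 31), 8.0))
    maps = []
    for m in (intensity, rg):
        maps.append((m - m.min()) / (m.max() - m.min() + 1e-9))
    return (maps[0] + maps[1]) / 2.0

def salient_locations(frame, frac=0.5):
    """Segment candidate event locations by thresholding the saliency map."""
    sal = saliency_map(frame)
    mask = (sal > frac * sal.max()).astype(np.uint8) * 255
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(c) for c in centroids[1:]]          # label 0 is background
```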

  1. Dynamic video encryption algorithm for H.264/AVC based on a spatiotemporal chaos system.

    PubMed

    Xu, Hui; Tong, Xiao-Jun; Zhang, Miao; Wang, Zhu; Li, Ling-Hao

    2016-06-01

    Video encryption schemes mostly employ the selective encryption method to encrypt parts of important and sensitive video information, aiming to ensure real-time performance and encryption efficiency. The classic block cipher is not applicable to video encryption due to its high computational overhead. In this paper, we propose an encryption selection control module that dynamically encrypts video syntax elements under the control of a chaotic pseudorandom sequence. A novel spatiotemporal chaos system and a binarization method are used to generate a key stream for encrypting the chosen syntax elements. The proposed scheme enhances the resistance against attacks through the dynamic encryption process and a high-security stream cipher. Experimental results show that the proposed method exhibits high security and high efficiency with little effect on the compression ratio and time cost.
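
    As an illustration of the keystream idea only (not the paper's spatiotemporal chaos system, its binarization method, or its H.264/AVC syntax-element selection), the following Python sketch drives an XOR stream cipher with a single logistic map; the map parameters and the byte-level quantization are assumptions.

```python
def logistic_keystream(x0, mu, n):
    """n pseudorandom bytes from the logistic map x -> mu*x*(1-x).

    A single 1-D map, far simpler than the spatiotemporal system in the paper.
    """
    x, out = x0, bytearray()
    for _ in range(n):
        x = mu * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)   # quantize the chaotic state to a byte
    return bytes(out)

def encrypt_elements(elements, key=(0.3141592, 3.99)):
    """XOR stand-in 'syntax element' bytes with the chaotic keystream.

    `elements` plays the role of the sensitive bitstream fields chosen by an
    encryption selection control module such as the one described above.
    """
    ks = logistic_keystream(key[0], key[1], len(elements))
    return bytes(e ^ k for e, k in zip(elements, ks))

# XOR with the same keystream is involutive, so decryption is the same call.
payload = b"\x12\x34\x56\x78"
assert encrypt_elements(encrypt_elements(payload)) == payload
```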

  2. Video training with peer feedback in real-time consultation: acceptability and feasibility in a general-practice setting.

    PubMed

    Eeckhout, Thomas; Gerits, Michiel; Bouquillon, Dries; Schoenmakers, Birgitte

    2016-08-01

    For many years, teaching and training in communication skills have been cornerstones of the medical education curriculum. Although video recording in a real-time consultation is expected to contribute positively to the learning process, research on this topic is scarce. This study focuses on the feasibility and acceptability of video recording during real-time patient encounters performed by general practitioner (GP) trainees. The primary research question addressed the experiences (defined as feasibility and acceptability) of GP trainees in video-recorded vocational training in a general practice. The second research question addressed the appraisal of this training. The procedure of video-recorded training has been developed, refined and validated since 1974 by the Academic Teaching Practice of Leuven (Faculty of Medicine of the University of Leuven). The study is set up as a cross-sectional survey without follow-up. Outcome measures were defined as 'feasibility and acceptability' (experiences of trainees) of the video-recorded training and were assessed by a structured questionnaire with the opportunity to add free-text comments. The studied sample consisted of all first-phase trainees of the GP Master 2011-2012 at the University of Leuven. Almost 70% of the trainees were positive about recording consultations. Nevertheless, over 60% believed that patients felt uncomfortable during the video-recorded encounter. Almost 90% noticed an improvement in their own communication skills through observation and evaluation of the recordings. Most students (85%) experienced logistical issues as a major barrier to performing video consultations on a regular basis. This study lays the foundation stone for further exploration of video training in real-time consultations. Both students and teachers in the field acknowledge that the power of imaging is underestimated in the training of communication and vocational skills. The development of supportive material and protocols will lower thresholds. Time investment for teachers could be tempered by training students as peer tutors and by accurate scheduling of the video training. Further research should focus on long-term efficacy and efficiency in terms of learning outcomes and on the facilitation of the technical process. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  3. Video training with peer feedback in real-time consultation: acceptability and feasibility in a general-practice setting

    PubMed Central

    Eeckhout, Thomas; Gerits, Michiel; Bouquillon, Dries; Schoenmakers, Birgitte

    2016-01-01

    Objective For many years, teaching and training in communication skills have been cornerstones of the medical education curriculum. Although video recording in a real-time consultation is expected to contribute positively to the learning process, research on this topic is scarce. This study focuses on the feasibility and acceptability of video recording during real-time patient encounters performed by general practitioner (GP) trainees. Method The primary research question addressed the experiences (defined as feasibility and acceptability) of GP trainees in video-recorded vocational training in a general practice. The second research question addressed the appraisal of this training. The procedure of video-recorded training has been developed, refined and validated since 1974 by the Academic Teaching Practice of Leuven (Faculty of Medicine of the University of Leuven). The study is set up as a cross-sectional survey without follow-up. Outcome measures were defined as ‘feasibility and acceptability’ (experiences of trainees) of the video-recorded training and were assessed by a structured questionnaire with the opportunity to add free-text comments. The studied sample consisted of all first-phase trainees of the GP Master 2011–2012 at the University of Leuven. Results Almost 70% of the trainees were positive about recording consultations. Nevertheless, over 60% believed that patients felt uncomfortable during the video-recorded encounter. Almost 90% noticed an improvement in their own communication skills through observation and evaluation of the recordings. Most students (85%) experienced logistical issues as a major barrier to performing video consultations on a regular basis. Conclusions This study lays the foundation stone for further exploration of video training in real-time consultations. Both students and teachers in the field acknowledge that the power of imaging is underestimated in the training of communication and vocational skills. The development of supportive material and protocols will lower thresholds. Practice implications Time investment for teachers could be tempered by training students as peer tutors and by accurate scheduling of the video training. Further research should focus on long-term efficacy and efficiency in terms of learning outcomes and on the facilitation of the technical process. PMID:26842970

  4. Avionics-compatible video facial cognizer for detection of pilot incapacitation.

    PubMed

    Steffin, Morris

    2006-01-01

    High-acceleration loss of consciousness is a serious problem for military pilots. In this laboratory, a video cognizer has been developed that in real time detects facial changes closely coupled to the onset of loss of consciousness. Efficient algorithms are compatible with video digital signal processing hardware and are thus configurable on an autonomous single board that generates alarm triggers to activate autopilot, and is avionics-compatible.

  5. JPRS Report, Soviet Union, Political Affairs.

    DTIC Science & Technology

    1990-07-07

    an increase in video rental places, clubs and video viewing salons. In Kiev alone, there are more than 200 of them. Especially disquieting is the... In the process of being questioned, they explained they committed criminal acts under the influence of videos. We are also disturbed by the... grade publications, at times, I would even say, with an aftertaste of "porno" cannot be but disturbing. Remember our history. Ivan Dmitriyevich

  6. Motion-based video monitoring for early detection of livestock diseases: The case of African swine fever

    PubMed Central

    Martínez-Avilés, Marta; Ivorra, Benjamin; Martínez-López, Beatriz; Ramos, Ángel Manuel; Sánchez-Vizcaíno, José Manuel

    2017-01-01

    Early detection of infectious diseases can substantially reduce the health and economic impacts on livestock production. Here we describe a system for monitoring animal activity based on video and data processing techniques, in order to detect slowdown and weakening due to infection with African swine fever (ASF), one of the most significant threats to the pig industry. The system classifies and quantifies motion-based animal behaviour and daily activity in video sequences, allowing automated and non-intrusive surveillance in real-time. The aim of this system is to evaluate significant changes in animals’ motion after being experimentally infected with ASF virus. Indeed, pig mobility declined progressively and fell significantly below pre-infection levels starting at four days after infection at a confidence level of 95%. Furthermore, daily motion decreased in infected animals by approximately 10% before the detection of the disease by clinical signs. These results show the promise of video processing techniques for real-time early detection of livestock infectious diseases. PMID:28877181

  7. Video capture of clinical care to enhance patient safety

    PubMed Central

    Weinger, M; Gonzales, D; Slagle, J; Syeed, M

    2004-01-01

    

 Experience from other domains suggests that videotaping and analyzing actual clinical care can provide valuable insights for enhancing patient safety through improvements in the process of care. Methods are described for the videotaping and analysis of clinical care using a high quality portable multi-angle digital video system that enables simultaneous capture of vital signs and time code synchronization of all data streams. An observer can conduct clinician performance assessment (such as workload measurements or behavioral task analysis) either in real time (during videotaping) or while viewing previously recorded videotapes. Supplemental data are synchronized with the video record and stored electronically in a hierarchical database. The video records are transferred to DVD, resulting in a small, cheap, and accessible archive. A number of technical and logistical issues are discussed, including consent of patients and clinicians, maintaining subject privacy and confidentiality, and data security. Using anesthesiology as a test environment, over 270 clinical cases (872 hours) have been successfully videotaped and processed using the system. PMID:15069222

  8. StreaMorph: A Case for Synthesizing Energy-Efficient Adaptive Programs Using High-Level Abstractions

    DTIC Science & Technology

    2013-08-12

    technique when switching from using eight cores to one core. 1. Introduction Real-time streaming of media data is growing in popularity. This includes... both capture and processing of real-time video and audio, and delivery of video and audio from servers; recent usage numbers show over 800 million... source of data, when that source is a real-time source, and it is generally not necessary to get ahead of the sink. Even with real-time sources and sinks

  9. Real-time detection and data acquisition system for the left ventricular outline. Ph.D. Thesis - Stanford Univ.

    NASA Technical Reports Server (NTRS)

    Reiber, J. H. C.

    1976-01-01

    To automate the data acquisition procedure, a real-time contour detection and data acquisition system for the left ventricular outline was developed using video techniques. The X-ray image of the contrast-filled left ventricle is stored for subsequent processing on film (cineangiogram), video tape or disc. The cineangiogram is converted into video format using a television camera. The video signal from either the TV camera, video tape or disc is the input signal to the system. The contour detection is based on a dynamic thresholding technique. Since the left ventricular outline is a smooth continuous function, for each contour side a narrow expectation window is defined in which the next borderpoint will be detected. A computer interface was designed and built for the online acquisition of the coordinates using a PDP-12 computer. The advantage of this system over other available systems is its potential for online, real-time acquisition of the left ventricular size and shape during angiocardiography.
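
    The expectation-window idea is compact enough to sketch. The fragment below traces one side of the outline down a grayscale frame, searching each row only within a narrow window around the previous row's border point; the window width, the fractional threshold and the 1-D edge criterion are illustrative assumptions, not the original hardware's logic.

```python
import numpy as np

def trace_border(frame, seed_col, window=8, drop=0.5):
    """Trace one contour side of a contrast-filled ventricle, row by row.

    Exploits the smoothness of the outline: each row's border is searched
    only inside an expectation window centred on the previous border column,
    with a dynamic per-row threshold set as a fraction of local intensity.
    """
    rows, cols = frame.shape
    edge, col = [], seed_col
    for r in range(rows):
        lo, hi = max(0, col - window), min(cols - 1, col + window)
        line = frame[r, lo:hi + 1].astype(np.float32)
        below = np.nonzero(line < drop * line.max())[0]
        if below.size == 0:
            break                       # border lost; stop tracing
        col = lo + int(below[0])        # first pixel darker than the threshold
        edge.append((r, col))
    return edge
```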

  10. MPCM: a hardware coder for super slow motion video sequences

    NASA Astrophysics Data System (ADS)

    Alcocer, Estefanía; López-Granado, Otoniel; Gutierrez, Roberto; Malumbres, Manuel P.

    2013-12-01

    In the last decade, the improvements in VLSI levels and image sensor technologies have led to a frenetic rush to provide image sensors with higher resolutions and faster frame rates. As a result, video devices were designed to capture real-time video at high-resolution formats with frame rates reaching 1,000 fps and beyond. These ultrahigh-speed video cameras are widely used in scientific and industrial applications, such as car crash tests, combustion research, materials research and testing, fluid dynamics, and flow visualization that demand real-time video capturing at extremely high frame rates with high-definition formats. Therefore, data storage capability, communication bandwidth, processing time, and power consumption are critical parameters that should be carefully considered in their design. In this paper, we propose a fast FPGA implementation of a simple codec called modulo-pulse code modulation (MPCM) which is able to reduce the bandwidth requirements up to 1.7 times at the same image quality when compared with PCM coding. This allows current high-speed cameras to capture in a continuous manner through a 40-Gbit Ethernet point-to-point access.
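
    The modulo coding idea itself fits in a few lines. The sketch below is a simplified scalar MPCM with a previous-pixel predictor, not the paper's FPGA design: each sample is transmitted modulo 2^k, and the decoder picks the congruent value nearest its prediction, so reconstruction is exact whenever neighbouring pixels differ by less than half the modulus.

```python
import numpy as np

def mpcm_encode(row, bits=5):
    """Keep only the `bits` least significant bits of each pixel."""
    return (row.astype(np.int32) % (1 << bits)).astype(np.uint8)

def mpcm_decode(codes, bits=5):
    """Reconstruct pixels as the values congruent to their codes (mod 2^bits)
    closest to a previous-pixel prediction (assume pixel 0 is sent verbatim)."""
    M = 1 << bits
    out = np.empty(codes.shape, dtype=np.int32)
    pred = int(codes[0])
    for i, c in enumerate(codes.astype(np.int32)):
        v = c + M * int(round((pred - c) / M))   # nearest value congruent to c mod M
        v = min(max(v, 0), 255)
        out[i] = pred = v
    return out.astype(np.uint8)
```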

  11. Engaging narratives evoke similar neural activity and lead to similar time perception.

    PubMed

    Cohen, Samantha S; Henin, Simon; Parra, Lucas C

    2017-07-04

    It is said that we lose track of time - that "time flies" - when we are engrossed in a story. How does engagement with the story cause this distorted perception of time, and what are its neural correlates? People commit both time and attentional resources to an engaging stimulus. For narrative videos, attentional engagement can be represented as the level of similarity between the electroencephalographic responses of different viewers. Here we show that this measure of neural engagement predicted the duration of time that viewers were willing to commit to narrative videos. Contrary to popular wisdom, engagement did not distort the average perception of time duration. Rather, more similar brain responses resulted in a more uniform perception of time across viewers. These findings suggest that by capturing the attention of an audience, narrative videos bring both neural processing and the subjective perception of time into synchrony.
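
    For a single channel, the inter-viewer similarity underlying this result can be approximated by the mean pairwise Pearson correlation across viewers, as in the numpy sketch below; the study itself computes correlated components of multi-channel EEG, so this is only a minimal stand-in.

```python
import numpy as np
from itertools import combinations

def mean_pairwise_isc(responses):
    """Mean pairwise Pearson correlation for one channel.

    `responses` has shape (n_viewers, n_samples); higher values indicate
    more similar (more 'engaged') neural responses across the audience.
    """
    pairs = combinations(range(responses.shape[0]), 2)
    r = [np.corrcoef(responses[i], responses[j])[0, 1] for i, j in pairs]
    return float(np.mean(r))
```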

  12. Design and Smartphone-Based Implementation of a Chaotic Video Communication Scheme via WAN Remote Transmission

    NASA Astrophysics Data System (ADS)

    Lin, Zhuosheng; Yu, Simin; Li, Chengqing; Lü, Jinhu; Wang, Qianxue

    This paper proposes a chaotic secure video remote communication scheme that can operate over real WAN networks, and implements it on a smartphone hardware platform. First, a joint encryption and compression scheme is designed by embedding a chaotic encryption scheme into the MJPG-Streamer source codes. Then, multiuser smartphone communications between the sender and the receiver are implemented via WAN remote transmission. Finally, the transmitted video data are received with the given IP address and port in an Android smartphone. It should be noted that this is the first time that chaotic video encryption schemes have been implemented on such a hardware platform. The experimental results demonstrate that the technical challenges of hardware implementation of secure video communication are successfully solved, reaching a balance amongst sufficient security level, real-time processing of massive video data, and utilization of available resources in the hardware environment. The proposed scheme can serve as a good application example of chaotic secure communications for smartphones and other mobile facilities in the future.

  13. Multimedia applications in nursing curriculum: the process of producing streaming videos for medication administration skills.

    PubMed

    Sowan, Azizeh K

    2014-07-01

    Streaming videos (SVs) are commonly used multimedia applications in clinical health education. However, there are several negative aspects related to the production and delivery of SVs. Only a few published studies have included sufficient descriptions of the videos, the production process and design innovations. This paper describes the production of innovative SVs for medication administration skills for undergraduate nursing students at a public university in Jordan and focuses on the ethical and cultural issues in producing this type of learning resource. The curriculum development committee approved the modification of educational techniques for medication administration procedures to include SVs within an interactive web-based learning environment. The production process of the videos adhered to established principles for "protecting patients' rights when filming and recording" and included preproduction, production and postproduction phases. Medication administration skills were videotaped in a skills laboratory where they are usually taught to students and also in a hospital setting with real patients. The lab videos included critical points and Do's and Don'ts, and the hospital videos fostered real-world practices. The running times of the videos were kept reasonable to avoid technical difficulties in access. Eight SVs were produced that covered different types of medication administration skills. The production of SVs required the collaborative efforts of experts in IT and multimedia, nursing and informatics educators, and nursing care providers. Results showed that the videos were well received by students and by the instructors who taught the course. The process of producing the videos in this project can be used as a valuable framework for schools considering utilizing multimedia applications in teaching. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  14. Achieving real-time capsule endoscopy (CE) video visualization through panoramic imaging

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Xie, Jean; Mui, Peter; Leighton, Jonathan A.

    2013-02-01

    In this paper, we present a novel real-time capsule endoscopy (CE) video visualization concept based on panoramic imaging. Typical CE videos run about 8 hours and are manually reviewed by physicians to locate diseases such as bleedings and polyps. To date, there is no commercially available tool capable of providing stabilized and processed CE video that is easy to analyze in real time, which places a heavy burden on physicians' disease-finding efforts. In fact, since the CE camera sensor has a limited forward-looking view and a low image frame rate (typically 2 frames per second), and captures very close-range imagery of the GI tract surface, it is no surprise that traditional visualization methods based on tracking and registration often fail. This paper presents a novel concept for real-time CE video stabilization and display. Instead of directly working on traditional forward-looking FOV (field of view) images, we work on panoramic images to bypass many problems facing traditional imaging modalities. Methods for panoramic image generation based on optical lens principles, leading to real-time data visualization, are presented. In addition, non-rigid panoramic image registration methods are discussed.

  15. Yellow River Icicle Hazard Dynamic Monitoring Using UAV Aerial Remote Sensing Technology

    NASA Astrophysics Data System (ADS)

    Wang, H. B.; Wang, G. H.; Tang, X. M.; Li, C. H.

    2014-02-01

    Monitoring the response of the Yellow River icicle hazard to change requires accurate and repeatable topographic surveys. A new method based on unmanned aerial vehicle (UAV) aerial remote sensing technology is proposed for real-time data processing in Yellow River icicle hazard dynamic monitoring. The monitoring area is located in the Yellow River ice intensive-care area in southern BaoTou of the Inner Mongolia autonomous region. The monitoring period ran from 20 February to 30 March 2013. Using the proposed video data processing method, automatic extraction of 1,832 video key frames covering an area of 7.8 km² took 34.786 seconds. The stitching and correcting time was 122.34 seconds and the accuracy was better than 0.5 m. By comparing the precisely processed stitched images from the video sequences, the method determines changes in the Yellow River ice and accurately locates the ice barrier, improving on the traditional visual method by more than 100 times. The results provide accurate decision-support information for the Yellow River ice prevention headquarters. Finally, the effect of the dam break was monitored repeatedly, and the ice breakup was located to five-meter accuracy through accurate monitoring and evaluation analysis.
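
    The key-frame selection step (the geo-registration and stitching stages are beyond a short example) could be sketched generically as below; the mean-absolute-difference motion score and its threshold are illustrative assumptions, not the authors' method.

```python
import cv2

def extract_key_frames(path, min_shift=30.0):
    """Keep a frame whenever the scene has moved enough since the last key frame.

    Motion is scored as the mean absolute gray-level difference; a real UAV
    pipeline would use overlap computed from registration instead.
    """
    cap = cv2.VideoCapture(path)
    keys, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is None or cv2.absdiff(gray, prev).mean() > min_shift:
            keys.append(frame)
            prev = gray
    cap.release()
    return keys
```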

  16. Real-time strategy game training: emergence of a cognitive flexibility trait.

    PubMed

    Glass, Brian D; Maddox, W Todd; Love, Bradley C

    2013-01-01

    Training in action video games can increase the speed of perceptual processing. However, it is unknown whether video-game training can lead to broad-based changes in higher-level competencies such as cognitive flexibility, a core and neurally distributed component of cognition. To determine whether video gaming can enhance cognitive flexibility and, if so, why these changes occur, the current study compares two versions of a real-time strategy (RTS) game. Using a meta-analytic Bayes factor approach, we found that the gaming condition that emphasized maintenance and rapid switching between multiple information and action sources led to a large increase in cognitive flexibility as measured by a wide array of non-video gaming tasks. Theoretically, the results suggest that the distributed brain networks supporting cognitive flexibility can be tuned by engrossing video game experience that stresses maintenance and rapid manipulation of multiple information sources. Practically, these results suggest avenues for increasing cognitive function.

  17. Real-Time Strategy Game Training: Emergence of a Cognitive Flexibility Trait

    PubMed Central

    Glass, Brian D.; Maddox, W. Todd; Love, Bradley C.

    2013-01-01

    Training in action video games can increase the speed of perceptual processing. However, it is unknown whether video-game training can lead to broad-based changes in higher-level competencies such as cognitive flexibility, a core and neurally distributed component of cognition. To determine whether video gaming can enhance cognitive flexibility and, if so, why these changes occur, the current study compares two versions of a real-time strategy (RTS) game. Using a meta-analytic Bayes factor approach, we found that the gaming condition that emphasized maintenance and rapid switching between multiple information and action sources led to a large increase in cognitive flexibility as measured by a wide array of non-video gaming tasks. Theoretically, the results suggest that the distributed brain networks supporting cognitive flexibility can be tuned by engrossing video game experience that stresses maintenance and rapid manipulation of multiple information sources. Practically, these results suggest avenues for increasing cognitive function. PMID:23950921

  18. The Simple Video Coder: A free tool for efficiently coding social video data.

    PubMed

    Barto, Daniel; Bird, Clark W; Hamilton, Derek A; Fink, Brandi C

    2017-08-01

    Videotaping of experimental sessions is a common practice across many disciplines of psychology, ranging from clinical therapy, to developmental science, to animal research. Audio-visual data are a rich source of information that can be easily recorded; however, analysis of the recordings presents a major obstacle to project completion. Coding behavior is time-consuming and often requires ad-hoc training of a student coder. In addition, existing software is either prohibitively expensive or cumbersome, which leaves researchers with inadequate tools to quickly process video data. We offer the Simple Video Coder: free, open-source software for behavior coding that is flexible in accommodating different experimental designs, is intuitive for students to use, and produces outcome measures of event timing, frequency, and duration. Finally, the software also offers extraction tools to splice video into coded segments suitable for training future human coders or for use as input for pattern classification algorithms.

  19. Image processing for improved eye-tracking accuracy

    NASA Technical Reports Server (NTRS)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)

    1997-01-01

    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
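
    As one concrete example of how off-line analysis buys resolution, the sketch below estimates the pupil centre as the centroid of the darkest pixels, which averages over many boundary pixels and can reach sub-pixel precision; the dark-pixel fraction is an illustrative parameter, not the authors' algorithm.

```python
import numpy as np

def pupil_center(gray, dark_frac=0.15):
    """Sub-pixel pupil centre from one grayscale video frame.

    Thresholds the darkest `dark_frac` of pixels (the pupil in a typical
    near-infrared eye image) and returns the centroid of the mask.
    """
    level = np.quantile(gray, dark_frac)
    ys, xs = np.nonzero(gray <= level)
    return xs.mean(), ys.mean()        # (x, y) in image coordinates
```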

  20. Video-Based Fingerprint Verification

    PubMed Central

    Qin, Wei; Yin, Yilong; Liu, Lili

    2013-01-01

    Conventional fingerprint verification systems use only static information. In this paper, fingerprint videos, which contain dynamic information, are utilized for verification. Fingerprint videos are acquired by the same capture device that acquires conventional fingerprint images, and the user experience of providing a fingerprint video is the same as that of providing a single impression. After preprocessing and aligning processes, “inside similarity” and “outside similarity” are defined and calculated to take advantage of both dynamic and static information contained in fingerprint videos. Match scores between two matching fingerprint videos are then calculated by combining the two kinds of similarity. Experimental results show that the proposed video-based method leads to a relative reduction of 60 percent in the equal error rate (EER) in comparison to the conventional single impression-based method. We also analyze the time complexity of our method when different combinations of strategies are used. Our method still outperforms the conventional method, even if both methods have the same time complexity. Finally, experimental results demonstrate that the proposed video-based method can lead to better accuracy than the multiple impressions fusion method, and the proposed method has a much lower false acceptance rate (FAR) when the false rejection rate (FRR) is quite low. PMID:24008283

  1. Prediction of transmission distortion for wireless video communication: analysis.

    PubMed

    Chen, Zhifeng; Wu, Dapeng

    2012-03-01

    Transmitting video over wireless is a challenging problem since video may be seriously distorted due to packet errors caused by wireless channels. The capability of predicting transmission distortion (i.e., video distortion caused by packet errors) can assist in designing video encoding and transmission schemes that achieve maximum video quality or minimum end-to-end video distortion. This paper is aimed at deriving formulas for predicting transmission distortion. The contribution of this paper is twofold. First, we identify the governing law that describes how the transmission distortion process evolves over time and analytically derive the transmission distortion formula as a closed-form function of video frame statistics, channel error statistics, and system parameters. Second, we identify, for the first time, two important properties of transmission distortion. The first property is that the clipping noise, which is produced by nonlinear clipping, causes decay of propagated error. The second property is that the correlation between motion-vector concealment error and propagated error is negative and has dominant impact on transmission distortion, compared with other correlations. Due to these two properties and elegant error/distortion decomposition, our formula provides not only more accurate prediction but also lower complexity than the existing methods.

  2. A web-based video annotation system for crowdsourcing surveillance videos

    NASA Astrophysics Data System (ADS)

    Gadgil, Neeraj J.; Tahboub, Khalid; Kirsh, David; Delp, Edward J.

    2014-03-01

    Video surveillance systems are of great value in preventing threats and identifying/investigating criminal activities. Manual analysis of a huge amount of video data from several cameras over a long period of time often becomes impracticable. The use of automatic detection methods can be challenging when the video contains many objects with complex motion and occlusions. Crowdsourcing has been proposed as an effective method for utilizing human intelligence to perform several tasks. Our system provides a platform for the annotation of surveillance video in an organized and controlled way. One can monitor a surveillance system using a set of tools such as training modules, roles and labels, and task management. This system can be used in a real-time streaming mode to detect any potential threats or as an investigative tool to analyze past events. Annotators can annotate video contents assigned to them for suspicious activity or criminal acts. First responders are then able to view the collective annotations and receive email alerts about a newly reported incident. They can also keep track of the annotators' training performance, manage their activities and reward their success. By providing this system, the process of video analysis is made more efficient.

  3. Novel Sessile Drop Software for Quantitative Estimation of Slag Foaming in Carbon/Slag Interactions

    NASA Astrophysics Data System (ADS)

    Khanna, Rita; Rahman, Mahfuzur; Leow, Richard; Sahajwalla, Veena

    2007-08-01

    Novel video-processing software has been developed for the sessile drop technique for a rapid and quantitative estimation of slag foaming. The data processing was carried out in two stages: the first stage involved the initial transformation of digital video/audio signals into a format compatible with computing software, and the second stage involved the computation of slag droplet volume and area of contact in a chosen video frame. Experimental results are presented on slag foaming in a synthetic graphite/slag system at 1550 °C. This technique can be used for determining the extent and stability of foam as a function of time.

  4. Video Guidance Sensor and Time-of-Flight Rangefinder

    NASA Technical Reports Server (NTRS)

    Bryan, Thomas; Howard, Richard; Bell, Joseph L.; Roe, Fred D.; Book, Michael L.

    2007-01-01

    A proposed video guidance sensor (VGS) would be based mostly on the hardware and software of a prior Advanced VGS (AVGS), with some additions to enable it to function as a time-of-flight rangefinder (in contradistinction to a triangulation or image-processing rangefinder). It would typically be used at distances of the order of 2 or 3 kilometers, where a typical target would appear in a video image as a single blob, making it possible to extract the direction to the target (but not the orientation of the target or the distance to the target) from a video image of light reflected from the target. As described in several previous NASA Tech Briefs articles, an AVGS system is an optoelectronic system that provides guidance for automated docking of two vehicles. In the original application, the two vehicles are spacecraft, but the basic principles of design and operation of the system are applicable to aircraft, robots, objects maneuvered by cranes, or other objects that may be required to be aligned and brought together automatically or under remote control. In a prior AVGS system of the type upon which the now-proposed VGS is largely based, the tracked vehicle is equipped with one or more passive targets that reflect light from one or more continuous-wave laser diode(s) on the tracking vehicle, a video camera on the tracking vehicle acquires images of the targets in the reflected laser light, the video images are digitized, and the image data are processed to obtain the direction to the target. The design concept of the proposed VGS does not call for any memory or processor hardware beyond that already present in the prior AVGS, but does call for some additional hardware and some additional software. It also calls for assignment of some additional tasks to two subsystems that are parts of the prior VGS: a field-programmable gate array (FPGA) that generates timing and control signals, and a digital signal processor (DSP) that processes the digitized video images. The additional timing and control signals generated by the FPGA would cause the VGS to alternate between an imaging (direction-finding) mode and a time-of-flight (range-finding) mode and would govern operation in the range-finding mode.

  5. Experimental application of simulation tools for evaluating UAV video change detection

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Bartelsen, Jan

    2015-10-01

    Change detection is one of the most important tasks when unmanned aerial vehicles (UAV) are used for video reconnaissance and surveillance. In this paper, we address changes on short time scale, i.e. the observations are taken within time distances of a few hours. Each observation is a short video sequence corresponding to the near-nadir overflight of the UAV above the interesting area and the relevant changes are e.g. recently added or removed objects. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples for non-relevant changes are versatile objects like trees and compression or transmission artifacts. To enable the usage of an automatic change detection within an interactive workflow of an UAV video exploitation system, an evaluation and assessment procedure has to be performed. Large video data sets which contain many relevant objects with varying scene background and altering influence parameters (e.g. image quality, sensor and flight parameters) including image metadata and ground truth data are necessary for a comprehensive evaluation. Since the acquisition of real video data is limited by cost and time constraints, from our point of view, the generation of synthetic data by simulation tools has to be considered. In this paper the processing chain of Saur et al. (2014) [1] and the interactive workflow for video change detection is described. We have selected the commercial simulation environment Virtual Battle Space 3 (VBS3) to generate synthetic data. For an experimental setup, an example scenario "road monitoring" has been defined and several video clips have been produced with varying flight and sensor parameters and varying objects in the scene. Image registration and change mask extraction, both components of the processing chain, are applied to corresponding frames of different video clips. For the selected examples, the images could be registered, the modelled changes could be extracted and the artifacts of the image rendering considered as noise (slight differences of heading angles, disparity of vegetation, 3D parallax) could be suppressed. We conclude that these image data could be considered to be realistic enough to serve as evaluation data for the selected processing components. Future work will extend the evaluation to other influence parameters and may include the human operator for mission planning and sensor control.
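
    The two processing-chain components exercised here, image registration and change-mask extraction, can be sketched generically with OpenCV as below; the feature count, RANSAC tolerance and difference threshold are illustrative assumptions, not the parameters of the cited chain.

```python
import cv2
import numpy as np

def change_mask(img_ref, img_new, diff_thresh=40):
    """Register img_new onto img_ref, then threshold the absolute difference.

    Both inputs are single-channel (grayscale) uint8 frames from two passes
    over the same area. ORB features plus a RANSAC homography handle the
    registration; morphological opening suppresses small registration noise.
    """
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img_ref, None)
    k2, d2 = orb.detectAndCompute(img_new, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:500]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    warped = cv2.warpPerspective(img_new, H, img_ref.shape[1::-1])
    diff = cv2.absdiff(img_ref, warped)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
```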

  6. Automated Generation of Geo-Referenced Mosaics From Video Data Collected by Deep-Submergence Vehicles: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Rhzanov, Y.; Beaulieu, S.; Soule, S. A.; Shank, T.; Fornari, D.; Mayer, L. A.

    2005-12-01

    Many advances in understanding geologic, tectonic, biologic, and sedimentologic processes in the deep ocean are facilitated by direct observation of the seafloor. However, making such observations is both difficult and expensive. Optical systems (e.g., video, still camera, or direct observation) will always be constrained by the severe attenuation of light in the deep ocean, limiting the field of view to distances that are typically less than 10 meters. Acoustic systems can 'see' much larger areas, but at the cost of spatial resolution. Ultimately, scientists want to study and observe deep-sea processes in the same way we do land-based phenomena so that the spatial distribution and juxtaposition of processes and features can be resolved. We have begun development of algorithms that will, in near real-time, generate mosaics from video collected by deep-submergence vehicles. Mosaics consist of >>10 video frames and can cover 100's of square-meters. This work builds on a publicly available still and video mosaicking software package developed by Rzhanov and Mayer. Here we present the results of initial tests of data collection methodologies (e.g., transects across the seafloor and panoramas across features of interest), algorithm application, and GIS integration conducted during a recent cruise to the Eastern Galapagos Spreading Center (0 deg N, 86 deg W). We have developed a GIS database for the region that will act as a means to access and display mosaics within a geospatially-referenced framework. We have constructed numerous mosaics using both video and still imagery and assessed the quality of the mosaics (including registration errors) under different lighting conditions and with different navigation procedures. We have begun to develop algorithms for efficient and timely mosaicking of collected video as well as integration with navigation data for georeferencing the mosaics. Initial results indicate that operators must be properly versed in the control of the video systems as well as maintaining vehicle attitude and altitude in order to achieve the best results possible.

  7. Method and system for enabling real-time speckle processing using hardware platforms

    NASA Technical Reports Server (NTRS)

    Ortiz, Fernando E. (Inventor); Kelmelis, Eric (Inventor); Durbano, James P. (Inventor); Curt, Peterson F. (Inventor)

    2012-01-01

    An accelerator for the speckle atmospheric compensation algorithm may enable real-time speckle processing of video feeds that may enable the speckle algorithm to be applied in numerous real-time applications. The accelerator may be implemented in various forms, including hardware, software, and/or machine-readable media.

  8. Unified transform architecture for AVC, AVS, VC-1 and HEVC high-performance codecs

    NASA Astrophysics Data System (ADS)

    Dias, Tiago; Roma, Nuno; Sousa, Leonel

    2014-12-01

    A unified architecture for fast and efficient computation of the set of two-dimensional (2-D) transforms adopted by the most recent state-of-the-art digital video standards is presented in this paper. In contrast to other designs with similar functionality, the presented architecture is built on a scalable, modular and completely configurable processing structure. This flexible structure not only makes it easy to reconfigure the architecture to support different transform kernels, but also permits resizing it to efficiently support transforms of different orders (e.g. order-4, order-8, order-16 and order-32). Consequently, not only is it highly suitable for realizing high-performance multi-standard transform cores, but it also offers highly efficient implementations of specialized processing structures addressing only the reduced subset of transforms used by a specific video standard. The experimental results obtained by prototyping several configurations of this processing structure in a Xilinx Virtex-7 FPGA show the superior performance and hardware efficiency levels provided by the proposed unified architecture for the implementation of transform cores for the Advanced Video Coding (AVC), Audio Video coding Standard (AVS), VC-1 and High Efficiency Video Coding (HEVC) standards. In addition, such results also demonstrate the ability of this processing structure to realize multi-standard transform cores supporting all the standards mentioned above that are capable of processing the 8k Ultra High Definition Television (UHDTV) video format (7,680 × 4,320 at 30 fps) in real time.
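
    As a concrete instance of the separable 2-D kernels such a unified core computes, the sketch below applies the well-known order-4 integer transform of H.264/AVC (quantization scaling omitted) as C·X·C^T; at this level of abstraction, reconfiguring the architecture amounts to swapping the coefficient matrix and resizing the 1-D passes.

```python
import numpy as np

# Order-4 integer transform core of H.264/AVC (quantization scaling omitted).
C4 = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]], dtype=np.int32)

def transform_4x4(block):
    """Separable 2-D transform: a column pass then a row pass, C @ X @ C.T.
    Hardware maps each 1-D pass onto the same reconfigurable structure."""
    return C4 @ block @ C4.T

coeffs = transform_4x4(np.arange(16, dtype=np.int32).reshape(4, 4))
```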

  9. Unmanned Vehicle Guidance Using Video Camera/Vehicle Model

    NASA Technical Reports Server (NTRS)

    Sutherland, T.

    1999-01-01

    A video guidance sensor (VGS) system has flown on both STS-87 and STS-95 to validate a single camera/target concept for vehicle navigation. The main part of the image algorithm was the subtraction of two consecutive images using software. For a nominal size image of 256 x 256 pixels this subtraction can take a large portion of the time between successive frames in standard rate video leaving very little time for other computations. The purpose of this project was to integrate the software subtraction into hardware to speed up the subtraction process and allow for more complex algorithms to be performed, both in hardware and software.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shoaf, S.; APS Engineering Support Division

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.

  11. The student with a thousand faces: from the ethics in video games to becoming a citizen

    NASA Astrophysics Data System (ADS)

    Muñoz, Yupanqui J.; El-Hani, Charbel N.

    2012-12-01

    Video games, as technological and cultural artifacts of considerable influence in contemporary society, play an important role in the construction of identities, just as other artifacts (e.g., books, newspapers, television) have done for a long time. In this paper, we discuss this role by considering video games under two concepts, othering and technopoly, and focus on how these concepts demand that we deepen our understanding of the ethics of video games. We address here how the construction of identities within video games involves othering processes, that is, processes through which, when signifying and identifying `Ourselves', we create and marginalize `Others'. Moreover, we discuss how video games can play an important role in the legitimation of the technopoly, understood as a totalitarian regime related to science, technology and their place in our societies. Under these two concepts, understanding the ethics of video games goes beyond the controversy about their violence. The main focus of discussion should lie in how the ethics of video games is related to their part in the formation of the players' citizenship. Examining several examples of electronic games, we consider how video games provide a rich experience in which the player has the opportunity to develop practical wisdom (phronesis), which can lead her to be a virtuous being. However, they can also be harmful to the moral experiences of the subjects when they show unethical content related to othering processes that is not so clearly and openly condemned as violence, as in the cases of sexism, racism or xenophobia. Rather than leading us to conclude that video games need to be banned or censored, this argument makes us highlight their role in the (science) education of critical, socially responsible, ethical, and politically active citizens, precisely because they encompass othering processes and science, technology, and society relationships.

  12. Impact of video technology on efficiency of pharmacist-provided anticoagulation counseling and patient comprehension.

    PubMed

    Moore, Sarah J; Blair, Elizabeth A; Steeb, David R; Reed, Brent N; Hull, J Heyward; Rodgers, Jo Ellen

    2015-06-01

    Discharge anticoagulation counseling is important for ensuring patient comprehension and optimizing clinical outcomes. As pharmacy resources become increasingly limited, the impact of informational videos on the counseling process becomes more relevant. To evaluate differences in pharmacist time spent counseling and patient comprehension (measured by the Oral Anticoagulation Knowledge [OAK] test) between informational videos and traditional face-to-face (oral) counseling. This prospective, open, parallel-group study at an academic medical center randomized 40 individuals, 17 warfarin-naïve ("New Start") and 23 with prior warfarin use ("Restart"), to receive warfarin discharge education by video or face-to-face counseling. "Teach-back" questions were used in both groups. Although overall pharmacist time was reduced in the video counseling group (P < 0.001), an interaction between prior warfarin use and counseling method (P = 0.012) suggests the difference between counseling methods was smaller in New Start participants. Following adjustment, mean total time was reduced 8.71 (95% CI = 5.15-12.26) minutes (adjusted P < 0.001) in Restart participants and 2.31 (-2.19 to 6.81) minutes (adjusted P = 0.472) in New Start participants receiving video counseling. Postcounseling OAK test scores did not differ. Age, gender, socioeconomic status, and years of education were not predictive of total time or OAK test score. Use of informational videos coupled with teach-back questions significantly reduced pharmacist time spent on anticoagulation counseling without compromising short-term patient comprehension, primarily in patients with prior warfarin use. Study results demonstrate that video technology provides an efficient method of anticoagulation counseling while achieving similar comprehension. © The Author(s) 2015.

  13. Behavior analysis of video object in complicated background

    NASA Astrophysics Data System (ADS)

    Zhao, Wenting; Wang, Shigang; Liang, Chao; Wu, Wei; Lu, Yang

    2016-10-01

    This paper aims to achieve robust behavior recognition of video objects in complicated backgrounds. Features of the video object are described and modeled according to the depth information of three-dimensional video. Multi-dimensional eigenvectors are constructed and used to process the high-dimensional data. Stable object tracking in complex scenes can be achieved with multi-feature-based behavior analysis, so as to obtain the motion trail. Subsequently, effective behavior recognition of the video object is obtained according to the decision criteria. Moreover, both the real-time performance of the algorithms and the accuracy of the analysis are greatly improved. The theory and method of behavior analysis of video objects in real scenes put forward by this project have broad application prospects and important practical significance in security, counter-terrorism, military and many other fields.

  14. Segmentation of Pollen Tube Growth Videos Using Dynamic Bi-Modal Fusion and Seam Carving.

    PubMed

    Tambo, Asongu L; Bhanu, Bir

    2016-05-01

    The growth of pollen tubes is of significant interest in plant cell biology, as it provides an understanding of internal cell dynamics that affect observable structural characteristics such as cell diameter, length, and growth rate. However, these parameters can only be measured in experimental videos if the complete shape of the cell is known. The challenge is to accurately obtain the cell boundary in noisy video images. Usually, these measurements are performed by a scientist who manually draws regions-of-interest on the images displayed on a computer screen. In this paper, a new automated technique is presented for boundary detection by fusing fluorescence and brightfield images, and a new efficient method of obtaining the final cell boundary through the process of Seam Carving is proposed. This approach takes advantage of the nature of the fusion process and also the shape of the pollen tube to efficiently search for the optimal cell boundary. In video segmentation, the first two frames are used to initialize the segmentation process by creating a search space based on a parametric model of the cell shape. Updates to the search space are performed based on the location of past segmentations and a prediction of the next segmentation. Experimental results show comparable accuracy to a previous method, but a significant decrease in processing time. This has the potential for real-time applications in pollen tube microscopy.
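
    The seam-carving step can be illustrated independently of the bi-modal fusion: the sketch below finds the minimum-energy 8-connected top-to-bottom path through a cost map by dynamic programming. In the paper the energy would come from the fused images restricted to the search space; here it is left as an arbitrary 2-D array.

```python
import numpy as np

def min_vertical_seam(energy):
    """Column index of the cheapest 8-connected vertical path, per row."""
    rows, cols = energy.shape
    cost = energy.astype(np.float64).copy()
    for r in range(1, rows):
        left = np.r_[np.inf, cost[r - 1, :-1]]    # cost of upper-left neighbour
        right = np.r_[cost[r - 1, 1:], np.inf]    # cost of upper-right neighbour
        cost[r] += np.minimum(np.minimum(left, cost[r - 1]), right)
    seam = [int(np.argmin(cost[-1]))]             # cheapest end in the bottom row
    for r in range(rows - 2, -1, -1):             # backtrack to the top row
        c = seam[-1]
        lo, hi = max(0, c - 1), min(cols - 1, c + 1)
        seam.append(lo + int(np.argmin(cost[r, lo:hi + 1])))
    return seam[::-1]                             # seam[r] = column in row r
```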

  15. Method and Apparatus for Evaluating the Visual Quality of Processed Digital Video Sequences

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    2002-01-01

    A Digital Video Quality (DVQ) apparatus and method that incorporate a model of human visual sensitivity to predict the visibility of artifacts. The DVQ method and apparatus are used for the evaluation of the visual quality of processed digital video sequences and for adaptively controlling the bit rate of the processed digital video sequences without compromising the visual quality. The DVQ apparatus minimizes the required amount of memory and computation. The input to the DVQ apparatus is a pair of color image sequences: an original (R) non-compressed sequence, and a processed (T) sequence. Both sequences (R) and (T) are sampled, cropped, and subjected to color transformations. The sequences are then subjected to blocking and discrete cosine transformation, and the results are transformed to local contrast. The next step is a time filtering operation which implements the human sensitivity to different time frequencies. The results are converted to threshold units by dividing each discrete cosine transform coefficient by its respective visual threshold. At the next stage the two sequences are subtracted to produce an error sequence. The error sequence is subjected to a contrast masking operation, which also depends upon the reference sequence (R). The masked errors can be pooled in various ways to illustrate the perceptual error over various dimensions, and the pooled error can be converted to a visual quality measure.
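
    A skeleton of the pipeline's final stages for one 8×8 block pair might look as follows; the per-coefficient threshold matrix, the pooling exponent, and the omission of the temporal-filtering and contrast-masking stages are all simplifying assumptions on top of the method described above.

```python
import cv2
import numpy as np

def block_visual_error(ref_block, tst_block, thresholds, beta=4.0):
    """DCT both 8x8 blocks, convert to threshold units, subtract, and pool.

    `thresholds` is a hypothetical 8x8 matrix of per-coefficient visual
    thresholds; dividing by it expresses each error in just-noticeable units,
    and Minkowski pooling with exponent beta summarizes the block error.
    """
    R = cv2.dct(ref_block.astype(np.float32))
    T = cv2.dct(tst_block.astype(np.float32))
    err = (T - R) / thresholds
    return float((np.abs(err) ** beta).mean() ** (1.0 / beta))
```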

  16. Autonomous target tracking of UAVs based on low-power neural network hardware

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Jin, Zhanpeng; Thiem, Clare; Wysocki, Bryant; Shen, Dan; Chen, Genshe

    2014-05-01

    Detecting and identifying targets in unmanned aerial vehicle (UAV) images and videos have been challenging problems due to various types of image distortion. Moreover, the significantly high processing overhead of existing image/video processing techniques and the limited computing resources available on UAVs force most of the processing tasks to be performed by the ground control station (GCS) in an off-line manner. In order to achieve fast and autonomous target identification on UAVs, it is thus imperative to investigate novel processing paradigms that can fulfill the real-time processing requirements, while fitting the size, weight, and power (SWaP) constrained environment. In this paper, we present a new autonomous target identification approach on UAVs, leveraging the emerging neuromorphic hardware which is capable of massively parallel pattern recognition processing and demands only a limited level of power consumption. A proof-of-concept prototype was developed based on a micro-UAV platform (Parrot AR Drone) and the CogniMem™ neural network chip, for processing the video data acquired from a UAV camera on the fly. The aim of this study was to demonstrate the feasibility and potential of incorporating emerging neuromorphic hardware into next-generation UAVs and their superior performance and power advantages toward real-time, autonomous target tracking.

  17. Improved segmentation of occluded and adjoining vehicles in traffic surveillance videos

    NASA Astrophysics Data System (ADS)

    Juneja, Medha; Grover, Priyanka

    2013-12-01

    Occlusion in image processing refers to the concealment of part or all of an object from the view of an observer. Real-time videos captured by static roadside cameras often contain overlapping vehicles and hence occlusion. Occlusion in traffic surveillance videos usually occurs when an object which is being tracked is hidden by another object. This makes it difficult for object detection algorithms to distinguish all the vehicles efficiently. Morphological operations also tend to join vehicles in close proximity, resulting in a single bounding box around more than one vehicle. Such problems lead to errors in further video processing, such as counting the vehicles in a video. The proposed system brings forward an efficient moving object detection and tracking approach to reduce such errors. The paper uses a successive frame subtraction technique for the detection of moving objects. Further, it implements the watershed algorithm to segment overlapped and adjoining vehicles. The segmentation results are improved by the use of noise-removal and morphological operations.
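
    The watershed step for splitting adjoining vehicles can be sketched with standard OpenCV calls, as below; the distance-transform peak fraction and the structuring elements are illustrative parameters rather than the paper's exact settings.

```python
import cv2
import numpy as np

def split_touching_vehicles(fg_mask, frame):
    """Separate merged vehicle blobs with marker-based watershed.

    `fg_mask` is a binary (0/255) foreground mask from frame subtraction;
    `frame` is the corresponding BGR image. Returns a label image in which
    watershed boundaries between vehicles are marked with -1.
    """
    sure_bg = cv2.dilate(fg_mask, np.ones((3, 3), np.uint8), iterations=3)
    dist = cv2.distanceTransform(fg_mask, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
    sure_fg = sure_fg.astype(np.uint8)       # vehicle cores, far from any edge
    unknown = cv2.subtract(sure_bg, sure_fg)
    n, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1                    # reserve 0 for the unknown ridge
    markers[unknown == 255] = 0
    return cv2.watershed(frame, markers)
```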

  18. Processor core for real time background identification of HD video based on OpenCV Gaussian mixture model algorithm

    NASA Astrophysics Data System (ADS)

    Genovese, Mariangela; Napoli, Ettore

    2013-05-01

    The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high-definition video in real time. The paper proposes two processor cores aimed at real-time background identification on High Definition (HD, 1920×1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for background segmentation that is, however, computationally intensive and cannot meet real-time constraints on a general-purpose CPU. In this paper, the equations of the OpenCV GMM algorithm are optimized so that a lightweight and low-power implementation of the algorithm is obtained. The reported performance also results from the use of state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic-resource occupation that surpass previously proposed implementations. The second circuit is oriented to an ASIC (UMC 90 nm) standard-cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
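
    The GMM the cores implement is available in software form in OpenCV, which makes a handy functional reference for the hardware behaviour; a minimal sketch (the file name and parameter values are placeholders):

      import cv2

      cap = cv2.VideoCapture("traffic.mp4")  # placeholder input path
      mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                               detectShadows=True)
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          fg = mog.apply(frame)          # per-pixel foreground mask; shadows = 127
          cv2.imshow("foreground", fg)
          if cv2.waitKey(1) == 27:       # Esc quits
              break
      cap.release()
      cv2.destroyAllWindows()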

  19. Development of Targeting UAVs Using Electric Helicopters and Yamaha RMAX

    DTIC Science & Technology

    2007-05-17

    including the QNX real-time operating system. The video overlay board is useful to display the onboard camera's image with important information such as... real-time operating system. Fully utilizing the built-in multi-processing architecture with inter-process synchronization and communication

  20. A semi-automatic annotation tool for cooking video

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo; Margherita, Roberto; Marini, Gianluca; Gianforme, Giorgio; Pantaleo, Giuseppe

    2013-03-01

    In order to create a cooking assistant application that guides users in preparing dishes relevant to their diet profiles and food preferences, it is necessary to accurately annotate the video recipes, identifying and tracking the foods handled by the cook. These videos present particular annotation challenges, such as frequent occlusions and changes in food appearance. Manually annotating the videos is a time-consuming, tedious, and error-prone task. Fully automatic tools that integrate computer vision algorithms to extract and identify the elements of interest are not error-free, and false positive and false negative detections need to be corrected in a post-processing stage. We present an interactive, semi-automatic tool for the annotation of cooking videos that integrates computer vision techniques under the supervision of the user. The annotation accuracy is increased with respect to completely automatic tools, and the human effort is reduced with respect to completely manual ones. The performance and usability of the proposed tool are evaluated on the basis of the time and effort required to annotate the same video sequences.

  1. Exploiting semantics for sensor re-calibration in event detection systems

    NASA Astrophysics Data System (ADS)

    Vaisenberg, Ronen; Ji, Shengyue; Hore, Bijit; Mehrotra, Sharad; Venkatasubramanian, Nalini

    2008-01-01

    Event detection from a video stream is becoming an important and challenging task in surveillance and sentient systems. While computer vision has been extensively studied to solve different kinds of detection problems, it remains a hard problem, and even in a controlled environment only simple events can be detected with a high degree of accuracy. Instead of struggling to improve event detection using image processing alone, we bring in semantics to direct traditional image processing. Semantics are the underlying facts that hide beneath video frames and cannot be "seen" directly by image processing. In this work we demonstrate that time-sequence semantics can be exploited to guide unsupervised re-calibration of the event detection system. We present an instantiation of our ideas using an appliance as an example (coffee pot level detection based on video data) to show that semantics can guide the re-calibration of the detection model. This work exploits time-sequence semantics to detect when re-calibration is required, to automatically relearn a new detection model for the newly evolved system state, and to resume monitoring with a higher rate of accuracy.

  2. Access NASA Satellite Global Precipitation Data Visualization on YouTube

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Su, J.; Acker, J. G.; Huffman, G. J.; Vollmer, B.; Wei, J.; Meyer, D. J.

    2017-12-01

    Since the satellite era began, NASA has collected a large volume of Earth science observations for research and applications around the world. Satellite data at the 12 NASA data centers can also be used for STEM activities on topics such as disaster events and climate change. However, accessing satellite data can be a daunting task for non-professional users such as teachers and students because of unfamiliar terminology, disciplines, data formats, data structures, computing resources, processing software, programming languages, etc. Over the years, many efforts have been made to improve satellite data access, but barriers still exist for non-professionals. In this presentation, we describe our latest activity, which uses the popular online video sharing web site YouTube to access visualizations of global precipitation datasets at the NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC). With YouTube, users can access and visualize a large volume of satellite data without needing to learn new software or download data. The dataset in this activity is the 3-hourly TRMM (Tropical Rainfall Measuring Mission) Multi-satellite Precipitation Analysis (TMPA). The video is built from over 50,000 data files collected from 1998 onwards, covering the zone between 50°N and 50°S, and lasts 36 minutes for the entire data record (over 19 years). Since a time stamp appears on each frame of the video, users can begin at any time by dragging the time progress bar. This precipitation animation allows viewing of precipitation events and processes (e.g., hurricanes, fronts, atmospheric rivers) on a global scale. The next plan is to develop a similar animation for the GPM (Global Precipitation Measurement) Integrated Multi-satellitE Retrievals for GPM (IMERG). IMERG provides precipitation on a near-global (60°N-S) coverage at a half-hourly time interval, showing more detail on precipitation processes and development than the 3-hourly TMPA product; the entire video will contain more than 330,000 files and will last 3.6 hours. Future plans include development of fly-over videos of orbital data for an entire satellite mission or project. All videos will be uploaded and available at the GES DISC site on YouTube (https://www.youtube.com/user/NASAGESDISC).

  3. Design of video processing and testing system based on DSP and FPGA

    NASA Astrophysics Data System (ADS)

    Xu, Hong; Lv, Jun; Chen, Xi'ai; Gong, Xuexia; Yang, Chen'na

    2007-12-01

    Based on a high-speed Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA), a compact, low-power video capture, processing, and display system is presented. In this system, a triple-buffering scheme is used for capture and display, so that the application can always get a new buffer without waiting (a sketch of the scheme follows). The DSP provides image-processing capability and is used to detect the boundary of the workpiece image. A video graduation (graticule) technique is used to aim at the position to be measured, which also enhances the system's flexibility. Character superposition, implemented on the DSP, displays the test results on the screen in character format. This system can process image information in real time, ensure test precision, and help to enhance product quality and quality management.
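
    A minimal Python sketch of the triple-buffering idea, assuming one capture thread and one display thread; the class and method names are illustrative, not from the paper.

      import threading

      class TripleBuffer:
          # Capture writes continuously, display always reads the newest complete
          # frame, and neither side ever blocks waiting for the other.
          def __init__(self):
              self.buffers = [None, None, None]
              self.write_idx, self.ready_idx, self.read_idx = 0, 1, 2
              self.fresh = False
              self.lock = threading.Lock()

          def publish(self, frame):
              # Called by the capture thread when a frame is complete.
              self.buffers[self.write_idx] = frame
              with self.lock:
                  self.write_idx, self.ready_idx = self.ready_idx, self.write_idx
                  self.fresh = True

          def latest(self):
              # Called by the display thread; never waits for capture.
              with self.lock:
                  if self.fresh:
                      self.read_idx, self.ready_idx = self.ready_idx, self.read_idx
                      self.fresh = False
              return self.buffers[self.read_idx]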

  4. Algorithm-Based Motion Magnification for Video Processing in Urological Laparoscopy.

    PubMed

    Adams, Fabian; Schoelly, Reto; Schlager, Daniel; Schoenthaler, Martin; Schoeb, Dominik S; Wilhelm, Konrad; Hein, Simon; Wetterauer, Ulrich; Miernik, Arkadiusz

    2017-06-01

    Minimally invasive surgery is under constant development and has replaced many conventional operative procedures. If vascular structure movement could be detected during these procedures, the risk of vascular injury and conversion to open surgery could be reduced. The recently proposed motion-amplifying algorithm, Eulerian Video Magnification (EVM), has been shown to substantially enhance minimal object changes in digitally recorded video that are barely perceptible to the human eye. We adapted and examined this technology for use in urological laparoscopy. Video sequences of routine urological laparoscopic interventions were recorded and further processed using spatial decomposition and filtering algorithms. The freely available EVM algorithm was investigated for its usability in real-time processing. In addition, a new image processing technology, the CRS iimotion Motion Magnification (CRSMM) algorithm, was specifically adjusted for endoscopic requirements, applied, and validated by our working group. Using EVM, no significant motion enhancement could be detected without severe impairment of the image resolution, motion, and color presentation. The CRSMM algorithm significantly improved image quality in terms of motion enhancement. In particular, the pulsation of vascular structures could be displayed more accurately than with EVM. Motion magnification image processing technology has potential clinical importance as a video-optimizing modality in endoscopic and laparoscopic surgery. Barely detectable (micro)movements can be visualized using this noninvasive, marker-free method. Despite these optimistic results, the technology requires considerable further technical development and clinical testing.
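
    For orientation, a compact NumPy/OpenCV sketch of the core Eulerian idea: temporally band-pass filter a spatially downsampled sequence, amplify it, and add it back. This is not the EVM or CRSMM implementation; the passband, gain, and scale values are illustrative.

      import numpy as np
      import cv2

      def magnify_motion(frames, fps, lo=0.8, hi=3.0, gain=20.0, scale=0.25):
          # frames: list of float32 grayscale frames in [0, 1].
          # Work at reduced resolution, standing in for a Gaussian pyramid level.
          small = np.stack([cv2.resize(f, None, fx=scale, fy=scale) for f in frames])
          # Ideal temporal band-pass filter via FFT along the time axis.
          freqs = np.fft.rfftfreq(len(frames), d=1.0 / fps)
          spec = np.fft.rfft(small, axis=0)
          spec[(freqs < lo) | (freqs > hi)] = 0
          band = np.fft.irfft(spec, n=len(frames), axis=0).astype(np.float32)
          out = []
          for f, b in zip(frames, band):
              # Amplify the band-passed signal and add it back to the source frame.
              boost = cv2.resize(b, (f.shape[1], f.shape[0]))
              out.append(np.clip(f + gain * boost, 0.0, 1.0))
          return out

    A band around 1 Hz, as here, would target pulse-rate vascular motion; EVM proper performs the decomposition on a full Laplacian or Gaussian pyramid rather than a single downsampled level.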

  5. Video-processing-based system for automated pedestrian data collection and analysis when crossing the street

    NASA Astrophysics Data System (ADS)

    Mansouri, Nabila; Watelain, Eric; Ben Jemaa, Yousra; Motamed, Cina

    2018-03-01

    Computer-vision techniques for pedestrian detection and tracking have progressed considerably and become widely used in several applications. However, a quick glance at the literature shows minimal use of these techniques in pedestrian behavior and safety analysis, which may be due to the technical complexity of processing pedestrian videos. To extract pedestrian trajectories from a video automatically, all road users must be detected and tracked across the sequence, which is a challenging task, especially in a congested, open-outdoor urban space. A multipedestrian tracker based on an interframe detection-association process was proposed and evaluated. The tracker results are used to implement an automatic, video-processing-based tool for collecting data on pedestrians crossing the street. Variations in the instantaneous speed allowed the detection of the street-crossing phases (approach, waiting, and crossing); these were addressed for the first time in pedestrian road-safety analysis to illustrate the causal relationship between pedestrian behaviors in the different phases. A comparison with a manual data collection method, by computing the root mean square error and the Pearson correlation coefficient, confirmed that the proposed procedures have significant potential to automate the data collection process.
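
    A minimal NumPy sketch of labelling crossing phases from tracker output via instantaneous speed, in the spirit of the paper. The smoothing window and speed thresholds are illustrative assumptions, not the paper's values.

      import numpy as np

      def crossing_phases(track, fps, wait_speed=0.3, cross_speed=0.8):
          # track: (N, 2) array of pedestrian positions in metres from the tracker.
          speed = np.linalg.norm(np.diff(track, axis=0), axis=1) * fps  # m/s
          speed = np.convolve(speed, np.ones(5) / 5.0, mode="same")     # smooth jitter
          labels = np.where(speed < wait_speed, "waiting",
                            np.where(speed < cross_speed, "approach", "crossing"))
          return speed, labels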

  6. PixonVision real-time video processor

    NASA Astrophysics Data System (ADS)

    Puetter, R. C.; Hier, R. G.

    2007-09-01

    PixonImaging LLC and DigiVision, Inc. have developed a real-time video processor, the PixonVision PV-200, based on the patented Pixon method for image deblurring and denoising and on DigiVision's spatially adaptive contrast enhancement processor, the DV1000. The PV-200 can process NTSC and PAL video in real time with a latency of one field (1/60th of a second), remove the effects of aerosol scattering from haze, mist, smoke, and dust, improve spatial resolution by up to 2x, decrease noise by up to 6x, and increase local contrast by up to 8x. A newer version of the processor, the PV-300, is now in prototype form and can handle high-definition video. Both the PV-200 and PV-300 are FPGA-based processors, which could be spun into ASICs if desired. Obvious applications of these processors include DOD platforms (tanks, aircraft, and ships), homeland security, intelligence, surveillance, and law enforcement. If developed into an ASIC, these processors will be suitable for a variety of portable applications, including gun sights, night vision goggles, binoculars, and guided munitions. This paper presents a variety of examples of PV-200 processing, including examples appropriate to border security, battlefield applications, port security, and surveillance from unmanned aerial vehicles.

  7. Simultaneous compression and encryption for secure real-time transmission of sensitive video

    NASA Astrophysics Data System (ADS)

    Al-Hayani, Nazar; Al-Jawad, Naseer; Jassim, Sabah A.

    2014-05-01

    Video compression and encryption are essential for secure real-time video transmission. Applying both techniques simultaneously is a challenge when both size and quality matter in multimedia transmission. In this paper we propose a new technique for video compression and encryption. Both encryption and compression are based on edges extracted from the high-frequency sub-bands of the wavelet decomposition. The compression algorithm is based on a hybrid of discrete wavelet transforms, the discrete cosine transform, vector quantization, wavelet-based edge detection, and phase sensing. The compression encoding algorithm treats the video reference and non-reference frames in two different ways. The encryption algorithm uses the A5 cipher combined with a chaotic logistic map to encrypt the significant parameters and wavelet coefficients (a sketch of the logistic-map keystream follows). Both algorithms can be applied simultaneously after applying the discrete wavelet transform to each individual frame. Experimental results show that the proposed algorithms achieve high compression and acceptable quality, and resist statistical and brute-force attacks with low computational cost.
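
    A minimal sketch of the chaotic logistic-map part of the scheme, used here as an XOR keystream over byte-quantised coefficients; the map parameters act as the key. The A5 stage and the selection of significant coefficients are omitted, and the constants shown are illustrative.

      import numpy as np

      def logistic_keystream(n, x0=0.654321, r=3.99):
          # Iterate x <- r*x*(1-x); (x0, r) act as the secret key.
          x = x0
          out = np.empty(n, dtype=np.uint8)
          for i in range(n):
              x = r * x * (1.0 - x)
              out[i] = int(x * 256.0) & 0xFF
          return out

      def encrypt_coefficients(coeffs_bytes, key_x0):
          # XOR byte-quantised significant coefficients with the chaotic keystream.
          flat = coeffs_bytes.ravel()
          ks = logistic_keystream(flat.size, x0=key_x0)
          return (flat ^ ks).reshape(coeffs_bytes.shape)

    Because XOR is its own inverse, running encrypt_coefficients again with the same key decrypts the data.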

  8. An openstack-based flexible video transcoding framework in live

    NASA Astrophysics Data System (ADS)

    Shi, Qisen; Song, Jianxin

    2017-08-01

    With the rapid development of the mobile live-streaming business, transcoding HD video is often a challenge for mobile devices due to their limited processing capability and bandwidth-constrained network connections. For live service providers, deploying large numbers of dedicated transcoding servers is wasteful, since some of them sit idle at times. To deal with this issue, this paper proposes an OpenStack-based flexible transcoding framework that achieves real-time video adaptation for mobile devices and uses computing resources efficiently. To this end, we introduce a method of video-stream splitting and VM resource scheduling based on access-pressure prediction with an autoregressive (AR) model.
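
    A minimal NumPy sketch of one-step access-pressure forecasting with a least-squares AR(p) fit, of the kind that could drive VM scale-up and scale-down decisions; the order p and the function names are illustrative, not from the paper.

      import numpy as np

      def fit_ar(series, p=4):
          # Least-squares AR(p): y_t ~ a_0*y_{t-1} + ... + a_{p-1}*y_{t-p}.
          X = np.column_stack([series[p - k - 1:len(series) - k - 1] for k in range(p)])
          y = series[p:]
          coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
          return coeffs

      def predict_next(series, coeffs):
          # One-step-ahead forecast of the access pressure.
          p = len(coeffs)
          return float(np.dot(coeffs, series[-1:-p - 1:-1]))

    A scheduler would compare the forecast against per-VM transcoding capacity to decide how many VMs to start or stop ahead of the load.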

  9. Evaluating video digitizer errors

    NASA Astrophysics Data System (ADS)

    Peterson, C.

    2016-01-01

    Analog output video cameras remain popular for recording meteor data. Although these cameras uniformly employ electronic detectors with fixed pixel arrays, the digitization process requires resampling the horizontal lines as they are output in order to reconstruct the pixel data, usually resulting in a new data array of different horizontal dimensions than the native sensor. Pixel timing is not provided by the camera, and must be reconstructed based on line sync information embedded in the analog video signal. Using a technique based on hot pixels, I present evidence that jitter, sync detection, and other timing errors introduce both position and intensity errors which are not present in cameras which internally digitize their sensors and output the digital data directly.

  10. Adult Spinal Deformity Patients Recall Fewer Than 50% of the Risks Discussed in the Informed Consent Process Preoperatively and the Recall Rate Worsens Significantly in the Postoperative Period.

    PubMed

    Saigal, Rajiv; Clark, Aaron J; Scheer, Justin K; Smith, Justin S; Bess, Shay; Mummaneni, Praveen V; McCarthy, Ian M; Hart, Robert A; Kebaish, Khaled M; Klineberg, Eric O; Deviren, Vedat; Schwab, Frank; Shaffrey, Christopher I; Ames, Christopher P

    2015-07-15

    Recall of the informed consent process in patients undergoing adult spinal deformity surgery and their family members was investigated prospectively. To quantify the percentage recall of the most common complications discussed during the informed consent process in adult spinal deformity surgery, assess for differences between patients and family members, and correlate with mental status. Given high rates of complications in adult spinal deformity surgery, it is critical to shared decision making that patients are adequately informed about risks and are able to recall preoperative discussion of possible complications to mitigate medicolegal risk. Patients undergoing adult spinal deformity surgery underwent an augmented informed consent process involving both verbal and video explanations. Recall of the 11 most common complications was scored. Mental status was assessed with the mini-mental status examination-brief version. Patients subjectively scored the informed consent process and video. After surgery, the recall test and mini-mental status examination-brief version were readministered at 5 additional time points: hospital discharge, 6 to 8 weeks, 3 months, 6 months, and 1 year postoperatively. Family members were assessed at the first 3 time points for comparison. Fifty-six patients enrolled. Despite ranking the consent process as important (median overall score: 10/10; video score: 9/10), median patient recall was only 45% immediately after discussion and video re-enforcement and subsequently declined to 18% at 6 to 8 weeks and 1 year postoperatively. Median family recall trended higher at 55% immediately and 36% at 6 to 8 weeks postoperatively. The perception of the severity of complications significantly differs between patient and surgeon. Mental status scores showed a transient, significant decrease from preoperation to discharge but were significantly higher at 1 year. Despite being well-informed in an optimized informed consent process, patients cannot recall most surgical risks discussed and recall declines over time. Significant progress remains to improve informed consent retention. Level of Evidence: 3.

  11. The development of attention skills in action video game players

    PubMed Central

    Dye, M.W.G.; Green, C.S.; Bavelier, D.

    2009-01-01

    Previous research suggests that action video game play improves attentional resources, allowing gamers to better allocate their attention across both space and time. In order to further characterize the plastic changes resulting from playing these video games, we administered the Attentional Network Test (ANT) to action game players and non-playing controls aged between 7 and 22 years. By employing a mixture of cues and flankers, the ANT provides measures of how well attention is allocated to targets as a function of alerting and orienting cues, and to what extent observers are able to filter out the influence of task irrelevant information flanking those targets. The data suggest that action video game players of all ages have enhanced attentional skills that allow them to make faster correct responses to targets, and leaves additional processing resources that spill over to process distractors flanking the targets. PMID:19428410

  12. Analyzing workplace exposures using direct reading instruments and video exposure monitoring techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gressel, M.G.; Heitbrink, W.A.; Jensen, P.A.

    1992-08-01

    The techniques for conducting video exposure monitoring were described along with the equipment required to monitor and record worker breathing-zone concentrations, the analysis of the real-time exposure data using video recordings, and the use of real-time concentration data from a direct-reading instrument to determine the effective ventilation rate and the mixing factor of a given room at a specific time. Case studies which made use of video exposure monitoring techniques to provide information not available through integrated sampling were also discussed. The process being monitored and the methodology used to monitor the exposures were described for each of the case studies. The case studies included manual material weigh-out, ceramic casting cleaning, dumping bags of powdered materials, furniture stripping, administration of nitrous oxide during dental procedures, a hand-held sanding operation, methanol exposures in maintenance garages, brake servicing, bulk loading of railroad cars and trucks, and grinding operations.

  13. A real-time inverse quantised transform for multi-standard with dynamic resolution support

    NASA Astrophysics Data System (ADS)

    Sun, Chi-Chia; Lin, Chun-Ying; Zhang, Ce

    2016-06-01

    In this paper, a real-time configurable intellectual property (IP) core is presented for the image/video decoding process, compatible with both the MPEG-4 Visual and H.264/AVC standards. The unified core performs the inverse quantised discrete cosine transform and the inverse quantised inverse integer transform using only shift and add operations. Meanwhile, the CORDIC (COordinate Rotation DIgital Computer) iterations and compensation steps are adjustable in order to trade video compression quality against data throughput. The implementations are embedded in the publicly available XviD codec 1.2.2 for the MPEG-4 Visual standard and in the H.264/AVC reference software JM 16.1, where the experimental results show that the balance between computational complexity and video compression quality is retained. Finally, FPGA synthesis results show that the proposed IP core offers low hardware cost and provides real-time performance for Full HD and 4K-2K video decoding.

  14. Feedback in formative OSCEs: comparison between direct observation and video-based formats

    PubMed Central

    Junod Perron, Noëlle; Louis-Simonet, Martine; Cerutti, Bernard; Pfarrwaller, Eva; Sommer, Johanna; Nendaz, Mathieu

    2016-01-01

    Introduction Medical students at the Faculty of Medicine, University of Geneva, Switzerland, have the opportunity to practice clinical skills with simulated patients during formative sessions in preparation for clerkships. These sessions are given in two formats: 1) direct observation of an encounter followed by verbal feedback (direct feedback) and 2) subsequent review of the videotaped encounter by both student and supervisor (video-based feedback). The aim of the study was to evaluate whether content and process of feedback differed between both formats. Methods In 2013, all second- and third-year medical students and clinical supervisors involved in formative sessions were asked to take part in the study. A sample of audiotaped feedback sessions involving supervisors who gave feedback in both formats were analyzed (content and process of the feedback) using a 21-item feedback scale. Results Forty-eight audiotaped feedback sessions involving 12 supervisors were analyzed (2 direct and 2 video-based sessions per supervisor). When adjusted for the length of feedback, there were significant differences in terms of content and process between both formats; the number of communication skills and clinical reasoning items addressed were higher in the video-based format (11.29 vs. 7.71, p=0.002 and 3.71 vs. 2.04, p=0.010, respectively). Supervisors engaged students more actively during the video-based sessions than during direct feedback sessions (self-assessment: 4.00 vs. 3.17, p=0.007; active problem-solving: 3.92 vs. 3.42, p=0.009). Students made similar observations and tended to consider that the video feedback was more useful for improving some clinical skills. Conclusion Video-based feedback facilitates discussion of clinical reasoning, communication, and professionalism issues while at the same time actively engaging students. Different time and conceptual frameworks may explain observed differences. The choice of feedback format should depend on the educational goal. PMID:27834170

  15. Feedback in formative OSCEs: comparison between direct observation and video-based formats.

    PubMed

    Junod Perron, Noëlle; Louis-Simonet, Martine; Cerutti, Bernard; Pfarrwaller, Eva; Sommer, Johanna; Nendaz, Mathieu

    2016-01-01

    Medical students at the Faculty of Medicine, University of Geneva, Switzerland, have the opportunity to practice clinical skills with simulated patients during formative sessions in preparation for clerkships. These sessions are given in two formats: 1) direct observation of an encounter followed by verbal feedback (direct feedback) and 2) subsequent review of the videotaped encounter by both student and supervisor (video-based feedback). The aim of the study was to evaluate whether content and process of feedback differed between both formats. In 2013, all second- and third-year medical students and clinical supervisors involved in formative sessions were asked to take part in the study. A sample of audiotaped feedback sessions involving supervisors who gave feedback in both formats were analyzed (content and process of the feedback) using a 21-item feedback scale. Forty-eight audiotaped feedback sessions involving 12 supervisors were analyzed (2 direct and 2 video-based sessions per supervisor). When adjusted for the length of feedback, there were significant differences in terms of content and process between both formats; the number of communication skills and clinical reasoning items addressed were higher in the video-based format (11.29 vs. 7.71, p= 0.002 and 3.71 vs. 2.04, p= 0.010, respectively). Supervisors engaged students more actively during the video-based sessions than during direct feedback sessions (self-assessment: 4.00 vs. 3.17, p= 0.007; active problem-solving: 3.92 vs. 3.42, p= 0.009). Students made similar observations and tended to consider that the video feedback was more useful for improving some clinical skills. Video-based feedback facilitates discussion of clinical reasoning, communication, and professionalism issues while at the same time actively engaging students. Different time and conceptual frameworks may explain observed differences. The choice of feedback format should depend on the educational goal.

  16. Enumeration versus multiple object tracking: the case of action video game players

    PubMed Central

    Green, C.S.; Bavelier, D.

    2010-01-01

    Here, we demonstrate that action video game play enhances subjects’ ability in two tasks thought to indicate the number of items that can be apprehended. Using an enumeration task, in which participants have to determine the number of quickly flashed squares, accuracy measures showed a near ceiling performance for low numerosities and a sharp drop in performance once a critical number of squares was reached. Importantly, this critical number was higher by about two items in video game players (VGPs) than in non-video game players (NVGPs). A following control study indicated that this improvement was not due to an enhanced ability to instantly apprehend the numerosity of the display, a process known as subitizing, but rather due to an enhancement in the slower more serial process of counting. To confirm that video game play facilitates the processing of multiple objects at once, we compared VGPs and NVGPs on the multiple object tracking task (MOT), which requires the allocation of attention to several items over time. VGPs were able to successfully track approximately two more items than NVGPs. Furthermore, NVGPs trained on an action video game established the causal effect of game playing in the enhanced performance on the two tasks. Together, these studies confirm the view that playing action video games enhances the number of objects that can be apprehended and suggest that this enhancement is mediated by changes in visual short-term memory skills. PMID:16359652

  17. Enumeration versus multiple object tracking: the case of action video game players.

    PubMed

    Green, C S; Bavelier, D

    2006-08-01

    Here, we demonstrate that action video game play enhances subjects' ability in two tasks thought to indicate the number of items that can be apprehended. Using an enumeration task, in which participants have to determine the number of quickly flashed squares, accuracy measures showed a near ceiling performance for low numerosities and a sharp drop in performance once a critical number of squares was reached. Importantly, this critical number was higher by about two items in video game players (VGPs) than in non-video game players (NVGPs). A following control study indicated that this improvement was not due to an enhanced ability to instantly apprehend the numerosity of the display, a process known as subitizing, but rather due to an enhancement in the slower more serial process of counting. To confirm that video game play facilitates the processing of multiple objects at once, we compared VGPs and NVGPs on the multiple object tracking task (MOT), which requires the allocation of attention to several items over time. VGPs were able to successfully track approximately two more items than NVGPs. Furthermore, NVGPs trained on an action video game established the causal effect of game playing in the enhanced performance on the two tasks. Together, these studies confirm the view that playing action video games enhances the number of objects that can be apprehended and suggest that this enhancement is mediated by changes in visual short-term memory skills.

  18. Efficient Use of Video for 3d Modelling of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Alsadik, B.; Gerke, M.; Vosselman, G.

    2015-03-01

    Currently, there is rapid development in the techniques of automated image-based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architecture and monuments. In practice, video imaging is much easier to apply than still-image shooting in IBM techniques, because the latter needs thorough planning and proficiency. However, three main problems arise when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects: the low resolution of video images, the need to process a large number of short-baseline video images, and blur caused by camera shake in a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images decreases the processing time and yields a reliable textured 3D model, comparable with models produced by still imaging. Two experiments, modelling a building and a monument, are carried out using a video image resolution of 1920×1080 pixels. Internal and external validation of the produced models is applied to determine the final predicted accuracy and the model level of detail. Depending on object complexity and video imaging resolution, the tests show an achievable average accuracy of 1-5 cm when using video imaging, which is suitable for visualization, virtual museums, and low-detail documentation.
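
    A minimal OpenCV sketch of one plausible reduction criterion: sample every step-th frame to limit short-baseline redundancy, and drop blurred frames using the variance of the Laplacian as a sharpness score. The step and threshold values are illustrative; the paper's actual selection method also accounts for object coverage.

      import cv2

      def select_keyframes(path, step=15, blur_thresh=100.0):
          # Returns (frame_index, frame) pairs that are sharp and well spaced.
          cap = cv2.VideoCapture(path)
          keep, idx = [], 0
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              if idx % step == 0:
                  gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                  sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
                  if sharpness > blur_thresh:  # threshold is resolution-dependent
                      keep.append((idx, frame))
              idx += 1
          cap.release()
          return keep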

  19. Near real-time, on-the-move software PED using VPEF

    NASA Astrophysics Data System (ADS)

    Green, Kevin; Geyer, Chris; Burnette, Chris; Agarwal, Sanjeev; Swett, Bruce; Phan, Chung; Deterline, Diane

    2015-05-01

    The scope of the Micro-Cloud for Operational, Vehicle-Based EO-IR Reconnaissance System (MOVERS) development effort, managed by the Night Vision and Electronic Sensors Directorate (NVESD), is to develop, integrate, and demonstrate new sensor technologies and algorithms that improve improvised device/mine detection using efficient and effective exploitation and fusion of sensor data and target cues from existing and future Route Clearance Package (RCP) sensor systems. Unfortunately, the majority of forward looking Full Motion Video (FMV) and computer vision processing, exploitation, and dissemination (PED) algorithms are often developed using proprietary, incompatible software. This makes the insertion of new algorithms difficult due to the lack of standardized processing chains. In order to overcome these limitations, EOIR developed the Government off-the-shelf (GOTS) Video Processing and Exploitation Framework (VPEF) to be able to provide standardized interfaces (e.g., input/output video formats, sensor metadata, and detected objects) for exploitation software and to rapidly integrate and test computer vision algorithms. EOIR developed a vehicle-based computing framework within the MOVERS and integrated it with VPEF. VPEF was further enhanced for automated processing, detection, and publishing of detections in near real-time, thus improving the efficiency and effectiveness of RCP sensor systems.

  20. Video Tutorial of Continental Food

    NASA Astrophysics Data System (ADS)

    Nurani, A. S.; Juwaedah, A.; Mahmudatussa'adah, A.

    2018-02-01

    This research is motivated by the importance of media in the learning process. As an intermediary, media serve to focus the attention of learners. Selecting appropriate learning media strongly influences how successfully information is delivered in cognitive, affective, and skill terms. Continental food is a course that studies food originating from Europe and is very complex. To reduce verbalism and provide more concrete learning, tutorial media are needed; audio-visual tutorial media can provide a more concrete learning experience. The purpose of this research is to develop tutorial media in the form of video. The method used is a development method with the stages of analyzing the learning objectives, creating a storyboard, validating the storyboard, revising the storyboard, and producing the video tutorial. The results show that storyboards must be made very thoroughly and in detail, in accordance with the learning objectives, to reduce errors in video capture and thereby save time, cost, and effort. In video capture, lighting, shooting angles, and soundproofing contribute greatly to the quality of the tutorial video produced, and shooting should focus on the tools, materials, and processing steps. Video tutorials should be interactive and two-way.

  1. Video change detection for fixed wing UAVs

    NASA Astrophysics Data System (ADS)

    Bartelsen, Jan; Müller, Thomas; Ring, Jochen; Mück, Klaus; Brüstle, Stefan; Erdnüß, Bastian; Lutz, Bastian; Herbst, Theresa

    2017-10-01

    In this paper we continue the work of Bartelsen et al.1 We present a draft process chain for image-based change detection, designed for videos acquired by fixed-wing unmanned aerial vehicles (UAVs). From our point of view, automatic video change detection for aerial images can be useful to recognize functional activities which are typically caused by the deployment of improvised explosive devices (IEDs), e.g. excavations, skid marks, footprints, left-behind tooling equipment, and marker stones. Furthermore, in case of natural disasters, like flooding, imminent danger can be recognized quickly. Due to the necessary flight range, we concentrate on fixed-wing UAVs. Automatic change detection reduces to a comparatively simple photogrammetric problem when the perspective change between the "before" and "after" image sets is kept as small as possible. The aerial image acquisition therefore demands mission planning with a clear purpose, including flight path and sensor configuration. While the latter can be ensured simply by a fixed and meaningful adjustment of the camera, keeping the perspective change small between "before" and "after" videos acquired by fixed-wing UAVs is a challenging problem. Concerning this matter, we have performed tests with an advanced commercial off-the-shelf (COTS) system, comprising a differential GPS and autopilot system, estimating the repetition accuracy of its trajectory. Although several similar approaches have been presented,2,3 as far as we are able to judge, the limits for this important issue have not been estimated so far. Furthermore, we design a process chain to enable the practical utilization of video change detection. It consists of a database front-end to handle large amounts of video data, an image processing and change detection implementation, and the visualization of the results (the registration-and-differencing core is sketched below). We apply our process chain to real video data acquired by the advanced COTS fixed-wing UAV and to synthetic data. For the image processing and change detection, we use the approach of Müller.4 Although it was developed for unmanned ground vehicles (UGVs), it enables near real-time video change detection for aerial videos. Concluding, we discuss the demands on sensor systems in the matter of change detection.
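
    A minimal OpenCV sketch of the photogrammetric core: register an "after" frame to its "before" counterpart with ORB feature matching and a RANSAC homography, then difference them. This is a generic baseline, not the approach of Müller used in the paper; the difference threshold is illustrative.

      import cv2
      import numpy as np

      def change_mask(before, after, thresh=40):
          # before, after: grayscale uint8 frames with a small perspective change.
          orb = cv2.ORB_create(2000)
          k1, d1 = orb.detectAndCompute(before, None)
          k2, d2 = orb.detectAndCompute(after, None)
          matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
          src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          # Map the "after" view onto the "before" view, then difference.
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
          warped = cv2.warpPerspective(after, H, before.shape[1::-1])
          diff = cv2.absdiff(before, warped)
          _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
          return mask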

  2. Real-time processing of dual band HD video for maintaining operational effectiveness in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Parker, Steve C. J.; Hickman, Duncan L.; Smith, Moira I.

    2015-05-01

    Effective reconnaissance, surveillance and situational awareness, using dual band sensor systems, require the extraction, enhancement and fusion of salient features, with the processed video being presented to the user in an ergonomic and interpretable manner. HALO™ is designed to meet these requirements and provides an affordable, real-time, and low-latency image fusion solution on a low size, weight and power (SWAP) platform. The system has been progressively refined through field trials to increase its operating envelope and robustness. The result is a video processor that improves detection, recognition and identification (DRI) performance, whilst lowering operator fatigue and reaction times in complex and highly dynamic situations. This paper compares the performance of HALO™, both qualitatively and quantitatively, with conventional blended fusion for operation in degraded visual environments (DVEs), such as those experienced during ground and air-based operations. Although image blending provides a simple fusion solution, which explains its common adoption, the results presented demonstrate that its performance is poor compared to the HALO™ fusion scheme in DVE scenarios.

  3. Technical and economic feasibility of integrated video service by satellite

    NASA Technical Reports Server (NTRS)

    Price, Kent M.; Garlow, R. K.; Henderson, T. R.; Kwan, Robert K.; White, L. W.

    1992-01-01

    The trends and roles of satellite based video services in the year 2010 time frame are examined based on an overall network and service model for that period. Emphasis is placed on point to point and multipoint service, but broadcast could also be accommodated. An estimate of the video traffic is made and the service and general network requirements are identified. User charges are then estimated based on several usage scenarios. In order to accommodate these traffic needs, a 28 spot beam satellite architecture with on-board processing and signal mixing is suggested.

  4. ACE: Automatic Centroid Extractor for real time target tracking

    NASA Technical Reports Server (NTRS)

    Cameron, K.; Whitaker, S.; Canaris, J.

    1990-01-01

    A high-performance video image processor has been implemented which is capable of grouping contiguous pixels from a raster-scan image into objects and then calculating centroid information for each object in a frame. The grouping algorithm is very efficient and is guaranteed to work properly for all convex shapes as well as most concave shapes. Processing speeds are adequate for real-time processing of video images at pixel rates of up to 20 million pixels per second. Pixels may be up to 8 bits wide. The processor is designed to interface directly to a transputer serial-link communications channel with no additional hardware. The full-custom VLSI processor was implemented in a 1.6 μm CMOS process and measures 7200 μm on a side.
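
    A software equivalent of the chip's function, sketched with OpenCV's connected-components pass; the ASIC performs the grouping in a single raster scan in hardware, whereas this sketch trades that efficiency for brevity.

      import cv2

      def extract_centroids(binary):
          # binary: uint8 image; non-zero pixels belong to targets.
          n, labels, stats, centroids = cv2.connectedComponentsWithStats(
              binary, connectivity=8)
          # Component 0 is the background; skip it.
          return [(cx, cy, stats[i, cv2.CC_STAT_AREA])
                  for i, (cx, cy) in enumerate(centroids) if i > 0]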

  5. Opinion Mining for Educational Video Lectures.

    PubMed

    Kravvaris, Dimitrios; Kermanidis, Katia Lida

    2017-01-01

    The search for relevant educational videos is a time-consuming process for users. Furthermore, the increasing demand for educational videos intensifies the problem and forces users to exploit whatever information is offered by the hosting web pages to choose the most appropriate video. This research focuses on classifying user views, based on the comments on educational videos, into positive or negative ones. The aim is to give users a picture of the positive and negative comments that have been recorded, so as to provide a qualitative view of the final selection at their disposal. The paper's innovation is the automatic identification of the most important words in the verbal content of the video lectures and the filtering of the comments based on them, thus limiting the comments to those that have a substantial semantic connection with the video content.
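
    A minimal scikit-learn sketch of the idea: rank the transcript's terms by TF-IDF weight, keep only comments sharing those key terms, and score polarity with a toy lexicon. The word lists, top_k, and the use of TF-IDF are illustrative assumptions, not the paper's exact method.

      from sklearn.feature_extraction.text import TfidfVectorizer

      # Toy polarity lexicon; a real system would use a trained sentiment model.
      POSITIVE = {"great", "clear", "helpful", "excellent", "thanks"}
      NEGATIVE = {"boring", "confusing", "wrong", "bad", "waste"}

      def filter_and_classify(transcript, comments, top_k=20):
          # Rank the lecture's own words by TF-IDF weight.
          vec = TfidfVectorizer(stop_words="english")
          weights = vec.fit_transform([transcript]).toarray()[0]
          terms = vec.get_feature_names_out()
          key_terms = {terms[i] for i in weights.argsort()[::-1][:top_k]}
          results = []
          for comment in comments:
              words = set(comment.lower().split())
              if words & key_terms:  # keep comments tied to the video content
                  polarity = len(words & POSITIVE) - len(words & NEGATIVE)
                  results.append((comment,
                                  "positive" if polarity >= 0 else "negative"))
          return results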

  6. A generic flexible and robust approach for intelligent real-time video-surveillance systems

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit

    2004-05-01

    In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system; a previous version of the system was presented in [1]. The goal of these advanced tools is to help operators by detecting events of interest in visual scenes, highlighting alarms, and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE 1394) and which can compress (MPEG-4), store, and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking, and interpretation. The design of the architecture is optimised to play back, display, and process video flows in an efficient way for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and IP networking, and relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology, under the control of a TCP-based command network (e.g., for bandwidth occupation control). We report some results and show the potential use of such a flexible system in third-generation video surveillance systems, illustrating its interest with a real case study of indoor surveillance.

  7. Video sensor architecture for surveillance applications.

    PubMed

    Sánchez, Jordi; Benet, Ginés; Simó, José E

    2012-01-01

    This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%.

  8. Video Sensor Architecture for Surveillance Applications

    PubMed Central

    Sánchez, Jordi; Benet, Ginés; Simó, José E.

    2012-01-01

    This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%. PMID:22438723

  9. A discriminative structural similarity measure and its application to video-volume registration for endoscope three-dimensional motion tracking.

    PubMed

    Luo, Xiongbiao; Mori, Kensaku

    2014-06-01

    Endoscope 3-D motion tracking, which seeks to synchronize pre- and intra-operative images in endoscopic interventions, is usually performed as video-volume registration that optimizes the similarity between endoscopic video and pre-operative images. The tracking performance, in turn, depends significantly on whether a similarity measure can successfully characterize the difference between video sequences and volume-rendered images driven by the pre-operative data. The paper proposes a discriminative structural similarity measure, which uses the degradation of structural information and takes image correlation or structure, luminance, and contrast into consideration, to boost video-volume registration. Applied to endoscope tracking, the proposed measure was demonstrated to be more accurate and robust than several available similarity measures, e.g., local normalized cross-correlation, normalized mutual information, modified mean square error, and normalized sum of squared differences. In a clinical data evaluation, the tracking error was reduced significantly, from at least 14.6 mm to 4.5 mm, and processing ran at more than 30 frames per second on a graphics processing unit.
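
    For reference, a NumPy sketch of a global structural-similarity measure built from the luminance, contrast, and structure terms mentioned above. The constants are the common SSIM defaults for 8-bit data; the paper's discriminative variant weights these components differently and is not reproduced here.

      import numpy as np

      def structural_similarity(a, b, c1=6.5025, c2=58.5225):
          # a, b: float grayscale patches (e.g. video frame vs. volume rendering).
          mu_a, mu_b = a.mean(), b.mean()
          var_a, var_b = a.var(), b.var()
          cov = ((a - mu_a) * (b - mu_b)).mean()
          # Luminance term and a combined contrast/structure term.
          luminance = (2 * mu_a * mu_b + c1) / (mu_a ** 2 + mu_b ** 2 + c1)
          contrast_structure = (2 * cov + c2) / (var_a + var_b + c2)
          return luminance * contrast_structure

    Production SSIM implementations compute this per local window and average the resulting map, rather than over the whole image at once as this sketch does.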

  10. Multi-star processing and gyro filtering for the video inertial pointing system

    NASA Technical Reports Server (NTRS)

    Murphy, J. P.

    1976-01-01

    The video inertial pointing (VIP) system is being developed to satisfy the acquisition and pointing requirements of astronomical telescopes. The VIP system uses a single video sensor to provide star position information that can be used to generate three-axis pointing error signals (multi-star processing) and for input to a cathode ray tube (CRT) display of the star field. The pointing error signals are used to update the telescope's gyro stabilization system (gyro filtering). The CRT display facilitates target acquisition and positioning of the telescope by a remote operator. Linearized small-angle equations are used for the multi-star processing, and consideration of error performance and singularities leads to star-pair location restrictions and equation selection criteria. A discrete steady-state Kalman filter that uses the integrated gyro outputs is developed and analyzed. The filter includes unit time delays representing the asynchronous operations of the VIP microprocessor and video sensor. A digital simulation of a typical gyro-stabilized gimbal is developed and used to validate the approach to the gyro filtering.

  11. High-performance electronic image stabilisation for shift and rotation correction

    NASA Astrophysics Data System (ADS)

    Parker, Steve C. J.; Hickman, D. L.; Wu, F.

    2014-06-01

    A novel low size, weight and power (SWaP) video stabiliser called HALO™ is presented that uses a SoC to combine the high processing bandwidth of an FPGA, with the signal processing flexibility of a CPU. An image based architecture is presented that can adapt the tiling of frames to cope with changing scene dynamics. A real-time implementation is then discussed that can generate several hundred optical flow vectors per video frame, to accurately calculate the unwanted rigid body translation and rotation of camera shake. The performance of the HALO™ stabiliser is comprehensively benchmarked against the respected Deshaker 3.0 off-line stabiliser plugin to VirtualDub. Eight different videos are used for benchmarking, simulating: battlefield, surveillance, security and low-level flight applications in both visible and IR wavebands. The results show that HALO™ rivals the performance of Deshaker within its operating envelope. Furthermore, HALO™ may be easily reconfigured to adapt to changing operating conditions or requirements; and can be used to host other video processing functionality like image distortion correction, fusion and contrast enhancement.
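
    A minimal OpenCV sketch of the flow-based shake estimation described above: track sparse corners between frames, fit a rotation-plus-translation model with RANSAC, and warp the frame to cancel it. The feature counts and the partial-affine model are illustrative choices, not HALO™ internals, and the correction signs are simplified.

      import cv2
      import numpy as np

      def estimate_shake(prev_gray, curr_gray):
          # Sparse optical flow between consecutive grayscale frames.
          pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                        qualityLevel=0.01, minDistance=12)
          nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
          good = status.ravel() == 1
          # Partial affine = rotation + translation + uniform scale.
          M, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good],
                                             method=cv2.RANSAC)
          dx, dy = M[0, 2], M[1, 2]
          angle = np.arctan2(M[1, 0], M[0, 0])
          return dx, dy, angle

      def stabilise(frame, dx, dy, angle):
          # Apply the inverse rigid transform to cancel the measured shake.
          h, w = frame.shape[:2]
          M = cv2.getRotationMatrix2D((w / 2, h / 2), np.degrees(angle), 1.0)
          M[0, 2] -= dx
          M[1, 2] -= dy
          return cv2.warpAffine(frame, M, (w, h))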

  12. Video stereo-laparoscopy system

    NASA Astrophysics Data System (ADS)

    Xiang, Yang; Hu, Jiasheng; Jiang, Huilin

    2006-01-01

    Minimally invasive surgery (MIS) has contributed significantly to patient care by reducing the morbidity associated with more invasive procedures, and MIS procedures have become standard treatment for gallbladder disease and some abdominal malignancies. The imaging system plays a major role in this evolving field. The image must have good resolution and large magnification and, in particular, must provide depth cues while remaining flicker-free and suitably bright. A video stereo-laparoscopy system can meet these demands. This paper introduces a 3D video laparoscope with the following characteristics: field frequency 100 Hz, depth range 150 mm, resolution 10 lp/mm. The working principle of the system is introduced in detail, and the optical system and the time-division stereo-display system are described briefly. The system images onto a CCD chip through a focusing lens; the optical signal is converted into a video signal, digitized by the A/D stage of the image processing system, and then displayed as alternating polarized images on the monitor screen through liquid crystal shutters. Wearing polarized glasses, doctors can view a flicker-free 3D image of the tissue or organ. The 3D video laparoscope system has been applied in the MIS field and praised by doctors. Compared with the traditional 2D video laparoscopy system, it offers advantages such as reduced surgery time, fewer intraoperative problems, and shorter training time.

  13. Privacy-protecting video surveillance

    NASA Astrophysics Data System (ADS)

    Wickramasuriya, Jehan; Alhazzazi, Mohanned; Datt, Mahesh; Mehrotra, Sharad; Venkatasubramanian, Nalini

    2005-02-01

    Forms of surveillance are very quickly becoming an integral part of crime control policy, crisis management, social control theory and community consciousness. In turn, it has been used as a simple and effective solution to many of these problems. However, privacy-related concerns have been expressed over the development and deployment of this technology. Used properly, video cameras help expose wrongdoing but typically come at the cost of privacy to those not involved in any maleficent activity. This work describes the design and implementation of a real-time, privacy-protecting video surveillance infrastructure that fuses additional sensor information (e.g. Radio-frequency Identification) with video streams and an access control framework in order to make decisions about how and when to display the individuals under surveillance. This video surveillance system is a particular instance of a more general paradigm of privacy-protecting data collection. In this paper we describe in detail the video processing techniques used in order to achieve real-time tracking of users in pervasive spaces while utilizing the additional sensor data provided by various instrumented sensors. In particular, we discuss background modeling techniques, object tracking and implementation techniques that pertain to the overall development of this system.

  14. A MPEG-4 encoder based on TMS320C6416

    NASA Astrophysics Data System (ADS)

    Li, Gui-ju; Liu, Wei-ning

    2013-08-01

    Engineering applications and products need real-time video encoding on DSPs, but the high computational complexity and huge amount of data require a system with high data throughput. In this paper, a real-time MPEG-4 video encoder is designed on the TMS320C6416 platform. The kernel is the TMS320C6416T DSP, with an FPGA chip organizing and managing the video data and controlling the flow of input and output; the encoded stream is output over the synchronous serial port. The system has a clock frequency of 1 GHz and delivers up to 8000 MIPS at full speed. Because an MPEG-4 encoder ported directly to the DSP platform has low coding efficiency, the program structure, data structures, and algorithms must be reworked around the characteristics of the TMS320C6416T. First, the image storage architecture is designed by balancing computation cost, storage cost, and EDMA read time: several buffers are opened in memory, each caching 16 lines of the video data to be encoded, the reconstructed image, and the reference image including the search range. Using the DSP's variable-alignment mode, the structure definitions are modified, and look-up tables occupying large space are replaced with directly computed arrays to save memory. After this restructuring, the program code, all variables, the buffers, and the interpolated image including the search range can be placed in internal memory. Next, the time-consuming modules and frequently called functions are rewritten in TMS320C6416T parallel assembly language to increase running speed. In addition, the motion estimation algorithm is improved by using a cross-hexagon search, which markedly increases search speed (a sketch of hexagon-pattern search follows). Finally, the execution time, signal-to-noise ratio, and compression ratio are reported for a real-time image acquisition sequence. The experimental results show that the designed encoder achieves real-time encoding of 768×576, 25 frames-per-second grayscale video at a code rate of 1.5 Mbits per second.
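
    A NumPy sketch of hexagon-pattern block matching, the family the paper's cross-hexagon search belongs to (the initial cross stage is omitted here); the block size, patterns, and iteration cap are illustrative.

      import numpy as np

      # Large hexagon for coarse moves, small pattern for the final refinement.
      LARGE_HEX = [(0, 0), (-2, 0), (2, 0), (-1, 2), (1, 2), (-1, -2), (1, -2)]
      SMALL_HEX = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

      def hex_search(ref, cur, bx, by, bs=16, max_iter=16):
          # Return the motion vector (mx, my) and SAD for the block at (bx, by).
          block = cur[by:by + bs, bx:bx + bs].astype(np.int32)
          h, w = ref.shape

          def cost(mx, my):
              x, y = bx + mx, by + my
              if 0 <= x <= w - bs and 0 <= y <= h - bs:
                  return int(np.abs(block - ref[y:y + bs, x:x + bs]
                                    .astype(np.int32)).sum())
              return float("inf")  # candidate falls outside the reference frame

          mx = my = 0
          for _ in range(max_iter):
              # Evaluate the large hexagon around the centre; move to the best.
              c, nx, ny = min((cost(mx + dx, my + dy), mx + dx, my + dy)
                              for dx, dy in LARGE_HEX)
              if (nx, ny) == (mx, my):
                  break  # converged; switch to the fine pattern
              mx, my = nx, ny
          c, mx, my = min((cost(mx + dx, my + dy), mx + dx, my + dy)
                          for dx, dy in SMALL_HEX)
          return mx, my, c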

  15. Affordable multisensor digital video architecture for 360° situational awareness displays

    NASA Astrophysics Data System (ADS)

    Scheiner, Steven P.; Khan, Dina A.; Marecki, Alexander L.; Berman, David A.; Carberry, Dana

    2011-06-01

    One of the major challenges facing today's military ground combat vehicle operations is the ability to achieve and maintain full-spectrum situational awareness while under armor (i.e., closed hatch). The ability to perform basic tasks such as driving, maintaining local situational awareness, surveillance, and targeting therefore requires that a high-density array of real-time information be processed, distributed, and presented to the vehicle operators and crew in near real time (i.e., with low latency). Advances in display and sensor technologies are providing unprecedented opportunities to supply large amounts of high-fidelity imagery and video to the vehicle operators and crew in real time. To fully realize the advantages of these emerging display and sensor technologies, an underlying digital architecture must be developed that is capable of processing these large amounts of video and data from separate sensor systems and distributing them simultaneously within the vehicle to multiple vehicle operators and crew members. This paper examines the systems and software engineering efforts required to overcome these challenges and addresses the development of an affordable, integrated digital video architecture. The approaches evaluated will give both current and future ground combat vehicle systems the flexibility to readily adopt emerging display and sensor technologies, while optimizing the Warfighter Machine Interface (WMI), minimizing lifecycle costs, and improving the survivability of the vehicle crew working in closed-hatch systems during complex ground combat operations.

  16. Gas leak detection in infrared video with background modeling

    NASA Astrophysics Data System (ADS)

    Zeng, Xiaoxia; Huang, Likun

    2018-03-01

    Background modeling plays an important role in gas detection based on infrared video. The ViBe algorithm has been a widely used background modeling algorithm in recent years. However, its processing speed sometimes cannot meet the requirements of real-time detection applications. Therefore, building on the traditional ViBe algorithm, we propose a fast foreground model and refine the results by combining a connected-domain algorithm and a nine-spaces algorithm in the subsequent processing steps. Experiments show the effectiveness of the proposed method.
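
    ViBe's details are not reproduced in the abstract; the toy model below sketches only the general sample-based idea it builds on (per-pixel sample sets, match counting, random conservative update). All parameter values are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    class SampleBackgroundModel:
        """Toy sample-based background model in the spirit of ViBe (simplified)."""

        def __init__(self, first_frame, n_samples=20, radius=20, min_matches=2):
            base = first_frame.astype(np.int16)
            jitter = rng.integers(-10, 10, (n_samples,) + base.shape)
            self.samples = base[None, :, :] + jitter   # jittered initial samples
            self.radius, self.min_matches = radius, min_matches

        def apply(self, frame):
            dist = np.abs(self.samples - frame.astype(np.int16))
            matches = (dist < self.radius).sum(axis=0)
            fg = matches < self.min_matches            # too few matches: foreground
            bg = ~fg
            k = rng.integers(0, self.samples.shape[0]) # random conservative update
            self.samples[k][bg] = frame.astype(np.int16)[bg]
            return fg.astype(np.uint8) * 255
    ```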

  17. Three-dimensional video imaging of drainage and imbibition processes in model porous medium

    NASA Astrophysics Data System (ADS)

    Sharma, Prerna; Aswathi, P.; Sane, Anit; Ghosh, Shankar; Bhattacharya, Sabyasachi

    2011-03-01

    We report experimental results in which we performed three-dimensional video imaging of the displacement of an oil phase by an aqueous phase, and vice versa, in a model porous medium. The stability of the oil-water interface was studied as a function of the viscosity ratio of the phases, the wettability of the porous medium, and the variation in the pore size distribution. Our experiments capture pore-scale information about the displacement process and its role in determining the long-time structure of the interface.

  18. Breakup phenomena of a coaxial jet in the non-dilute region using real-time X-ray radiography

    NASA Astrophysics Data System (ADS)

    Cheung, F. B.; Kuo, K. K.; Woodward, R. D.; Garner, K. N.

    1990-07-01

    An innovative approach to the investigation of liquid jet breakup processes in the near-injector region has been developed to overcome the experimental difficulties associated with optically opaque, dense sprays. Real-time X-ray radiography (RTR) has been employed to observe the inner structure and breakup phenomena of coaxial jets. In the atomizing regime, droplets much smaller than the exit diameter are formed beginning essentially at the injector exit. Through the use of RTR, the instantaneous contour of the liquid core was visualized. Experimental results consist of controlled-exposure digital video images of the liquid jet breakup process. Time-averaged video images have also been recorded for comparison. A digital image processing system is used to analyze the recorded images by creating radiance level distributions of the jet. A rudimentary method for deducing intact-liquid-core length has been suggested. The technique of real-time X-ray radiography has been shown to be a viable approach to the study of the breakup processes of high-speed liquid jets.

  19. Understanding viral video dynamics through an epidemic modelling approach

    NASA Astrophysics Data System (ADS)

    Sachak-Patwa, Rahil; Fadai, Nabil T.; Van Gorder, Robert A.

    2018-07-01

    Motivated by the hypothesis that the spread of viral videos is analogous to the spread of a disease epidemic, we formulate a novel susceptible-exposed-infected-recovered-susceptible (SEIRS) delay differential equation epidemic model to describe the popularity evolution of viral videos. Our models incorporate time delays in order to describe accurately the virtual contact process between individuals and the temporary immunity of individuals to videos after they have grown tired of watching them. We validate our models by fitting model parameters to viewing data from YouTube music videos, demonstrating that the model solutions accurately reproduce the real behaviour seen in these data. We use an SEIR model to describe the initial growth and decline of daily views, and an SEIRS model to describe the long-term behaviour of the popularity of music videos. We also analyse the decay rates in the daily views of videos, determining whether they follow a power law or an exponential distribution. Although we focus on viral videos, the modelling approach may be used to understand dynamics emergent from other areas of science which aim to describe consumer behaviour.
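
    As a hedged illustration of how such a delayed SEIRS system can be integrated numerically, the sketch below uses a fixed-step Euler scheme with a history buffer for the immunity delay. The parameter values are illustrative, not the fitted values from the paper.

    ```python
    import numpy as np

    # Toy SEIRS integration with a discrete immunity delay tau: individuals who
    # recovered tau days ago become susceptible again. Parameters are illustrative.
    beta, sigma, gamma, tau = 0.5, 0.3, 0.2, 60.0  # contact, incubation, recovery, immunity
    dt, days = 0.1, 365
    n = int(days / dt)
    lag = int(tau / dt)

    S, E, I, R = (np.zeros(n) for _ in range(4))
    S[0], E[0], I[0], R[0] = 0.99, 0.01, 0.0, 0.0

    for k in range(n - 1):
        # rate at which immunity expires: the recovery inflow, delayed by tau
        expiring = gamma * I[k - lag] if k >= lag else 0.0
        S[k + 1] = S[k] + dt * (-beta * S[k] * I[k] + expiring)
        E[k + 1] = E[k] + dt * (beta * S[k] * I[k] - sigma * E[k])
        I[k + 1] = I[k] + dt * (sigma * E[k] - gamma * I[k])
        R[k + 1] = R[k] + dt * (gamma * I[k] - expiring)
    # daily views can then be proxied by the new-infection rate beta * S * I
    ```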

  20. A Macintosh-Based Scientific Images Video Analysis System

    NASA Technical Reports Server (NTRS)

    Groleau, Nicolas; Friedland, Peter (Technical Monitor)

    1994-01-01

    A set of experiments was designed at MIT's Man-Vehicle Laboratory to evaluate the effects of zero gravity on the human orientation system. During many of these experiments, the movements of the eyes are recorded on high-quality video cassettes. The images must be analyzed off-line to calculate the position of the eyes at every moment in time. To this end, I implemented a simple, inexpensive computerized system which measures the angle of rotation of the eye from digitized video images. The system is implemented on a desktop Macintosh computer, processes one play-back frame per second, and exhibits adequate levels of accuracy and precision. The system uses LabVIEW, a digital output board, and a video input board to control a VCR, digitize video images, analyze them, and provide a user-friendly interface for the various phases of the process. The system uses the Concept Vi LabVIEW library (Graftek's Image, Meudon la Foret, France) for image grabbing and display as well as translation to and from LabVIEW arrays. Graftek's software layer drives an Image Grabber board from Neotech (Eastleigh, United Kingdom). A Colour Adapter box from Neotech provides adequate video signal synchronization. The system also requires a LabVIEW-driven digital output board (MacADIOS II from GW Instruments, Cambridge, MA) controlling a slightly modified VCR remote control used mainly to advance the video tape frame by frame.

  1. Element Genesis - Solving the Mystery (Video Presentation)

    NASA Astrophysics Data System (ADS)

    Mochizuki, Yuko

    2001-10-01

    Our institute (RIKEN) produced a video on nucleosynthesis; its new English version is presented here. Y. M., I. Tanihata, Y. Yano, and R. Boyd are the science editors. The video is 30 minutes long. Its primary characteristic is the use of a number of 2-D and 3-D visualizations and animations based on an updated understanding of nuclear physics and astrophysics. One of the emphasized points is that microscopic physics (i.e., nuclear physics) and macroscopic physics (i.e., astrophysics) are strongly connected. The video explains the chart of the nuclides, nuclear burning in the sun, big-bang nucleosynthesis, stellar nucleosynthesis, the ``beta-stability valley", the s-process, the r-process, the production of an RI beam, etc., and professors D. Arnett, T. Kajino, K. Langanke, K. Sato, C. Sneden, I. Tanihata, and F.-K. Thielemann appear as interviewees. Our primary target is college freshmen. We hope that this video will be useful for education in both astrophysics and nuclear physics at universities and even at high schools. Our institute is accordingly developing a distribution system for the video, and it will be available soon at cost price (please visit our web site for details: http://www.rarf.riken.go.jp/video). The Japanese version was awarded the prize of the Minister of Education, Culture, Sports, Science, and Technology of Japan in 2001.

  2. Deriving video content type from HEVC bitstream semantics

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio R.

    2014-05-01

    As network service providers seek to improve customer satisfaction and retention levels, they are increasingly moving from traditional quality of service (QoS) driven delivery models to customer-centred quality of experience (QoE) delivery models. QoS models consider only metrics derived from the network; QoE models also consider metrics derived from within the video sequence itself. Various spatial and temporal characteristics of a video sequence have been proposed, both individually and in combination, to derive methods of classifying video content either on a continuous scale or as a set of discrete classes. QoE models can be divided into three broad categories: full reference, reduced reference, and no-reference models. Because the original video must be available at the client for comparison, full reference metrics are of limited practical value in adaptive real-time video applications. Reduced reference metrics often require metadata to be transmitted with the bitstream, while no-reference metrics typically operate in the decompressed domain at the client side and require significant processing to extract spatial and temporal features. This paper proposes a heuristic, no-reference approach to video content classification which is specific to HEVC encoded bitstreams. The HEVC encoder already makes use of spatial characteristics to determine the partitioning of coding units and of temporal characteristics to determine the splitting of prediction units. We derive a function which approximates the spatio-temporal characteristics of the video sequence using the weighted average of the depth at which the coding unit quadtree is split and the weighted average of the prediction mode decisions made by the encoder to estimate spatial and temporal characteristics, respectively. Since the video content type of a sequence is determined from high-level information parsed from the video stream, spatio-temporal characteristics are identified without the need for full decoding and can be used in a timely manner to aid decision making in QoE-oriented adaptive real-time streaming.
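
    A minimal sketch of the kind of weighted-average features described: given per-CU quadtree depths, prediction-mode flags, and pixel areas parsed from the bitstream, two scalar activity estimates are computed. The function names and weighting are assumptions, not the paper's exact formulation.

    ```python
    # cu_depths: quadtree split depth (0-3) of each coding unit in a frame
    # cu_areas: pixel area of each coding unit (the weights)
    # intra_flags: True where the encoder chose intra prediction

    def spatial_activity(cu_depths, cu_areas):
        """Area-weighted mean CU depth: deeper splits suggest more spatial detail."""
        total = sum(cu_areas)
        return sum(d * a for d, a in zip(cu_depths, cu_areas)) / total

    def temporal_activity(intra_flags, cu_areas):
        """Area fraction coded intra: more intra area suggests more temporal change."""
        total = sum(cu_areas)
        return sum(a for f, a in zip(intra_flags, cu_areas) if f) / total
    ```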

  3. Healthcare4VideoStorm: Making Smart Decisions Based on Storm Metrics.

    PubMed

    Zhang, Weishan; Duan, Pengcheng; Chen, Xiufeng; Lu, Qinghua

    2016-04-23

    Storm-based stream processing is widely used for real-time large-scale distributed processing. Knowing the run-time status and ensuring performance is critical to providing the expected dependability for some applications, e.g., continuous video processing for security surveillance. Existing scheduling strategies are too coarse-grained to achieve good performance, and they mainly consider network resources while ignoring computing resources. In this paper, we propose Healthcare4Storm, a framework that derives Storm insights from Storm metrics to gain knowledge of the health status of an application, ending up with smart scheduling decisions. It takes into account both network and computing resources and conducts scheduling at a fine-grained level using tuples instead of topologies. A comprehensive evaluation shows that the proposed framework performs well and can improve the dependability of Storm-based applications.

  4. Video image processing to create a speed sensor

    DOT National Transportation Integrated Search

    1999-11-01

    Image processing has been applied to traffic analysis in recent years, with different goals. In this report, a new approach is presented for extracting vehicular speed information, given a sequence of real-time traffic images. We extract moving edges ...

  5. Efficient implementation of neural network deinterlacing

    NASA Astrophysics Data System (ADS)

    Seo, Guiwon; Choi, Hyunsoo; Lee, Chulhee

    2009-02-01

    Interlaced scanning has been widely used in most broadcasting systems. However, it produces undesirable artifacts such as jagged patterns, flickering, and line twitter. Moreover, most recent TV monitors utilize flat panel display technologies such as LCD or PDP, and these monitors require progressive formats. Consequently, the conversion of interlaced video into progressive video is required in many applications, and a number of deinterlacing methods have been proposed. Recently, deinterlacing methods based on neural networks have been proposed with good results. On the other hand, with high-resolution video content such as HDTV, the amount of video data to be processed is very large, so processing time and hardware complexity become important issues. In this paper, we propose an efficient implementation of neural network deinterlacing using polynomial approximation of the sigmoid function. Experimental results show that these approximations provide equivalent performance with a considerable reduction in complexity. This implementation of neural network deinterlacing can be efficiently incorporated into hardware implementations.
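
    The paper's exact polynomial is not given here; the sketch below shows the general technique of fitting a low-order polynomial to the sigmoid over the input range the network actually sees, then evaluating the polynomial in place of the exponential. Range and degree are illustrative assumptions.

    ```python
    import numpy as np

    # Fit a degree-7 polynomial to the sigmoid over an assumed input range [-8, 8].
    x = np.linspace(-8.0, 8.0, 1000)
    true_sigmoid = 1.0 / (1.0 + np.exp(-x))
    coeffs = np.polyfit(x, true_sigmoid, deg=7)

    def sigmoid_poly(v):
        """Polynomial stand-in for the sigmoid; cheap on fixed-point hardware."""
        return np.clip(np.polyval(coeffs, v), 0.0, 1.0)

    err = np.max(np.abs(sigmoid_poly(x) - true_sigmoid))
    print(f"max approximation error on [-8, 8]: {err:.4f}")
    ```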

  6. Very low cost real time histogram-based contrast enhancer utilizing fixed-point DSP processing

    NASA Astrophysics Data System (ADS)

    McCaffrey, Nathaniel J.; Pantuso, Francis P.

    1998-03-01

    A real-time contrast enhancement system utilizing histogram-based algorithms has been developed to operate on standard composite video signals. This low-cost DSP-based system is designed with fixed-point algorithms and an off-chip look-up table (LUT) to reduce cost considerably over other contemporary approaches. This paper describes several real-time contrast enhancing systems advanced at the Sarnoff Corporation for high-speed visible and infrared cameras. The fixed-point enhancer was derived from these high-performance cameras. The enhancer digitizes analog video and spatially subsamples the stream to qualify the scene's luminance. Simultaneously, the video is streamed through a LUT that has been programmed with the previous calculation. Reducing division operations by subsampling reduces calculation cycles and also allows the processor to be used with cameras of nominal resolutions. All values are written to the LUT during blanking so no frames are lost. The enhancer measures 13 cm × 6.4 cm × 3.2 cm, operates off 9 VAC, and consumes 12 W. This processor is small and inexpensive enough to be mounted with field-deployed security cameras and can be used for surveillance, video forensics, and real-time medical imaging.
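
    A minimal sketch of the subsample-then-LUT idea, assuming a standard histogram-equalization mapping (the Sarnoff system's precise mapping is not given in the abstract):

    ```python
    import numpy as np

    def build_equalization_lut(frame, step=4):
        """Histogram-equalization LUT computed on a spatially subsampled frame,
        mirroring the abstract's cost-saving trick. `step` is illustrative."""
        sub = frame[::step, ::step]                  # spatial subsampling
        hist = np.bincount(sub.ravel(), minlength=256)
        cdf = np.cumsum(hist).astype(np.float64)
        cdf /= cdf[-1]
        return np.round(cdf * 255).astype(np.uint8)

    # Per frame: compute the LUT from frame k-1 (during blanking in the real
    # hardware), then apply it to frame k with a single table lookup:
    # enhanced = lut[frame]
    ```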

  7. Video-based real-time on-street parking occupancy detection system

    NASA Astrophysics Data System (ADS)

    Bulan, Orhan; Loce, Robert P.; Wu, Wencheng; Wang, YaoRong; Bernal, Edgar A.; Fan, Zhigang

    2013-10-01

    Urban parking management is receiving significant attention due to its potential to reduce traffic congestion, fuel consumption, and emissions. Real-time parking occupancy detection is a critical component of on-street parking management systems, where occupancy information is relayed to drivers via smart phone apps, radio, Internet, on-road signs, or global positioning system auxiliary signals. Video-based parking occupancy detection systems can provide a cost-effective solution to the sensing task while providing additional functionality for traffic law enforcement and surveillance. We present a video-based on-street parking occupancy detection system that can operate in real time. Our system accounts for the inherent challenges that exist in on-street parking settings, including illumination changes, rain, shadows, occlusions, and camera motion. Our method utilizes several components from video processing and computer vision for motion detection, background subtraction, and vehicle detection. We also present three traffic law enforcement applications: parking angle violation detection, parking boundary violation detection, and exclusion zone violation detection, which can be integrated into the parking occupancy cameras as a value-added option. Our experimental results show that the proposed parking occupancy detection method performs in real-time at 5 frames/s and achieves better than 90% detection accuracy across several days of videos captured in a busy street block under various weather conditions such as sunny, cloudy, and rainy, among others.

  8. Non-contact Real-time heart rate measurements based on high speed circuit technology research

    NASA Astrophysics Data System (ADS)

    Wu, Jizhe; Liu, Xiaohua; Kong, Lingqin; Shi, Cong; Liu, Ming; Hui, Mei; Dong, Liquan; Zhao, Yuejin

    2015-08-01

    In recent years, the morbidity and mortality of cardiovascular and cerebrovascular diseases, which greatly threaten human health, have increased year by year. Heart rate is an important index for these diseases. To address this situation, this paper puts forward a non-contact heart rate measurement that is simple in structure, easy to operate, and suitable for daily monitoring of large populations. In this method, imaging equipment records video of sensitive skin areas: changes in blood volume cause changes in reflected light intensity, which appear in the average grayscale of the image. We record video of the face, including the sensitive regions of interest (ROI), and use a high-speed processing circuit to save the video in AVI format to memory. After processing the whole video over a period of time, we plot the curve of each color channel with frame number as the horizontal axis and obtain the heart rate from the curve. We use independent component analysis (ICA) to suppress noise from motion interference, achieving accurate extraction of the heart rate signal while the subject is moving. We design an algorithm, based on the high-speed processing circuit, for face recognition and tracking to obtain the face region automatically. We average the grayscale of the recognized region to obtain three RGB curves and extract a clearer pulse wave curve through independent component analysis, from which we obtain the heart rate under motion. Finally, comparing our system with a fingertip pulse oximeter shows that the system achieves accurate measurement, with an error of less than 3 beats per minute.
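
    A hedged sketch of the RGB-mean + ICA pipeline the abstract outlines: per-frame channel means over the face ROI are unmixed with FastICA, and the component with the strongest spectral peak in a plausible heart-rate band is read off. The band limits and helper names are assumptions, not the authors' values.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def heart_rate_bpm(rgb_means, fps):
        """rgb_means: (n_frames, 3) array of mean R, G, B over the face ROI."""
        x = rgb_means - rgb_means.mean(axis=0)
        sources = FastICA(n_components=3, random_state=0).fit_transform(x)
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
        band = (freqs > 0.75) & (freqs < 4.0)        # 45-240 bpm plausibility band
        best_bpm, best_power = 0.0, 0.0
        for s in sources.T:                          # pick the most periodic component
            spectrum = np.abs(np.fft.rfft(s)) ** 2
            peak = np.argmax(spectrum * band)
            if spectrum[peak] > best_power:
                best_power, best_bpm = spectrum[peak], freqs[peak] * 60.0
        return best_bpm
    ```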

  9. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.

  10. Robust media processing on programmable power-constrained systems

    NASA Astrophysics Data System (ADS)

    McVeigh, Jeff

    2005-03-01

    To achieve consumer-level quality, media systems must process continuous streams of audio and video data while maintaining exacting tolerances on sampling rate, jitter, synchronization, and latency. While it is relatively straightforward to design fixed-function hardware implementations to satisfy worst-case conditions, there is a growing trend to utilize programmable multi-tasking solutions for media applications. The flexibility of these systems enables support for multiple current and future media formats, which can reduce design costs and time-to-market. This paper provides practical engineering solutions to achieve robust media processing on such systems, with specific attention given to power-constrained platforms. The techniques covered in this article utilize the fundamental concepts of algorithm and software optimization, software/hardware partitioning, stream buffering, hierarchical prioritization, and system resource and power management. A novel enhancement to dynamically adjust processor voltage and frequency based on buffer fullness to reduce system power consumption is examined in detail. The application of these techniques is provided in a case study of a portable video player implementation based on a general-purpose processor running a non real-time operating system that achieves robust playback of synchronized H.264 video and MP3 audio from local storage and streaming over 802.11.

  11. Droplet morphometry and velocimetry (DMV): a video processing software for time-resolved, label-free tracking of droplet parameters.

    PubMed

    Basu, Amar S

    2013-05-21

    Emerging assays in droplet microfluidics require the measurement of parameters such as drop size, velocity, trajectory, shape deformation, fluorescence intensity, and others. While micro particle image velocimetry (μPIV) and related techniques are suitable for measuring flow using tracer particles, no tool exists for tracking droplets at the granularity of a single entity. This paper presents droplet morphometry and velocimetry (DMV), digital video processing software for time-resolved droplet analysis. Droplets are identified through a series of image processing steps which operate on transparent, translucent, fluorescent, or opaque droplets. The steps include background image generation, background subtraction, edge detection, small object removal, morphological close and fill, and shape discrimination. A frame correlation step then links droplets spanning multiple frames via a nearest neighbor search with user-defined matching criteria. Each step can be individually tuned for maximum compatibility. For each droplet found, DMV provides a time history of 20 different parameters, including trajectory, velocity, area, dimensions, shape deformation, orientation, nearest neighbor spacing, and pixel statistics. The data can be reported via scatter plots, histograms, and tables at the granularity of individual droplets or as statistics accrued over the population. We present several case studies from industry and academic labs, including the measurement of 1) size distributions and flow perturbations in a drop generator, 2) size distributions and mixing rates in drop splitting/merging devices, 3) efficiency of single cell encapsulation devices, 4) position tracking in electrowetting operations, 5) chemical concentrations in a serial drop dilutor, 6) drop sorting efficiency of a tensiophoresis device, 7) plug length and orientation of nonspherical plugs in a serpentine channel, and 8) high-throughput tracking of >250 drops in a reinjection system. Performance metrics show that the highest accuracy and precision are obtained when the video resolution is >300 pixels per drop. Analysis time increases proportionally with video resolution. The current version of the software provides throughputs of 2-30 fps, suggesting the potential for real-time analysis.
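
    Of the DMV pipeline steps, the frame-correlation stage is the most algorithmic; a minimal sketch of greedy nearest-neighbor linking between centroid sets in successive frames follows. The matching radius is an illustrative stand-in for DMV's user-defined criteria.

    ```python
    import numpy as np

    def link_droplets(prev, curr, max_dist=25.0):
        """Greedy nearest-neighbor frame correlation (sketch). prev and curr are
        (n, 2) arrays of droplet centroids in successive frames; returns (i, j)
        index pairs whose separation is within the matching radius."""
        links, taken = [], set()
        for i, p in enumerate(prev):
            d = np.linalg.norm(curr - p, axis=1)
            for j in np.argsort(d):                 # closest candidates first
                if d[j] > max_dist:
                    break                            # no candidate close enough
                if int(j) not in taken:
                    links.append((i, int(j)))
                    taken.add(int(j))
                    break
        return links
    ```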

  12. Video bioinformatics analysis of human embryonic stem cell colony growth.

    PubMed

    Lin, Sabrina; Fonteno, Shawn; Satish, Shruthi; Bhanu, Bir; Talbot, Prue

    2010-05-20

    Because video data are complex and comprise many images, mining information from video material is difficult to do without the aid of computer software. Video bioinformatics is a powerful quantitative approach for extracting spatio-temporal data from video images, using computer software to perform data mining and analysis. In this article, we introduce a video bioinformatics method for quantifying the growth of human embryonic stem cells (hESC) by analyzing time-lapse videos collected in a Nikon BioStation CT incubator equipped with a camera for video imaging. In our experiments, hESC colonies attached to Matrigel were filmed for 48 hours in the BioStation CT. To determine the growth rate of these colonies, recipes were developed using CL-Quant software, which enables users to extract various types of data from video images. To evaluate colony growth accurately, three recipes were created: the first segmented the image into colony and background, the second enhanced the image to define colonies accurately throughout the video sequence, and the third measured the number of pixels in the colony over time. The three recipes were run in sequence on video data collected in a BioStation CT to analyze the growth rate of individual hESC colonies over 48 hours. To verify the accuracy of the CL-Quant recipes, the same data were analyzed manually using Adobe Photoshop software. The data obtained using the CL-Quant recipes and Photoshop were virtually identical, indicating that the CL-Quant recipes were accurate. The method described here could be applied to any video data to measure growth rates of hESC or other cells that grow in colonies. In addition, other video bioinformatics recipes can be developed in the future for other cell processes such as migration, apoptosis, and cell adhesion.

  13. Comparison of H.265/HEVC encoders

    NASA Astrophysics Data System (ADS)

    Trochimiuk, Maciej

    2016-09-01

    H.265/HEVC is the state-of-the-art video compression standard, allowing bitrate reductions of up to 50% compared with its predecessor, H.264/AVC, while maintaining equal perceptual video quality. The gain in coding efficiency was achieved by increasing the number of available intra- and inter-frame prediction features and by improving existing ones, such as entropy encoding and filtering. Nevertheless, to achieve real-time encoder performance, algorithmic simplifications are inevitable: some features and coding modes must be skipped to reduce the time needed to evaluate the modes forwarded to rate-distortion optimisation. Thus, the potential acceleration of the encoding process comes at the expense of coding efficiency. In this paper, the trade-off between video quality and encoding speed of various H.265/HEVC encoders is discussed.

  14. Cooperative Educational Project - The Southern Appalachians: A Changing World

    NASA Astrophysics Data System (ADS)

    Clark, S.; Back, J.; Tubiolo, A.; Romanaux, E.

    2001-12-01

    The Southern Appalachian Mountains, a popular recreation area known for its beauty and rich biodiversity, were chosen by the U.S. Geological Survey as the site for a video, booklet, and teacher's guide explaining basic geologic principles and how long-term geologic processes affect landscapes, ecosystems, and the quality of human life. The video was produced in cooperation with the National Park Service and has benefited from the advice of the Southern Appalachian Man and Biosphere Cooperative, a group of 11 Federal and three State agencies that works to promote the environmental health, stewardship, and sustainable development of the resources of the region. Much of the information in the video is included in the booklet, and the teacher's guide provides supporting activities that teachers may use to reinforce the concepts presented in the video and booklet. Although the Southern Appalachians include some of the most visited recreation areas in the country, few visitors are aware of the geologic underpinnings that have contributed to the beauty, biological diversity, and quality of human life in the region. The video includes several animated segments that show paleogeographic reconstructions of the Earth and the movements of the North American continent over time; the formation of the Ocoee sedimentary basin beginning about 750 million years ago; the collision of the North American and African continents about 270 million years ago; the formation of granites and similar rocks, faults, and geologic windows; and the extent of glaciation in North America. The animated segments are tied to familiar public-access localities in the region; they illustrate geologic processes and time periods, making the geologic setting of the region more understandable to tourists and local students. The video reinforces the concept that understanding geologic processes and settings is an important component of informed land management to sustain the quality of life in a region. The video and teacher's guide will be distributed by the Southern Appalachian Man and Biosphere to local middle and high schools, libraries, and visitors centers in the region, and the video will be distributed by the U.S. Geological Survey and sold in Park Service and Forest Service gift shops in the region.

  15. Real-time fluorescence target/background (T/B) ratio calculation in multimodal endoscopy for detecting GI tract cancer

    NASA Astrophysics Data System (ADS)

    Jiang, Yang; Gong, Yuanzheng; Wang, Thomas D.; Seibel, Eric J.

    2017-02-01

    Multimodal endoscopy, with fluorescence-labeled probes binding to overexpressed molecular targets, is a promising technology for visualizing early-stage cancer. The target/background (T/B) ratio is the quantitative measure used to correlate fluorescence regions with cancer. Currently, T/B ratio calculation is done in post-processing and does not provide real-time feedback to the endoscopist. To achieve real-time computer-assisted diagnosis (CAD), we establish image processing protocols for calculating the T/B ratio and locating high-risk fluorescence regions to guide biopsy and therapy in Barrett's esophagus (BE) patients. Methods: The Chan-Vese algorithm, an active contour model, is used to segment high-risk regions in fluorescence videos. A semi-implicit gradient descent method is applied to minimize the energy function of this algorithm and evolve the segmentation. The surrounding background is then identified using morphology operations. The average T/B ratio is computed, and regions of interest are highlighted based on user-selected thresholding. Evaluation was conducted on 50 fluorescence videos acquired from clinical recordings using a custom multimodal endoscope. Results: With a processing speed of 2 fps on a laptop computer, we obtained accurate segmentation of high-risk regions, as judged by experts. For each case, the clinical user could optimize the target boundary by changing the penalty on the area inside the contour. Conclusion: An automatic, real-time procedure for calculating the T/B ratio and identifying high-risk regions of early esophageal cancer was developed. Future work will increase the processing speed to at least 5 fps, refine the clinical interface, and apply the method to additional GI cancers and fluorescence peptides.
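
    A rough illustration of the T/B computation under stated assumptions: scikit-image's Chan-Vese implementation segments the target, and a dilated ring around it serves as the background region. The ring width is an assumption, as is equating the segmented phase with the target.

    ```python
    from scipy.ndimage import binary_dilation
    from skimage.segmentation import chan_vese

    def target_background_ratio(frame):
        """Segment a grayscale fluorescence frame with Chan-Vese, then compare
        mean target intensity to the mean of a surrounding background ring."""
        mask = chan_vese(frame.astype(float), mu=0.25)     # target segmentation
        ring = binary_dilation(mask, iterations=10) & ~mask  # background ring
        return frame[mask].mean() / frame[ring].mean()
    ```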

  16. Colometer: a real-time quality feedback system for screening colonoscopy.

    PubMed

    Filip, Dobromir; Gao, Xuexin; Angulo-Rodríguez, Leticia; Mintchev, Martin P; Devlin, Shane M; Rostom, Alaa; Rosen, Wayne; Andrews, Christopher N

    2012-08-28

    To investigate the performance of a new software-based colonoscopy quality assessment system. The system employs a novel image processing algorithm which detects the levels of image clarity, withdrawal velocity, and bowel preparation in real time from a live video signal. Threshold levels of image blurriness and withdrawal velocity below which visualization could be considered adequate were initially determined arbitrarily by review of sample colonoscopy videos by two experienced endoscopists. Subsequently, an overall colonoscopy quality rating was computed based on the percentage of the withdrawal time with adequate visualization (scored 1-5; 1 when the percentage was 1%-20%, 2 when the percentage was 21%-40%, etc.). To test the proposed velocity and blurriness thresholds, screening colonoscopy withdrawal videos from a specialized ambulatory colon cancer screening center were collected, automatically processed, and rated. Quality ratings for the withdrawal were compared to those for the insertion in the same patients. Then, 3 experienced endoscopists reviewed the collected videos in a blinded fashion and rated the overall quality of each withdrawal (scored 1-5; 1, poor; 3, average; 5, excellent) based on 3 major aspects: image quality, colon preparation, and withdrawal velocity. The automated quality ratings were compared to the averaged endoscopist quality ratings using the Spearman correlation coefficient. Fourteen screening colonoscopies were assessed. Adenomatous polyps were detected in 4/14 (29%) of the collected colonoscopy video samples. As a proof of concept, the Colometer software rated colonoscope withdrawal as having better visualization than insertion in the 10 videos that did not have any polyps (average percent time with adequate visualization: 79% ± 5% for withdrawal and 50% ± 14% for insertion, P < 0.01). Withdrawal times during which no polyps were removed ranged from 4-12 min. The median quality rating from the automated system and the reviewers was 3.45 [interquartile range (IQR), 3.1-3.68] and 3.00 (IQR, 2.33-3.67), respectively, for all colonoscopy video samples. The automated rating revealed a strong correlation with the reviewers' rating (ρ = 0.65, P = 0.01). There was good correlation between the automated overall quality rating and the mean endoscopist withdrawal speed rating (Spearman r = 0.59, P = 0.03). There was no correlation between the automated overall quality rating and the mean endoscopist image quality rating (Spearman r = 0.41, P = 0.15). The results from a novel automated real-time colonoscopy quality feedback system strongly agreed with the endoscopists' quality assessments. Further study is required to validate this approach.
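
    The paper's calibrated blurriness measure is not reproduced in this abstract; as a stand-in, the sketch below scores clarity with the common variance-of-Laplacian proxy and maps the percentage of adequately visualized frames to the 1-5 rating scheme described. Threshold values are illustrative assumptions.

    ```python
    import cv2

    def frame_is_clear(gray_frame, threshold=100.0):
        """Variance of the Laplacian as a blurriness proxy (threshold illustrative)."""
        return cv2.Laplacian(gray_frame, cv2.CV_64F).var() > threshold

    def quality_score(clear_flags):
        """Map percent of adequate frames to the 1-5 scale described:
        1%-20% -> 1, 21%-40% -> 2, and so on."""
        pct = 100.0 * sum(clear_flags) / len(clear_flags)
        return max(1, min(5, int((pct - 1) // 20) + 1))
    ```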

  17. Combining high-speed SVM learning with CNN feature encoding for real-time target recognition in high-definition video for ISR missions

    NASA Astrophysics Data System (ADS)

    Kroll, Christine; von der Werth, Monika; Leuck, Holger; Stahl, Christoph; Schertler, Klaus

    2017-05-01

    For Intelligence, Surveillance, Reconnaissance (ISR) missions of manned and unmanned air systems, typical electro-optical payloads provide high-definition video data which has to be exploited with respect to relevant ground targets in real time by automatic/assisted target recognition software. Airbus Defence and Space has been developing the required technologies for real-time sensor exploitation for years and has combined the latest advances in Deep Convolutional Neural Networks (CNN) with a proprietary high-speed Support Vector Machine (SVM) learning method into a powerful object recognition system, with impressive results on relevant high-definition video scenes compared to conventional target recognition approaches. This paper describes the principal requirements for real-time target recognition in high-definition video for ISR missions and the Airbus approach of combining invariant feature extraction using pre-trained CNNs with the high-speed training and classification ability of a novel frequency-domain SVM training method. The frequency-domain approach allows for a highly optimized implementation for General Purpose Computation on a Graphics Processing Unit (GPGPU) and also for efficient training on large training samples. The selected CNN, which is pre-trained only once on domain-extrinsic data, yields a highly invariant feature extraction. This allows for significantly reduced adaptation and training of the target recognition method for new target classes and mission scenarios. A comprehensive training and test dataset was defined and prepared using relevant high-definition airborne video sequences. The assessment concept is explained, and performance results are given using the established precision-recall diagrams, average precision, and runtime figures on representative test data. A comparison to legacy target recognition approaches shows the impressive performance increase achieved by the proposed CNN+SVM machine-learning approach and the capability of real-time high-definition video exploitation.
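
    A minimal sketch of the two-stage idea, with sklearn's LinearSVC standing in for the proprietary frequency-domain SVM trainer and an arbitrary pre-trained backbone standing in for the selected CNN; `extract_features` is a hypothetical callable, not part of the paper.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC

    def train_recognizer(images, labels, extract_features):
        """Use a frozen, pre-trained CNN purely as a feature extractor, then
        train a fast linear SVM on the resulting feature vectors (sketch)."""
        feats = np.stack([extract_features(im) for im in images])
        clf = LinearSVC(C=1.0)
        clf.fit(feats, labels)
        return clf

    # At run time, the same extractor feeds the trained SVM:
    # prediction = clf.predict(extract_features(frame_patch)[None, :])
    ```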

  18. MobileASL: intelligibility of sign language video over mobile phones.

    PubMed

    Cavender, Anna; Vanam, Rahul; Barney, Dane K; Ladner, Richard E; Riskin, Eve A

    2008-01-01

    For Deaf people, access to the mobile telephone network in the United States is currently limited to text messaging, forcing communication in English as opposed to American Sign Language (ASL), the preferred language. Because ASL is a visual language, mobile video phones have the potential to give Deaf people access to real-time mobile communication in their preferred language. However, even today's best video compression techniques cannot yield intelligible ASL at limited cell phone network bandwidths. Motivated by this constraint, we conducted one focus group and two user studies with members of the Deaf Community to determine the intelligibility effects of video compression techniques that exploit the visual nature of sign language. Inspired by eye tracking results showing that high-resolution foveal vision is maintained around the face, we studied region-of-interest encodings (where the face is encoded at higher quality) as well as reduced frame rates (where fewer, better quality frames are displayed every second). At all bit rates studied here, participants preferred moderate quality increases in the face region, sacrificing quality in other regions. They also preferred slightly lower frame rates because they yield better quality frames for a fixed bit rate. The limited processing power of cell phones is a serious concern because a real-time video encoder and decoder will be needed. Choosing less complex settings for the encoder can reduce encoding time but will affect video quality. We studied the intelligibility effects of this tradeoff and found that we can significantly speed up encoding time without severely affecting intelligibility. These results show promise for real-time access to the current low-bandwidth cell phone network through sign-language-specific encoding techniques.

  19. Temporally rendered automatic cloud extraction (TRACE) system

    NASA Astrophysics Data System (ADS)

    Bodrero, Dennis M.; Yale, James G.; Davis, Roger E.; Rollins, John M.

    1999-10-01

    Smoke/obscurant testing requires that 2D cloud extent be extracted from visible and thermal imagery. These data are used alone or in combination with 2D data from other aspects to make 3D calculations of cloud properties, including dimensions, volume, centroid, travel, and uniformity. Determining cloud extent from imagery has historically been a time-consuming manual process. To reduce the time and cost associated with smoke/obscurant data processing, automated methods to extract cloud extent from imagery were investigated. The TRACE system described in this paper was developed and implemented at U.S. Army Dugway Proving Ground, UT by the Science and Technology Corporation--Acuity Imaging Incorporated team with Small Business Innovation Research funding. TRACE uses dynamic background subtraction and the 3D fast Fourier transform as its primary methods to discriminate the smoke/obscurant cloud from the background. TRACE has been designed to run on a PC-based platform using Windows. The PC-Windows environment was chosen for portability, to give TRACE the maximum flexibility in terms of its interaction with peripheral hardware devices such as video capture boards, removable media drives, network cards, and digital video interfaces. Video for Windows provides all of the necessary tools for the development of the video capture utility in TRACE and allows for interchangeability of video capture boards without any software changes. TRACE is designed to take advantage of future upgrades in all aspects of its component hardware. A comparison of cloud extent determined by TRACE with the manual method is included in this paper.

  20. Developing assessment system for wireless capsule endoscopy videos based on event detection

    NASA Astrophysics Data System (ADS)

    Chen, Ying-ju; Yasen, Wisam; Lee, Jeongkyu; Lee, Dongha; Kim, Yongho

    2009-02-01

    With advances in wireless technology and miniature cameras, wireless capsule endoscopy (WCE), which combines the two, enables a physician to examine a patient's digestive system without performing a surgical procedure. Although WCE is a technical breakthrough that allows physicians to visualize the entire small bowel noninvasively, viewing the video takes 1-2 hours. This is very time consuming for the gastroenterologist: it not only limits the wide application of this technology but also incurs considerable cost. It is therefore important to automate the process so that medical clinicians can focus only on events of interest. As an extension of our previous work characterizing the motility of the digestive tract in WCE videos, we propose a new assessment system for energy-based event detection (EG-EBD) to classify the events in WCE videos. The system first extracts general features of a WCE video that can characterize the intestinal contractions in the digestive organs. Event boundaries are then identified using a High Frequency Content (HFC) function, and the segments are classified into WCE events by special features. In this system, we focus on entry into the duodenum, entry into the cecum, and active bleeding. The assessment system can easily be extended to discover more WCE events, such as detailed organ segmentation and additional diseases, by introducing new special features. In addition, the system assigns each WCE image a score for each event, which helps a specialist speed up the diagnosis process.

  1. Large-scale machine learning and evaluation platform for real-time traffic surveillance

    NASA Astrophysics Data System (ADS)

    Eichel, Justin A.; Mishra, Akshaya; Miller, Nicholas; Jankovic, Nicholas; Thomas, Mohan A.; Abbott, Tyler; Swanson, Douglas; Keller, Joel

    2016-09-01

    In traffic engineering, vehicle detectors are trained on limited datasets, resulting in poor accuracy when deployed in real-world surveillance applications. Annotating large-scale, high-quality datasets is challenging. Typically, these datasets have limited diversity; they do not reflect the real-world operating environment. There is a need for a large-scale, cloud-based positive and negative mining process and a large-scale learning and evaluation system for the application of automatic traffic measurements and classification. The proposed positive and negative mining process addresses the quality of crowd-sourced ground truth data through machine learning review and human feedback mechanisms. The proposed learning and evaluation system uses a distributed cloud computing framework to handle the data-scaling issues associated with large numbers of samples and a high-dimensional feature space. The system is trained using AdaBoost on 1,000,000 Haar-like features extracted from 70,000 annotated video frames. The trained real-time vehicle detector achieves an accuracy of at least 95% half of the time, and about 78% 19/20 of the time, when tested on ~7,500,000 video frames. At the end of 2016, the dataset is expected to have over 1 billion annotated video frames.

  2. A Low Cost Microcomputer System for Process Dynamics and Control Simulations.

    ERIC Educational Resources Information Center

    Crowl, D. A.; Durisin, M. J.

    1983-01-01

    Discusses a video simulator microcomputer system used to provide real-time demonstrations to strengthen students' understanding of process dynamics and control. Also discusses hardware/software and simulations developed using the system. The four simulations model various configurations of a process liquid level tank system. (JN)

  3. Real-time color image processing for forensic fiber investigations

    NASA Astrophysics Data System (ADS)

    Paulsson, Nils

    1995-09-01

    This paper describes a system for automatic fiber debris detection based on color identification. The properties of the system are fast analysis and high selectivity, a necessity when analyzing forensic fiber samples: an ordinary investigation separates the material into well over 100,000 video images to analyze. The system is based on standard techniques, with a CCD camera, a motorized sample table, and an IBM-compatible PC/AT with add-on boards for video frame digitization and stepping motor control as the main parts. It is possible to operate the instrument at full video rate (25 images/s) with the aid of the HSI (hue-saturation-intensity) color system and software optimization. High selectivity is achieved by separating the analysis into several steps. The first step is fast, direct color identification of objects in the analyzed video images; the second step analyzes the detected objects in a more complex and time-consuming stage of the investigation to identify single fiber fragments for subsequent analysis with more selective techniques.
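
    A rough sketch of the fast first step under stated assumptions, using OpenCV's HSV space (closely related to HSI) and a hue window around a hypothetical target fiber color; all threshold values are illustrative.

    ```python
    import cv2
    import numpy as np

    def candidate_fiber_mask(bgr_frame, hue_center=170, hue_tol=8,
                             min_saturation=60, min_value=40):
        """Keep pixels whose hue falls near the target fiber color; saturation
        and value floors reject gray background. OpenCV hue spans 0-179."""
        hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
        lower = np.array([max(0, hue_center - hue_tol), min_saturation, min_value])
        upper = np.array([min(179, hue_center + hue_tol), 255, 255])
        return cv2.inRange(hsv, lower, upper)   # binary mask of candidate objects
    ```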

  4. Coupled auralization and virtual video for immersive multimedia displays

    NASA Astrophysics Data System (ADS)

    Henderson, Paul D.; Torres, Rendell R.; Shimizu, Yasushi; Radke, Richard; Lonsway, Brian

    2003-04-01

    The implementation of maximally-immersive interactive multimedia in exhibit spaces requires not only the presentation of realistic visual imagery but also the creation of a perceptually accurate aural experience. While conventional implementations treat audio and video problems as essentially independent, this research seeks to couple the visual sensory information with dynamic auralization in order to enhance perceptual accuracy. An implemented system has been developed for integrating accurate auralizations with virtual video techniques for both interactive presentation and multi-way communication. The current system utilizes a multi-channel loudspeaker array and real-time signal processing techniques for synthesizing the direct sound, early reflections, and reverberant field excited by a moving sound source whose path may be interactively defined in real-time or derived from coupled video tracking data. In this implementation, any virtual acoustic environment may be synthesized and presented in a perceptually-accurate fashion to many participants over a large listening and viewing area. Subject tests support the hypothesis that the cross-modal coupling of aural and visual displays significantly affects perceptual localization accuracy.

  5. In-camera video-stream processing for bandwidth reduction in web inspection

    NASA Astrophysics Data System (ADS)

    Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.

    1996-02-01

    Automated machine vision systems are now widely used for industrial inspection tasks, where video-stream data are captured by the camera and sent to the inspection system for further processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data stream bandwidth reduction algorithms; the output of the camera contains only information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a Xilinx™ FPGA. The FPGA is directly connected to the video data stream and outputs data to a low-bandwidth output bus. The camera communicates with a host computer via an RS-232 link to the microcontroller. Static memory is used both to provide a FIFO interface for buffering defect burst data and to allow off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes the prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.

  6. A Study of Quality of Service Communication for High-Speed Packet-Switching Computer Sub-Networks

    NASA Technical Reports Server (NTRS)

    Cui, Zhenqian

    1999-01-01

    With the development of high-speed networking technology, computer networks, including local-area networks (LANs), wide-area networks (WANs), and the Internet, are extending their traditional roles of carrying computer data. They are being used for Internet telephony, multimedia applications such as conferencing and video on demand, distributed simulations, and other real-time applications. LANs are even used for distributed real-time process control and computing as a cost-effective approach. Differing from traditional data transfer, these new classes of high-speed network applications (video, audio, real-time process control, and others) are delay sensitive. The usefulness of data depends not only on the correctness of the received data, but also on the time at which the data are received. In other words, these new classes of applications require networks to provide guaranteed services or quality of service (QoS). Quality of service can be defined by a set of parameters and reflects a user's expectation about the underlying network's behavior. Traditionally, distinct services are provided by different kinds of networks: voice services by telephone networks, video services by cable networks, and data transfer services by computer networks. A single network providing different services is called an integrated-services network.

  7. Improved satisfaction of preoperative patients after group video-teaching during interview at preanesthetic evaluation clinic: the experience of a medical center in Taiwan.

    PubMed

    Yang, Ya-Ling; Wang, Kuan-Jen; Chen, Wei-Hao; Chuang, Kuan-Chih; Tseng, Chia-Chih; Liu, Chien-Cheng

    2007-09-01

    An anesthesiologist-directed anesthetic preoperative evaluation clinic (APEC) is used to prepare patients to receive anesthesia for surgery. Studies have shown that an APEC can reduce preoperative tests, consultations, surgery delays, and cancellations. APEC with video-teaching has been proposed as a medium to provide comprehensive information about the process of anesthesia, but it has not been practiced in small groups of patients. It is rational to assume that video-teaching in small groups of patients can better inform patients about the process of anesthesia and in turn improve their satisfaction with anesthesia practice. This study was designed to evaluate, by questionnaire, the difference in satisfaction between patients who joined small-group video-teaching at the APEC and patients who received a traditional preoperative visit in the waiting area. In total, 237 eligible patients were included in the study over a period of two months. Patients were divided into two groups: 145 patients who joined the small-group video-teaching were designated the study group, and 92 patients who received the traditional preoperative visit in the waiting area served as controls. All patients were requested to complete a questionnaire, administered by two non-medical persons, after the postoperative visit. Patients who were offered small-group video-teaching reported significantly higher satisfaction scores, including for waiting time for surgery in the operating room, attitude toward the anesthetic staff during the postoperative visit, and management of complications, compared with patients who received the traditional preoperative visit. The results indicate that an APEC with group video-teaching can not only make patients more satisfied with the process of anesthesia in elective surgery but also reduce the expenditure on hospitalization and anesthetic manpower.

  8. Highly efficient simulation environment for HDTV video decoder in VLSI design

    NASA Astrophysics Data System (ADS)

    Mao, Xun; Wang, Wei; Gong, Huimin; He, Yan L.; Lou, Jian; Yu, Lu; Yao, Qingdong; Pirsch, Peter

    2002-01-01

    As the complexity of VLSI increases, especially for SoC (System on Chip) designs such as an MPEG-2 video decoder with HDTV scalability, simulation and verification of the full design, even at the behavioral level in HDL, often proves very slow and costly, and full verification is difficult to perform until late in the design process. These tasks therefore become the bottleneck in HDTV video decoder design and strongly influence its time-to-market. In this paper, the architecture of the hardware/software interface of an HDTV video decoder is studied, and a Hardware-Software Mixed Simulation (HSMS) platform is proposed to check and correct errors in the early design stage, based on the MPEG-2 video decoding algorithm. The application of HSMS to the target system is achieved through several introduced approaches, which speed up the simulation and verification task without decreasing performance.

  9. Techniques for video compression

    NASA Technical Reports Server (NTRS)

    Wu, Chwan-Hwa

    1995-01-01

    In this report, we present our study on a multiprocessor implementation of an MPEG2 encoding algorithm. First, we compare two approaches to implementing video standards, VLSI technology and multiprocessor processing, in terms of design complexity, applications, and cost. Then we evaluate the functional modules of the MPEG2 encoding process in terms of their computation time. Two crucial modules are identified based on this evaluation. We then present our experimental study on the multiprocessor implementation of the two crucial modules. Data partitioning is used for job assignment. Experimental results show that a high speedup ratio and good scalability can be achieved by using this kind of job assignment strategy.

  10. Learning patterns of life from intelligence analyst chat

    NASA Astrophysics Data System (ADS)

    Schneider, Michael K.; Alford, Mark; Babko-Malaya, Olga; Blasch, Erik; Chen, Lingji; Crespi, Valentino; HandUber, Jason; Haney, Phil; Nagy, Jim; Richman, Mike; Von Pless, Gregory; Zhu, Howie; Rhodes, Bradley J.

    2016-05-01

    Our Multi-INT Data Association Tool (MIDAT) learns patterns of life (POL) of a geographical area from video analyst observations called out in textual reporting. Typical approaches to learning POLs from video make use of computer vision algorithms to extract the locations in space and time of various activities, and are therefore subject to the detection and tracking performance of the video processing algorithms. Numerous examples exist of human analysts monitoring live video streams and annotating or "calling out" relevant entities and activities, such as security analysis, crime-scene forensics, news reports, and sports commentary. This user description typically corresponds with textual capture, such as chat. Although the purpose of these text products is primarily to describe events as they happen, organizations typically archive the reports for extended periods. This archive provides a basis for building POLs. Such POLs are useful for diagnosis, assessing activities in an area based on historical context, and for consumers of products, who gain an understanding of historical patterns. MIDAT combines natural language processing, multi-hypothesis tracking, and Multi-INT Activity Pattern Learning and Exploitation (MAPLE) technologies in an end-to-end lab prototype that processes textual products produced by video analysts, infers POLs, and highlights anomalies relative to those POLs with links to "tracks" of related activities performed by the same entity. MIDAT technologies perform well, achieving, for example, a 90% F1-value on extracting activities from the textual reports.

  11. Authoring Data-Driven Videos with DataClips.

    PubMed

    Amini, Fereshteh; Riche, Nathalie Henry; Lee, Bongshin; Monroy-Hernandez, Andres; Irani, Pourang

    2017-01-01

    Data videos, or short data-driven motion graphics, are an increasingly popular medium for storytelling. However, creating data videos is difficult as it involves pulling together a unique combination of skills. We introduce DataClips, an authoring tool aimed at lowering the barriers to crafting data videos. DataClips allows non-experts to assemble data-driven "clips" together to form longer sequences. We constructed the library of data clips by analyzing the composition of over 70 data videos produced by reputable sources such as The New York Times and The Guardian. We demonstrate that DataClips can reproduce over 90% of our data videos corpus. We also report on a qualitative study comparing the authoring process and outcome achieved by (1) non-experts using DataClips, and (2) experts using Adobe Illustrator and After Effects to create data-driven clips. Results indicated that non-experts are able to learn and use DataClips with a short training period. In the span of one hour, they were able to produce more videos than experts using a professional editing tool, and their clips were rated similarly by an independent audience.

  12. Parallel Key Frame Extraction for Surveillance Video Service in a Smart City.

    PubMed

    Zheng, Ran; Yao, Chuanwei; Jin, Hai; Zhu, Lei; Zhang, Qin; Deng, Wei

    2015-01-01

    Surveillance video service (SVS) is one of the most important services provided in a smart city, and efficient surveillance video analysis techniques are essential for its utilization. Key frame extraction is a simple yet effective technique to achieve this goal. In surveillance video applications, key frames are typically used to summarize important video content, so it is essential to extract them accurately and efficiently. A novel approach is proposed to extract key frames from traffic surveillance videos on the GPU (graphics processing unit) to ensure high efficiency and accuracy. For the determination of key frames, motion is a particularly salient feature in presenting actions or events, especially in surveillance videos. The motion feature is extracted on the GPU to reduce running time. It is also smoothed to reduce noise, and the frames at local maxima of the motion information are selected as the final key frames. The experimental results show that this approach extracts key frames more accurately and efficiently than several other methods.
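
    A minimal CPU sketch of this pipeline follows; the paper performs the motion extraction on the GPU, and the inter-frame difference metric and smoothing window below are assumptions.

```python
# Minimal CPU sketch of motion-based key frame selection (the paper does the
# motion extraction on a GPU; OpenCV and the smoothing window are assumptions).
import cv2
import numpy as np

def key_frames(video_path, smooth=9):
    cap = cv2.VideoCapture(video_path)
    motion, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            # Motion feature: mean absolute inter-frame difference.
            motion.append(np.mean(cv2.absdiff(gray, prev)))
        prev = gray
    cap.release()
    # Smooth the motion curve to suppress noise.
    m = np.convolve(motion, np.ones(smooth) / smooth, mode="same")
    # Frames at local maxima of the smoothed curve become the key frames.
    return [i + 1 for i in range(1, len(m) - 1) if m[i - 1] < m[i] >= m[i + 1]]
```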

  13. Space Shuttle Main Engine Propellant Path Leak Detection Using Sequential Image Processing

    NASA Technical Reports Server (NTRS)

    Smith, L. Montgomery; Malone, Jo Anne; Crawford, Roger A.

    1995-01-01

    Initial research in this study using theoretical radiation transport models established that the occurrence of a leak is accompanied by a sudden but sustained change in intensity in a given region of an image. In this phase, temporal processing of video images on a frame-by-frame basis was used to detect leaks within a given field of view. The leak detection algorithm developed in this study consists of a digital highpass filter cascaded with a moving average filter. The absolute value of the resulting discrete sequence is then taken and compared to a threshold value to produce the binary leak/no-leak decision at each point in the image. Alternatively, averaging over the full frame of the output image produces a single time-varying mean value estimate that is indicative of the intensity and extent of a leak. Laboratory experiments were conducted in which artificially created leaks on a simulated SSME background were produced and recorded from a visible-wavelength video camera. These data were processed frame by frame over the time interval of interest using an image processor implementation of the leak detection algorithm. In addition, a 20-second video sequence of an actual SSME failure was analyzed using this technique. The resulting output image sequences and plots of the full-frame mean value versus time verify the effectiveness of the system.
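
    The described cascade can be sketched per pixel as a first-difference highpass followed by a causal moving average, an absolute value, and a threshold; the filter orders and the threshold value below are assumptions.

```python
# Per-pixel sketch of the described detector: temporal highpass -> moving
# average -> absolute value -> threshold. Filter parameters are assumptions.
import numpy as np
from scipy.signal import lfilter

def detect_leak(frames, threshold=8.0, window=5):
    """frames: (T, H, W) video stack; returns binary leak maps and mean trace."""
    x = frames.astype(np.float64)
    hp = np.diff(x, axis=0)                      # first-difference highpass
    kernel = np.ones(window) / window
    ma = lfilter(kernel, [1.0], hp, axis=0)      # causal moving average
    leak_map = np.abs(ma) > threshold            # per-pixel leak/no-leak decision
    mean_trace = np.abs(ma).mean(axis=(1, 2))    # full-frame mean value vs. time
    return leak_map, mean_trace
```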

  14. An Insect Eye Inspired Miniaturized Multi-Camera System for Endoscopic Imaging.

    PubMed

    Cogal, Omer; Leblebici, Yusuf

    2017-02-01

    In this work, we present a miniaturized high-definition vision system inspired by insect eyes, with a distributed illumination method, which can work in dark environments for proximity imaging applications such as endoscopy. Our approach is based on modeling biological systems with off-the-shelf miniaturized cameras combined with digital circuit design for real-time image processing. We built a 5 mm radius hemispherical compound eye, imaging a 180° × 180° field of view while providing more than 1.1 megapixels (emulated ommatidia) as real-time video with an inter-ommatidial angle Δφ = 0.5° at 18 mm radial distance. We made an FPGA implementation of the image processing system which is capable of generating 25 fps video with 1080 × 1080 pixel resolution at a 120 MHz processing clock frequency. When compared to similar-size insect-eye-mimicking systems in the literature, the system proposed in this paper features a 1000× resolution increase. To the best of our knowledge, this is the first time that a compound eye with built-in illumination has been reported. We offer our miniaturized imaging system for endoscopic applications such as colonoscopy or laparoscopic surgery, where there is a need for large field-of-view, high-definition imagery. For that purpose we tested our system inside a human colon model, and we present the resulting images and videos in this paper.

  15. Implementation of an RBF neural network on embedded systems: real-time face tracking and identity verification.

    PubMed

    Yang, Fan; Paindavoine, M

    2003-01-01

    This paper describes a real-time vision system that allows us to localize faces in video sequences and verify their identity. These processes are image processing techniques based on the radial basis function (RBF) neural network approach. The robustness of this system has been evaluated quantitatively on eight video sequences. We have adapted our model for a face recognition application using the Olivetti Research Laboratory (ORL), Cambridge, UK, database so as to compare its performance against other systems. We also describe three hardware implementations of our model on embedded systems, based on the field programmable gate array (FPGA), zero instruction set computer (ZISC) chips, and the digital signal processor (DSP) TMS320C62, respectively. We analyze the algorithm complexity and present results of the hardware implementations in terms of the resources used and processing speed. The success rates of face tracking and identity verification are 92% (FPGA), 85% (ZISC), and 98.2% (DSP), respectively. For the three embedded systems, the processing speeds for images of size 288 × 352 are 14 images/s, 25 images/s, and 4.8 images/s, respectively.

  16. Gender differences in BOLD activation to face photographs and video vignettes.

    PubMed

    Fine, Jodene Goldenring; Semrud-Clikeman, Margaret; Zhu, David C

    2009-07-19

    Few neuroimaging studies have reported gender differences in response to human emotions, and those that have examined such differences have utilized face photographs. This study presented not only human face photographs of positive and negative emotions, but also video vignettes of positive and negative social human interactions, in an attempt to provide a more ecologically appropriate stimulus paradigm. Ten male and 10 female healthy right-handed young adults were shown positive and negative affective social human faces and video vignettes to elicit gender differences in social/emotional perception. Conservative ROI (region of interest) analysis indicated greater male than female activation to positive affective photos in the anterior cingulate, medial frontal gyrus, superior frontal gyrus and superior temporal gyrus, all in the right hemisphere. No significant ROI gender differences were observed for negative affective photos. Greater male than female activation was seen in ROIs of the left posterior cingulate and the right inferior temporal gyrus for positive social videos, and in only the left middle temporal ROI for negative social videos. Consistent with previous findings, males were more lateralized than females. Although more activation was observed overall for video than for photo conditions, males and females appear to process social video stimuli more similarly to one another than they do photos. This study is a step forward in understanding the social brain with more ecologically valid stimuli that more closely approximate the demands of real-time social and affective processing.

  17. Informative-frame filtering in endoscopy videos

    NASA Astrophysics Data System (ADS)

    An, Yong Hwan; Hwang, Sae; Oh, JungHwan; Lee, JeongKyu; Tavanapong, Wallapak; de Groen, Piet C.; Wong, Johnny

    2005-04-01

    Advances in video technology are being incorporated into today's healthcare practice. For example, colonoscopy is an important screening tool for colorectal cancer. Colonoscopy allows for the inspection of the entire colon and provides the ability to perform a number of therapeutic operations during a single procedure. During a colonoscopic procedure, a tiny video camera at the tip of the endoscope generates a video signal of the internal mucosa of the colon. The video data are displayed on a monitor for real-time analysis by the endoscopist. Other endoscopic procedures include upper gastrointestinal endoscopy, enteroscopy, bronchoscopy, cystoscopy, and laparoscopy. However, a significant number of out-of-focus frames are included in such videos, since current endoscopes are equipped with a single, wide-angle lens that cannot be focused. The out-of-focus frames do not hold any useful information; to reduce the burden of further processing, such as computer-aided image analysis or a human expert's examination, these frames need to be removed. We call an out-of-focus frame a non-informative frame and an in-focus frame an informative frame. We propose a new technique to classify video frames into these two classes using a combination of the Discrete Fourier Transform (DFT), texture analysis, and k-means clustering. The proposed technique can evaluate frames without any reference image and does not need any predefined threshold value. Our experimental studies indicate that it achieves over 96% on four different performance metrics (precision, sensitivity, specificity, and accuracy).
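
    A reference-free sketch of the idea follows: a DFT-based high-frequency-energy feature and a crude texture feature per frame, clustered with k-means (k = 2). The specific feature definitions below are assumptions, not the authors' exact formulation.

```python
# Sketch of reference-free frame classification: a DFT focus feature plus a
# simple texture feature, clustered with k-means (feature choices assumed).
import numpy as np
from sklearn.cluster import KMeans

def frame_features(gray):
    f = np.fft.fftshift(np.fft.fft2(gray))
    mag = np.abs(f)
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    low = mag[cy - h // 8:cy + h // 8, cx - w // 8:cx + w // 8].sum()
    high_ratio = (mag.sum() - low) / (mag.sum() + 1e-9)  # high-frequency share
    texture = gray.std()                                  # crude texture measure
    return [high_ratio, texture]

def classify_frames(frames):
    """Cluster frames into informative / non-informative without a reference."""
    feats = np.array([frame_features(f.astype(np.float64)) for f in frames])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
    # The cluster with higher mean high-frequency energy is taken as informative
    # (blurred, out-of-focus frames lack high-frequency content).
    informative_cluster = np.argmax([feats[labels == k, 0].mean() for k in (0, 1)])
    return labels == informative_cluster
```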

  18. A gradient method for the quantitative analysis of cell movement and tissue flow and its application to the analysis of multicellular Dictyostelium development.

    PubMed

    Siegert, F; Weijer, C J; Nomura, A; Miike, H

    1994-01-01

    We describe the application of a novel image processing method, which allows quantitative analysis of cell and tissue movement in a series of digitized video images. The result is a vector velocity field showing average direction and velocity of movement for every pixel in the frame. We apply this method to the analysis of cell movement during different stages of the Dictyostelium developmental cycle. We analysed time-lapse video recordings of cell movement in single cells, mounds and slugs. The program can correctly assess the speed and direction of movement of either unlabelled or labelled cells in a time series of video images depending on the illumination conditions. Our analysis of cell movement during multicellular development shows that the entire morphogenesis of Dictyostelium is characterized by rotational cell movement. The analysis of cell and tissue movement by the velocity field method should be applicable to the analysis of morphogenetic processes in other systems such as gastrulation and neurulation in vertebrate embryos.
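
    A gradient method of this kind derives a velocity vector for every pixel from spatiotemporal brightness gradients. Below is a minimal sketch assuming a windowed least-squares solution of the brightness-constancy equation Ix·u + Iy·v + It = 0 (Lucas-Kanade style); the window size and the authors' exact formulation may differ.

```python
# Minimal gradient-method sketch: windowed least-squares solution of
# Ix*u + Iy*v + It = 0 per pixel; parameters are assumptions.
import numpy as np

def velocity_field(frame0, frame1, win=7):
    I0, I1 = frame0.astype(np.float64), frame1.astype(np.float64)
    Iy, Ix = np.gradient(I0)         # spatial gradients (rows first)
    It = I1 - I0                     # temporal gradient
    half = win // 2
    h, w = I0.shape
    u, v = np.zeros((h, w)), np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half, w - half):
            sl = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
            A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
            b = -It[sl].ravel()
            # Solve A [u v]^T = b in the least-squares sense over the window.
            flow, *_ = np.linalg.lstsq(A, b, rcond=None)
            u[y, x], v[y, x] = flow
    return u, v  # average direction and speed of movement for every pixel
```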

  19. Detection of illegal transfer of videos over the Internet

    NASA Astrophysics Data System (ADS)

    Chaisorn, Lekha; Sainui, Janya; Manders, Corey

    2010-07-01

    In this paper, a method for detecting infringements or modifications of a video in real time is proposed. The method first segments a video stream into shots, after which it extracts some reference frames as keyframes. This process is performed employing a Singular Value Decomposition (SVD) technique developed in this work. Next, for each input video (represented by its keyframes), an ordinal-based signature and SIFT (Scale Invariant Feature Transform) descriptors are generated. The ordinal-based method employs a two-level bitmap indexing scheme to construct the index for each video signature: the first level clusters all input keyframes into k clusters, while the second level converts the ordinal-based signatures into bitmap vectors. The SIFT-based method, on the other hand, directly uses the descriptors as the index. Given a suspect video (being streamed or transferred on the Internet), we generate its signature (ordinal and SIFT descriptors) and then compute the similarity between that signature and those in the database, based on the ordinal signature and the SIFT descriptors separately. For the similarity measure, besides the Euclidean distance, Boolean operators are also utilized during the matching process. We have tested our system in several experiments on 50 videos (each about half an hour in duration) obtained from the TRECVID 2006 data set. For the experimental setup, we refer to the conditions of the TRECVID 2009 "content-based copy detection" task, as well as the requirements issued in the call for proposals by the MPEG standard on a similar task. Initial results show that our framework is effective and robust: on top of the reductions in storage space and processing time achieved by the ordinal-based method in our previous work, introducing the SIFT features raised the overall F1 accuracy to about 96% (an improvement of about 8%).
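
    The ordinal part of the signature can be sketched as the rank order of block-average intensities of each keyframe; the grid size below is an assumption, and the SVD keyframe extraction, SIFT descriptors, and bitmap indexing are omitted.

```python
# Sketch of an ordinal-based frame signature: rank the average intensities of
# an N x N block grid (grid size is an assumption).
import numpy as np

def ordinal_signature(gray, grid=3):
    h, w = gray.shape
    means = [gray[i * h // grid:(i + 1) * h // grid,
                  j * w // grid:(j + 1) * w // grid].mean()
             for i in range(grid) for j in range(grid)]
    # The signature is the rank order of the block means; ranks are robust to
    # global brightness and contrast changes.
    return np.argsort(np.argsort(means))

def ordinal_distance(sig_a, sig_b):
    """L1 distance between two rank permutations, normalized to [0, 1]."""
    n = len(sig_a)
    return np.abs(sig_a - sig_b).sum() / (n * n // 2)  # floor(n^2/2) is the max
```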

  20. Student Self-Assessment and Faculty Assessment of Performance in an Interprofessional Error Disclosure Simulation Training Program.

    PubMed

    Poirier, Therese I; Pailden, Junvie; Jhala, Ray; Ronald, Katie; Wilhelm, Miranda; Fan, Jingyang

    2017-04-01

    Objectives. To conduct a prospective evaluation for effectiveness of an error disclosure assessment tool and video recordings to enhance student learning and metacognitive skills while assessing the IPEC competencies. Design. The instruments for assessing performance (planning, communication, process, and team dynamics) in interprofessional error disclosure were developed. Student self-assessment of performance before and after viewing the recordings of their encounters were obtained. Faculty used a similar instrument to conduct real-time assessments. An instrument to assess achievement of the Interprofessional Education Collaborative (IPEC) core competencies was developed. Qualitative data was reviewed to determine student and faculty perceptions of the simulation. Assessment. The interprofessional simulation training involved a total of 233 students (50 dental, 109 nursing and 74 pharmacy). Use of video recordings made a significant difference in student self-assessment for communication and process categories of error disclosure. No differences in student self-assessments were noted among the different professions. There were differences among the family member affects for planning and communication for both pre-video and post-video data. There were significant differences between student self-assessment and faculty assessment for all paired comparisons, except communication in student post-video self-assessment. Students' perceptions of achievement of the IPEC core competencies were positive. Conclusion. The use of assessment instruments and video recordings may have enhanced students' metacognitive skills for assessing performance in interprofessional error disclosure. The simulation training was effective in enhancing perceptions on achievement of IPEC core competencies. This enhanced assessment process appeared to enhance learning about the skills needed for interprofessional error disclosure.

  2. DSP Implementation of the Retinex Image Enhancement Algorithm

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2004-01-01

    The Retinex is a general-purpose image enhancement algorithm that is used to produce good visual representations of scenes. It performs a non-linear spatial/spectral transform that synthesizes strong local contrast enhancement and color constancy. A real-time, video frame rate implementation of the Retinex is required to meet the needs of various potential users. Retinex processing contains a relatively large number of complex computations, thus to achieve real-time performance using current technologies requires specialized hardware and software. In this paper we discuss the design and development of a digital signal processor (DSP) implementation of the Retinex. The target processor is a Texas Instruments TMS320C6711 floating point DSP. NTSC video is captured using a dedicated frame-grabber card, Retinex processed, and displayed on a standard monitor. We discuss the optimizations used to achieve real-time performance of the Retinex and also describe our future plans on using alternative architectures.
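
    The core Retinex transform is a log-domain subtraction of a Gaussian surround from each channel. Below is a single-scale sketch; the DSP implementation targets a multiscale variant, and the sigma and percentile stretch here are assumptions.

```python
# Single-scale Retinex sketch: log(image) - log(Gaussian surround) per channel.
# The paper's real-time DSP version uses a multiscale variant; sigma and the
# percentile stretch below are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma=80):
    """img: (H, W, 3) color image; returns an enhanced 8-bit image."""
    out = np.empty(img.shape, dtype=np.float64)
    for c in range(img.shape[2]):
        channel = img[..., c].astype(np.float64) + 1.0    # avoid log(0)
        surround = gaussian_filter(channel, sigma)        # local average
        out[..., c] = np.log(channel) - np.log(surround)  # contrast + constancy
    # Stretch to a displayable 8-bit range.
    lo, hi = np.percentile(out, (1, 99))
    return np.clip((out - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)
```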

  3. Video encryption using chaotic masks in joint transform correlator

    NASA Astrophysics Data System (ADS)

    Saini, Nirmala; Sinha, Aloka

    2015-03-01

    A real-time optical video encryption technique using a chaotic map has been reported. In the proposed technique, each frame of video is encrypted using two different chaotic random phase masks in the joint transform correlator architecture. The different chaotic random phase masks can be obtained either by using different iteration levels or by using different seed values of the chaotic map. The use of different chaotic random phase masks makes the decryption process very complex for an unauthorized person. Optical, as well as digital, methods can be used for video encryption but the decryption is possible only digitally. To further enhance the security of the system, the key parameters of the chaotic map are encoded using RSA (Rivest-Shamir-Adleman) public key encryption. Numerical simulations are carried out to validate the proposed technique.
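
    One way to realize such a chaotic random phase mask is with the logistic map, where the seed value and map parameter act as keys and different seeds or iteration levels yield different masks. The parameter values below are assumptions, and the joint transform correlator stage is omitted.

```python
# Sketch of a chaotic random phase mask built from the logistic map; the seed
# x0, parameter r, and burn-in count are assumptions that act as keys.
import numpy as np

def chaotic_phase_mask(shape, x0=0.3141, r=3.99, burn_in=1000):
    n = shape[0] * shape[1]
    x = x0
    for _ in range(burn_in):          # discard the transient iterations
        x = r * x * (1.0 - x)
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    # Map the chaotic sequence in (0, 1) to phases in [0, 2*pi).
    return np.exp(1j * 2.0 * np.pi * seq.reshape(shape))

# Two different masks, e.g. from different seed values, as in the paper:
mask1 = chaotic_phase_mask((256, 256), x0=0.3141)
mask2 = chaotic_phase_mask((256, 256), x0=0.2718)
```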

  4. The AAPM/RSNA physics tutorial for residents: digital fluoroscopy.

    PubMed

    Pooley, R A; McKinney, J M; Miller, D A

    2001-01-01

    A digital fluoroscopy system is most commonly configured as a conventional fluoroscopy system (tube, table, image intensifier, video system) in which the analog video signal is converted to and stored as digital data. Other methods of acquiring the digital data (eg, digital or charge-coupled device video and flat-panel detectors) will become more prevalent in the future. Fundamental concepts related to digital imaging in general include binary numbers, pixels, and gray levels. Digital image data allow the convenient use of several image processing techniques including last image hold, gray-scale processing, temporal frame averaging, and edge enhancement. Real-time subtraction of digital fluoroscopic images after injection of contrast material has led to widespread use of digital subtraction angiography (DSA). Additional image processing techniques used with DSA include road mapping, image fade, mask pixel shift, frame summation, and vessel size measurement. Peripheral angiography performed with an automatic moving table allows imaging of the peripheral vasculature with a single contrast material injection.
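
    Two of the techniques named above, temporal frame averaging and DSA mask subtraction, can be sketched as follows; the recursive weight and the log-domain subtraction are common choices, not necessarily those described in the tutorial.

```python
# Sketch of temporal frame averaging and DSA mask subtraction (the recursive
# weight and log-domain subtraction are assumptions).
import numpy as np

def temporal_average(frames, alpha=0.25):
    """Recursive frame averaging: reduces quantum noise at some motion-blur cost."""
    avg = frames[0].astype(np.float64)
    for f in frames[1:]:
        avg = alpha * f + (1.0 - alpha) * avg
    return avg

def dsa(mask_frame, contrast_frame):
    """Digital subtraction angiography: subtract the pre-contrast mask image."""
    m = np.log1p(mask_frame.astype(np.float64))
    c = np.log1p(contrast_frame.astype(np.float64))
    diff = c - m                      # only contrast-filled vessels remain
    diff -= diff.min()
    return (255 * diff / (diff.max() + 1e-9)).astype(np.uint8)
```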

  5. Video Mosaicking for Inspection of Gas Pipelines

    NASA Technical Reports Server (NTRS)

    Magruder, Darby; Chien, Chiun-Hong

    2005-01-01

    A vision system that includes a specially designed video camera and an image-data-processing computer is under development as a prototype of robotic systems for visual inspection of the interior surfaces of pipes and especially of gas pipelines. The system is capable of providing both forward views and mosaicked radial views that can be displayed in real time or after inspection. To avoid the complexities associated with moving parts and to provide simultaneous forward and radial views, the video camera is equipped with a wide-angle (>165°) fish-eye lens aimed along the axis of a pipe to be inspected. Nine white-light-emitting diodes (LEDs) placed just outside the field of view of the lens (see Figure 1) provide ample diffuse illumination for a high-contrast image of the interior pipe wall. The video camera contains a 2/3-in. (1.7-cm) charge-coupled-device (CCD) photodetector array and functions according to the National Television System Committee (NTSC) standard. The video output of the camera is sent to an off-the-shelf video capture board (frame grabber) by use of a peripheral component interconnect (PCI) interface in the computer, which is of the 400-MHz, Pentium II (or equivalent) class. Prior video-mosaicking techniques are applicable to narrow-field-of-view (low-distortion) images of evenly illuminated, relatively flat surfaces viewed along approximately perpendicular lines by cameras that do not rotate and that move approximately parallel to the viewed surfaces. One such technique for real-time creation of mosaic images of the ocean floor involves the use of visual correspondences based on area correlation, during both the acquisition of separate images of adjacent areas and the consolidation (equivalently, integration) of the separate images into a mosaic image, in order to ensure that there are no gaps in the mosaic image. The data-processing technique used for mosaicking in the present system also involves area correlation, but with several notable differences. Because the wide-angle lens introduces considerable distortion, the image data must be processed to effectively unwarp the images (see Figure 2): the computer executes special software that includes an unwarping algorithm that takes explicit account of the cylindrical pipe geometry. To reduce the processing time needed for unwarping, the parameters of the geometric mapping between the circular view of the fish-eye lens and the pipe wall are determined in advance from calibration images and compiled into an electronic lookup table. The software incorporates the assumption that the optical axis of the camera is parallel (rather than perpendicular) to the direction of motion of the camera, and it also compensates for the decrease in illumination with distance from the ring of LEDs.
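
    The lookup-table mechanism can be sketched as below. A generic polar fish-eye-to-cylinder mapping stands in for the paper's calibrated pipe-geometry mapping, so the numbers and output sizes are illustrative only.

```python
# Sketch of the lookup-table unwarping mechanism: the geometric mapping is
# computed once and applied per frame (a generic polar mapping stands in for
# the paper's calibrated pipe-geometry mapping).
import numpy as np
import cv2

def build_unwarp_lut(src_h, src_w, out_h=256, out_w=1024):
    cx, cy = src_w / 2.0, src_h / 2.0
    r_max = min(cx, cy)
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)   # around the pipe
    radius = np.linspace(0.15, 1.0, out_h) * r_max             # along the pipe
    rr, tt = np.meshgrid(radius, theta, indexing="ij")
    map_x = (cx + rr * np.cos(tt)).astype(np.float32)
    map_y = (cy + rr * np.sin(tt)).astype(np.float32)
    return map_x, map_y  # compiled once, reused for every video frame

def unwarp(frame, map_x, map_y):
    # cv2.remap performs the fast per-frame table lookup with interpolation.
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
```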

  6. Data Visualization and Animation Lab (DVAL) overview

    NASA Technical Reports Server (NTRS)

    Stacy, Kathy; Vonofenheim, Bill

    1994-01-01

    The general capabilities of the Langley Research Center Data Visualization and Animation Laboratory are described. These capabilities include digital image processing, 3-D interactive computer graphics, data visualization and analysis, video-rate acquisition and processing of video images, photo-realistic modeling and animation, video report generation, and color hardcopies. A specialized video image processing system is also discussed.

  7. If a Picture Is Worth a Thousand Words Is Video Worth a Million? Differences in Affective and Cognitive Processing of Video and Text Cases

    ERIC Educational Resources Information Center

    Yadav, Aman; Phillips, Michael M.; Lundeberg, Mary A.; Koehler, Matthew J.; Hilden, Katherine; Dirkin, Kathryn H.

    2011-01-01

    In this investigation we assessed whether different formats of media (video, text, and video + text) influenced participants' engagement, cognitive processing and recall of non-fiction cases of people diagnosed with HIV/AIDS. For each of the cases used in the study, we designed three informationally-equivalent versions: video, text, and video +…

  8. Compression performance comparison in low delay real-time video for mobile applications

    NASA Astrophysics Data System (ADS)

    Bivolarski, Lazar

    2012-10-01

    This article compares the performance of several current video coding standards under low-delay, real-time conditions in a resource-constrained environment. The comparison is performed using the same content and the same mix of objective and perceptual quality metrics. The metric results for the different coding schemes are analyzed from the point of view of user perception and quality of service. Multiple standards are compared: MPEG-2, MPEG-4 and MPEG-4 AVC, as well as H.263. The metrics used in the comparison include SSIM, VQM and DVQ. Subjective evaluation and quality of service are discussed from the point of view of perceptual metrics and their incorporation in the coding scheme development process. The performance and the correlation of results are presented as a predictor of the performance of video compression schemes.

  9. Recent experiences with implementing a video based six degree of freedom measurement system for airplane models in a 20 foot diameter vertical spin tunnel

    NASA Technical Reports Server (NTRS)

    Snow, Walter L.; Childers, Brooks A.; Jones, Stephen B.; Fremaux, Charles M.

    1993-01-01

    A model space positioning system (MSPS), a state-of-the-art, real-time tracking system to provide the test engineer with on-line model pitch and spin rate information, is described. It is noted that the six-degree-of-freedom post-processor program will require additional programming effort, both in the automated tracking mode for high spin rates and in accuracy, to meet the measurement objectives. An independent multicamera system intended to augment the MSPS is studied using laboratory calibration methods based on photogrammetry to characterize the losses in various recording options. Data acquired to Super VHS tape, encoded with Vertical Interval Time Code and transcribed to video disk, are considered a reasonably priced choice for post-editing and processing of video data.

  10. The Decoy Duck.

    ERIC Educational Resources Information Center

    Ryan, Anna

    1997-01-01

    Describes the development processes of an instructional video for use in a course offered through the Extended Learning Institute of Northern Virginia Community College entitled Women Writers II. Characterizes the process of transforming this English course from a print-based to a distance-learning course as time-consuming, creative, and…

  11. Scalable software architecture for on-line multi-camera video processing

    NASA Astrophysics Data System (ADS)

    Camplani, Massimo; Salgado, Luis

    2011-03-01

    In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular, and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. As a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions, such as the number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and works with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.

  12. Two schemes for rapid generation of digital video holograms using PC cluster

    NASA Astrophysics Data System (ADS)

    Park, Hanhoon; Song, Joongseok; Kim, Changseob; Park, Jong-Il

    2017-12-01

    Computer-generated holography (CGH), which is a process of generating digital holograms, is computationally expensive. Recently, several methods/systems of parallelizing the process using graphic processing units (GPUs) have been proposed. Indeed, use of multiple GPUs or a personal computer (PC) cluster (each PC with GPUs) enabled great improvements in the process speed. However, extant literature has less often explored systems involving rapid generation of multiple digital holograms and specialized systems for rapid generation of a digital video hologram. This study proposes a system that uses a PC cluster and is able to more efficiently generate a video hologram. The proposed system is designed to simultaneously generate multiple frames and accelerate the generation by parallelizing the CGH computations across a number of frames, as opposed to separately generating each individual frame while parallelizing the CGH computations within each frame. The proposed system also enables the subprocesses for generating each frame to execute in parallel through multithreading. With these two schemes, the proposed system significantly reduced the data communication time for generating a digital hologram when compared with that of the state-of-the-art system.
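
    The frame-level parallelization scheme can be sketched with a process pool in which each worker computes one hologram frame. The point-source Fresnel sum below is a standard CGH kernel used here for illustration, and the sizes are toy assumptions; the actual system distributes this work across GPU-equipped PCs and additionally multithreads the subprocesses within each frame.

```python
# Sketch of frame-level CGH parallelization: one worker per hologram frame,
# using a standard point-source Fresnel sum (sizes and scene are assumptions).
import numpy as np
from concurrent.futures import ProcessPoolExecutor

WAVELEN = 532e-9   # green laser wavelength in meters (assumption)
PITCH = 8e-6       # hologram pixel pitch in meters (assumption)
N = 256            # hologram is N x N pixels

def cgh_frame(points):
    """points: iterable of (x, y, z, amplitude) object points in meters."""
    idx = (np.arange(N) - N / 2) * PITCH
    X, Y = np.meshgrid(idx, idx)
    field = np.zeros((N, N), dtype=np.complex128)
    k = 2 * np.pi / WAVELEN
    for x, y, z, a in points:
        r = np.sqrt((X - x) ** 2 + (Y - y) ** 2 + z ** 2)
        field += a * np.exp(1j * k * r) / r      # spherical wave from the point
    return np.angle(field)                        # phase-only hologram

def video_hologram(frames_of_points, workers=4):
    # Parallelize across frames rather than within a frame, as in the paper.
    with ProcessPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(cgh_frame, frames_of_points))

if __name__ == "__main__":
    scene = [[(0.0, 0.0, 0.1 + 0.001 * t, 1.0)] for t in range(8)]  # moving point
    holograms = video_hologram(scene)
    print(len(holograms), holograms[0].shape)
```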

  13. Real-Time Visualization of Tissue Ischemia

    NASA Technical Reports Server (NTRS)

    Bearman, Gregory H. (Inventor); Chrien, Thomas D. (Inventor); Eastwood, Michael L. (Inventor)

    2000-01-01

    A real-time display of tissue ischemia, comprising three CCD video cameras, each with a narrow-bandwidth filter at the correct wavelength, is discussed. The cameras simultaneously view an area of tissue suspected of having ischemic areas through beamsplitters. The output from each camera is adjusted to give the correct signal intensity for combining with the others into an image for display. If necessary, a digital signal processor (DSP) can implement algorithms for image enhancement prior to display; current DSP engines are fast enough to give real-time display. Measurement at three wavelengths, combined into a real-time Red-Green-Blue (RGB) video display with a digital signal processing (DSP) board to implement image algorithms, provides direct visualization of ischemic areas.

  14. Improving health care workers' protection against infection of Ebola hemorrhagic fever through video surveillance.

    PubMed

    Xi, Huijun; Cao, Jie; Liu, Jingjing; Li, Zhaoshen; Kong, Xiangyu; Wang, Yonghua; Chen, Jing; Ma, Su; Zhang, Lingjuan

    2016-08-01

    The purpose of this study was to investigate the importance of supervision through video surveillance in improving the quality of personal protection among health care workers working in Ebola treatment units. Wardens supervised, reminded, and guided health care workers' behavior through on-site voice and video systems while the workers were in the suspected-patient observation ward and the confirmed-patient ward of the Ebola treatment center. The observation results were recorded, and timely feedback was given to the health care workers. After 2 months of supervision, 1,797 cases of incorrect personal protection behavior were identified and corrected. The error rate continuously declined, and the rate of decline during the first 2 weeks was statistically different from that of the other time periods. Through reminding and supervising, nonstandard personal protective behaviors can be discovered and corrected, which helps health care workers standardize personal protection. The timely feedback from video surveillance can also offer prompt psychologic support and encouragement to ease psychologic pressure. Finally, this can help health care workers maintain a zero infection rate during patient treatment. A personal protective equipment protocol supervised by wardens through a video monitoring process can be used as an effective complement to conventional mutual supervision methods and can help health care workers avoid Ebola infection during treatment. Copyright © 2016 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  15. Automated multiple target detection and tracking in UAV videos

    NASA Astrophysics Data System (ADS)

    Mao, Hongwei; Yang, Chenhui; Abousleman, Glen P.; Si, Jennie

    2010-04-01

    In this paper, a novel system is presented to detect and track multiple targets in Unmanned Air Vehicles (UAV) video sequences. Since the output of the system is based on target motion, we first segment foreground moving areas from the background in each video frame using background subtraction. To stabilize the video, a multi-point-descriptor-based image registration method is performed where a projective model is employed to describe the global transformation between frames. For each detected foreground blob, an object model is used to describe its appearance and motion information. Rather than immediately classifying the detected objects as targets, we track them for a certain period of time and only those with qualified motion patterns are labeled as targets. In the subsequent tracking process, a Kalman filter is assigned to each tracked target to dynamically estimate its position in each frame. Blobs detected at a later time are used as observations to update the state of the tracked targets to which they are associated. The proposed overlap-rate-based data association method considers the splitting and merging of the observations, and therefore is able to maintain tracks more consistently. Experimental results demonstrate that the system performs well on real-world UAV video sequences. Moreover, careful consideration given to each component in the system has made the proposed system feasible for real-time applications.
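
    The per-target constant-velocity Kalman filter and the overlap-rate association metric can be sketched as below; the noise covariances and the IoU formulation are assumptions, and the detection, registration, and track-management stages are omitted.

```python
# Sketch of a per-target constant-velocity Kalman filter and an overlap-rate
# (IoU) association metric; noise levels and thresholds are assumptions.
import numpy as np

class TargetTrack:
    def __init__(self, cx, cy):
        self.x = np.array([cx, cy, 0.0, 0.0])                  # position + velocity
        self.P = np.eye(4) * 10.0                              # state covariance
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0  # dt = 1 frame
        self.H = np.eye(2, 4)                                  # observe position only
        self.Q = np.eye(4) * 0.01                              # process noise
        self.R = np.eye(2) * 1.0                               # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                                      # predicted position

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x       # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)               # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def iou(box_a, box_b):
    """Overlap rate of two (x1, y1, x2, y2) boxes, used for data association."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(box_a) + area(box_b) - inter + 1e-9)
```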

  16. Design and application of complementary educational resources for self-learning methodology

    NASA Astrophysics Data System (ADS)

    Andrés Gilarranz Casado, Carlos; Rodriguez-Sinobas, Leonor

    2016-04-01

    The main goal of this work is to enhance students' self-learning in subjects regarding irrigation and its technology. The use of visual media (video recordings) during the lectures (master classes and practicum) helps students understand the scope of the course, since they can watch the recorded material at any time and as many times as they wish. The study comprised two parts. In the first, lectures were video-recorded in the classroom during one semester (16 weeks, four hours per week) of the course "Irrigation Systems and Technology," which is taught at the Technical University of Madrid. In total, 200 videos, approximately 12 min long, were recorded. Since YouTube is a worldwide platform commonly used by students and professors, the videos were uploaded to it, and the URLs were inserted in the Moodle platform that holds the course materials. In the second part, the videos were edited and formatted, with special care taken to maintain image and audio quality. Finally, thirty videos were produced, each focusing on one of the main areas of the course, containing a clear and brief explanation of its basis, and lasting between 30 and 45 min. A survey was administered at the end of the semester to assess the students' opinion of the methodology. In the questionnaire, the students highlighted the key aspects of the learning process and, in general, were very satisfied with the methodology.

  17. Low Cost Efficient Deliverying Video Surveillance Service to Moving Guard for Smart Home.

    PubMed

    Gualotuña, Tatiana; Macías, Elsa; Suárez, Álvaro; C, Efraín R Fonseca; Rivadeneira, Andrés

    2018-03-01

    Low-cost video surveillance systems are attractive for Smart Home applications (especially in emerging economies). Those systems use the flexibility of the Internet of Things to operate the video camera only when an intrusion is detected. We are the only ones who focus on the design of protocols based on intelligent agents to communicate the video of an intrusion in real time to the guards over wireless or mobile networks. The goal is to communicate the video, in real time, to guards who may be moving towards the smart home. However, this communication suffers from sporadic disruptions that hinder control and drastically reduce user satisfaction and the operability of the system. In a novel way, we have designed a generic software architecture based on design patterns that can be adapted to any hardware in a simple way. The hardware deployed is of very low cost, and the software frameworks are free. In the experimental tests we have shown that it is possible to communicate intrusion notifications (by e-mail and by instant messaging) and the first video frames to the moving guard in less than 20 s. In addition, we automatically recovered the video frames lost during disruptions in a way transparent to the user, supported vertical handover processes, and saved energy in the smartphone's battery. Most importantly, the people who used the system reported high satisfaction.

  19. [Microinjection Monitoring System Design Applied to MRI Scanning].

    PubMed

    Xu, Yongfeng

    2017-09-30

    A microinjection monitoring system applied to MRI scanning is introduced. A micro camera probe is extended into the main magnet for real-time video monitoring of the injection tube terminal. A LabVIEW-based program was created to analyze and process the real-time video information, and the feedback signal is used for intelligent control of the modified injection pump. The real-time monitoring system enables effective injection even though the injection device is away from the sample, which sits inside the magnet room and is not visible. A 9.4 T MRI scanning experiment showed that the system works stably in the ultra-high field and does not affect the MRI scans.

  20. A Near-Optimal Distributed QoS Constrained Routing Algorithm for Multichannel Wireless Sensor Networks

    PubMed Central

    Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Yen, Hong-Hsu; Hsieh, Yu-Jen

    2013-01-01

    One of the important applications in Wireless Sensor Networks (WSNs) is video surveillance that includes the tasks of video data processing and transmission. Processing and transmission of image and video data in WSNs has attracted a lot of attention in recent years. This is known as Wireless Visual Sensor Networks (WVSNs). WVSNs are distributed intelligent systems for collecting image or video data with unique performance, complexity, and quality of service challenges. WVSNs consist of a large number of battery-powered and resource constrained camera nodes. End-to-end delay is a very important Quality of Service (QoS) metric for video surveillance application in WVSNs. How to meet the stringent delay QoS in resource constrained WVSNs is a challenging issue that requires novel distributed and collaborative routing strategies. This paper proposes a Near-Optimal Distributed QoS Constrained (NODQC) routing algorithm to achieve an end-to-end route with lower delay and higher throughput. A Lagrangian Relaxation (LR)-based routing metric that considers the “system perspective” and “user perspective” is proposed to determine the near-optimal routing paths that satisfy end-to-end delay constraints with high system throughput. The empirical results show that the NODQC routing algorithm outperforms others in terms of higher system throughput with lower average end-to-end delay and delay jitter. In this paper, for the first time, the algorithm shows how to meet the delay QoS and at the same time how to achieve higher system throughput in stringently resource constrained WVSNs.

  1. Parallel processing approach to transform-based image coding

    NASA Astrophysics Data System (ADS)

    Normile, James O.; Wright, Dan; Chu, Ken; Yeh, Chia L.

    1991-06-01

    This paper describes a flexible parallel processing architecture designed for use in real-time video processing. The system consists of floating-point DSP processors connected to each other via fast serial links; each processor has access to a globally shared memory. A multiple-bus architecture in combination with a dual-ported memory allows communication with a host control processor. The system has been applied to prototyping of video compression and decompression algorithms, and the decomposition of transform-based decompression algorithms into a form suitable for parallel processing is described. A technique for automatic load balancing among the processors is developed and discussed, and results are presented with image statistics and data rates. Finally, techniques for accelerating system throughput are analyzed, and results from the application of one such modification are described.

  2. Complex effusive events at Kilauea as documented by the GOES satellite and remote video cameras

    USGS Publications Warehouse

    Harris, A.J.L.; Thornber, C.R.

    1999-01-01

    GOES provides thermal data for all of the Hawaiian volcanoes once every 15 min. We show how volcanic radiance time series produced from this data stream can be used as a simple measure of effusive activity. Two types of radiance trends in these time series can be used to monitor effusive activity: (a) Gradual variations in radiance reveal steady flow-field extension and tube development. (b) Discrete spikes correlate with short bursts of activity, such as lava fountaining or lava-lake overflows. We are confident that any effusive event covering more than 10,000 m2 of ground in less than 60 min will be unambiguously detectable using this approach. We demonstrate this capability using GOES, video camera and ground-based observational data for the current eruption of Kilauea volcano (Hawai'i). A GOES radiance time series was constructed from 3987 images between 19 June and 12 August 1997. This time series displayed 24 radiance spikes elevated more than two standard deviations above the mean; 19 of these are correlated with video-recorded short-burst effusive events. Less ambiguous events are interpreted, assessed and related to specific volcanic events by simultaneous use of permanently recording video camera data and ground-observer reports. The GOES radiance time series are automatically processed on data reception and made available in near-real-time, so such time series can contribute to three main monitoring functions: (a) automatically alerting major effusive events; (b) event confirmation and assessment; and (c) establishing effusive event chronology.
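
    The spike test described above (radiance more than two standard deviations above the series mean) reduces to a few lines; the exact thresholding details of the operational processing are assumptions.

```python
# Sketch of the spike test: flag samples more than two standard deviations
# above the series mean (operational thresholding details assumed).
import numpy as np

def radiance_spikes(radiance, n_sigma=2.0):
    """radiance: 1-D time series of GOES volcanic radiance values."""
    r = np.asarray(radiance, dtype=np.float64)
    mu, sigma = r.mean(), r.std()
    return np.flatnonzero(r > mu + n_sigma * sigma)  # candidate effusive events
```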

  3. Accidental Turbulent Discharge Rate Estimation from Videos

    NASA Astrophysics Data System (ADS)

    Ibarra, Eric; Shaffer, Franklin; Savaş, Ömer

    2015-11-01

    A technique to estimate the volumetric discharge rate in accidental oil releases from high-speed video streams is described. The essence of the method is similar to PIV processing; however, the cross-correlation is carried out on the visible features of the efflux, which are usually turbulent, opaque and immiscible. The key step in the process is to perform a pixelwise time filtering on the video stream, in which the filter parameters are commensurate with the scales of the large eddies. The velocity field extracted from the shell of visible features is then used to construct an approximate velocity profile within the discharge. The technique has been tested in laboratory experiments using both water and oil jets at Re ~ 10^5. It is accurate to within 20%, which is sufficient for initial responders to deploy adequate resources for containment. The software package requires minimal user input and is intended for deployment on an ROV in the field. Supported by DOI via NETL.
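
    The two key steps, pixelwise time filtering commensurate with the large-eddy scales and cross-correlation of the visible features, can be sketched as below; the moving-average filter and the FFT phase correlation are stand-ins for the authors' exact formulation.

```python
# Sketch of the two key steps: pixelwise temporal filtering over the eddy time
# scale, then cross-correlation between filtered frames (choices assumed).
import numpy as np
from scipy.ndimage import uniform_filter1d

def temporal_filter(stack, eddy_frames=15):
    """stack: (T, H, W); moving-average each pixel over the eddy time scale."""
    return uniform_filter1d(stack.astype(np.float64), eddy_frames, axis=0)

def displacement(win_a, win_b):
    """Phase correlation between two interrogation windows, as in PIV."""
    fa, fb = np.fft.fft2(win_a), np.fft.fft2(win_b)
    cross = fa * np.conj(fb)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak coordinates to signed displacements.
    return [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
```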

  4. Real-time heart rate measurement for multi-people using compressive tracking

    NASA Astrophysics Data System (ADS)

    Liu, Lingling; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Dong, Liquan; Ma, Feilong; Pang, Zongguang; Cai, Zhi; Zhang, Yachu; Hua, Peng; Yuan, Ruifeng

    2017-09-01

    The rise of the aging population has created a demand for inexpensive, unobtrusive, automated health care solutions. Image PhotoPlethysmoGraphy (IPPG) aids in the development of these solutions by allowing for the extraction of physiological signals from video data. However, the main deficiencies of recent IPPG methods are that they are non-automated, non-real-time and susceptible to motion artifacts (MA). In this paper, a real-time heart rate (HR) detection method for multiple subjects simultaneously was proposed and realized using the open computer vision (OpenCV) library. It consists of automatically acquiring facial video of multiple subjects through a webcam, detecting the region of interest (ROI) in the video, reducing the false detection rate with our improved Adaboost algorithm, reducing MA with our improved compressive tracking (CT) algorithm, applying a wavelet noise-suppression algorithm for denoising, and using multiple threads for higher detection speed. For comparison, HR was measured simultaneously using a medical pulse oximetry device for every subject during all sessions. Experimental results on a data set of 30 subjects show that the maximum average absolute error of the heart rate estimate is less than 8 beats per minute (BPM), and the per-frame processing almost reaches real time: in experiments with video recordings of ten subjects at a pixel resolution of 600 × 800, the average HR detection speed was about 17 frames per second (fps).
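
    The final HR estimation step can be sketched as an FFT peak search on the mean green-channel trace of the tracked face ROI; the physiological band limits below are assumptions, and the face detection, compressive tracking, wavelet denoising, and multithreading stages are omitted.

```python
# Sketch of the final IPPG step: mean green-channel trace -> FFT peak -> BPM
# (band limits are assumptions; earlier pipeline stages are omitted).
import numpy as np

def heart_rate_bpm(green_means, fps):
    """green_means: per-frame mean green intensity of the tracked face ROI."""
    x = np.asarray(green_means, dtype=np.float64)
    x -= x.mean()                                  # remove the DC component
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)         # 42-240 BPM physiological band
    peak = freqs[band][np.argmax(spectrum[band])]  # dominant pulse frequency
    return 60.0 * peak
```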

  5. Applying emerging digital video interface standards to airborne avionics sensor and digital map integrations: benefits outweigh the initial costs

    NASA Astrophysics Data System (ADS)

    Kuehl, C. Stephen

    1996-06-01

    Video signal system performance can be compromised in a military aircraft cockpit management system (CMS) with the tailoring of vintage Electronics Industries Association (EIA) RS170 and RS343A video interface standards. Video analog interfaces degrade when induced system noise is present. Further signal degradation has been traditionally associated with signal data conversions between avionics sensor outputs and the cockpit display system. If the CMS engineering process is not carefully applied during the avionics video and computing architecture development, extensive and costly redesign will occur when visual sensor technology upgrades are incorporated. Close monitoring and technical involvement in video standards groups provides the knowledge-base necessary for avionic systems engineering organizations to architect adaptable and extendible cockpit management systems. With the Federal Communications Commission (FCC) in the process of adopting the Digital HDTV Grand Alliance System standard proposed by the Advanced Television Systems Committee (ATSC), the entertainment and telecommunications industries are adopting and supporting the emergence of new serial/parallel digital video interfaces and data compression standards that will drastically alter present NTSC-M video processing architectures. The re-engineering of the U.S. Broadcasting system must initially preserve the electronic equipment wiring networks within broadcast facilities to make the transition to HDTV affordable. International committee activities in technical forums like ITU-R (former CCIR), ANSI/SMPTE, IEEE, and ISO/IEC are establishing global consensus on video signal parameterizations that support a smooth transition from existing analog based broadcasting facilities to fully digital computerized systems. An opportunity exists for implementing these new video interface standards over existing video coax/triax cabling in military aircraft cockpit management systems. Reductions in signal conversion processing steps, major improvement in video noise reduction, and an added capability to pass audio/embedded digital data within the digital video signal stream are the significant performance increases associated with the incorporation of digital video interface standards. By analyzing the historical progression of military CMS developments, establishing a systems engineering process for CMS design, tracing the commercial evolution of video signal standardization, adopting commercial video signal terminology/definitions, and comparing/contrasting CMS architecture modifications using digital video interfaces; this paper provides a technical explanation on how a systems engineering process approach to video interface standardization can result in extendible and affordable cockpit management systems.

  6. Optically phase-locked electronic speckle pattern interferometer

    NASA Astrophysics Data System (ADS)

    Moran, Steven E.; Law, Robert; Craig, Peter N.; Goldberg, Warren M.

    1987-02-01

    The design, theory, operation, and characteristics of an optically phase-locked electronic speckle pattern interferometer (OPL-ESPI) are described. The OPL-ESPI system couples an optical phase-locked loop with an ESPI system to generate real-time equal Doppler speckle contours of moving objects from unstable sensor platforms. In addition, the optical phase-locked loop provides the basis for a new ESPI video signal processing technique which incorporates local oscillator phase shifting coupled with video sequential frame subtraction.

  7. Time-Lapse and Slow-Motion Tracking of Temperature Changes: Response Time of a Thermometer

    ERIC Educational Resources Information Center

    Moggio, L.; Onorato, P.; Gratton, L. M.; Oss, S.

    2017-01-01

    We propose the use of a smartphone based time-lapse and slow-motion video techniques together with tracking analysis as valuable tools for investigating thermal processes such as the response time of a thermometer. The two simple experimental activities presented here, suitable also for high school and undergraduate students, allow one to measure…

  8. Associations between active video gaming and other energy-balance related behaviours in adolescents: a 24-hour recall diary study.

    PubMed

    Simons, Monique; Chinapaw, Mai J M; Brug, Johannes; Seidell, Jaap; de Vet, Emely

    2015-03-05

    Active video games may contribute to reducing time spent in sedentary activities, increasing physical activity and preventing excessive weight gain in adolescents. Active video gaming can, however, only be beneficial for weight management when it replaces sedentary activities and not other physical activity, and when it is not associated with a higher energy intake. The current study therefore examines the association between active video gaming and other energy-balance-related behaviours (EBRBs). Adolescents (12-16 years) with access to an active video game and who reported to spend at least one hour per week in active video gaming were invited to participate in the study. They were asked to complete electronic 24-hour recall diaries on five randomly assigned weekdays and two randomly assigned weekend-days in a one-month period, reporting on time spent playing active and non-active video games and on other EBRBs. Findings indicated that adolescents who reported playing active video games on assessed days also reported spending more time playing non-active video games (Median = 23.6, IQR = 56.8 minutes per week) compared to adolescents who did not report playing active video games on assessed days (Median = 10.0, IQR = 51.3 minutes per week, P < 0.001 (Mann Whitney test)). No differences between these groups were found in other EBRBs. Among those who played active video games on assessed days, active video game time was positively yet weakly associated with TV/DVD time and snack consumption. Active video game time was not significantly associated with other activities and sugar-sweetened beverages intake. The results suggest that it is unlikely that time spent by adolescents in playing active video games replaces time spent in other physically active behaviours or sedentary activities. Spending more time playing active video games does seem to be associated with a small, but significant increase in intake of snacks. This suggests that interventions aimed at increasing time spent on active video gaming, may have unexpected side effects, thus warranting caution.

  9. Twelve tips for reducing production time and increasing long-term usability of instructional video.

    PubMed

    Norman, Marie K

    2017-08-01

    The use of instructional video is increasing across all disciplines and levels of education. Although video has a number of distinct advantages for course delivery and student learning, it can also be time-consuming and resource-intensive to produce, which imposes a burden on busy faculty. With video poised to play a larger role in medical education, we need strategies for streamlining video production and ensuring that the video we produce is of lasting value. This article draws on learning research and best practices in educational technology, along with the author's experience in online education and video production. It offers 12 practical tips for reducing the initial time investment in video production and creating video that can be reused long into the future. These tips can help faculty and departments create high-quality instructional video while using their time and resources more wisely.

  10. Is Video-Based Education an Effective Method in Surgical Education? A Systematic Review.

    PubMed

    Ahmet, Akgul; Gamze, Kus; Rustem, Mustafaoglu; Sezen, Karaborklu Argut

    2018-02-12

    Visual signs draw more attention during the learning process, and video is one of the most effective tools because it includes many visual cues. This systematic review set out to explore the influence of video in surgical education. We reviewed the current evidence for video-based surgical education methods and discuss their advantages and disadvantages for the teaching of technical and nontechnical surgical skills. This systematic review was conducted according to the guidelines defined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The electronic databases the Cochrane Library, Medline (PubMed), and ProQuest were searched from their inception to 30 January 2016. The Medical Subject Headings (MeSH) terms and keywords used were "video," "education," and "surgery." We analyzed all full-text, randomized and nonrandomized clinical trials and observational studies involving video-based education methods in any type of surgery. Here, "education" means a medical resident's or student's training and teaching process, not patient education. We did not impose restrictions on language or publication date. A total of nine articles that met the inclusion criteria were included. These trials enrolled 507 participants, and the number of participants per trial ranged from 10 to 172. Nearly all of the studies reviewed report significant knowledge gains from video-based education techniques. The findings of this systematic review provide fair- to good-quality studies demonstrating significant gains in knowledge compared with traditional teaching. Adding video to simulator exercises or 3D animations has beneficial effects on training time, learning duration, acquisition of surgical skills, and trainee satisfaction. Video-based education has potential for use in surgical education, as trainees face significant barriers in their practice, and the method is effective according to the recent literature. Video should be used in addition to standard techniques in surgical education. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  11. A novel Kalman filter based video image processing scheme for two-photon fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Huang, Xia; Li, Chunqiang; Xiao, Chuan; Qian, Wei

    2016-03-01

    Two-photon fluorescence microscopy (TPFM) is an ideal optical imaging technique for monitoring the interaction between fast-moving viruses and their hosts. However, due to strong, unavoidable background noise from the culture, videos obtained by this technique are too noisy to elucidate this fast infection process without video image processing. In this study, we developed a novel scheme to eliminate background noise, recover background bacteria images, and improve video quality. In our scheme, we modified and implemented the following methods for both host and virus videos: a correlation method, a round-identification method, tree-structured nonlinear filters, Kalman filters, and a cell-tracking method. After these procedures, most of the noise was eliminated, and the host images were recovered with their directions of motion and speeds highlighted in the videos. From the analysis of the processed videos, 93% of bacteria and 98% of viruses were correctly detected in each frame on average.
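
    As a rough illustration of the Kalman filtering step named in this record, the sketch below implements a constant-velocity tracker for a single cell centroid; the state layout and noise levels are assumptions for illustration, not the authors' parameters.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for tracking one cell centroid
# across video frames. State = [x, y, vx, vy]; all noise levels are
# illustrative assumptions, not the paper's parameters.
dt = 1.0                                   # one frame between updates
F = np.array([[1, 0, dt, 0],               # state transition
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # only position is measured
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01                       # process noise
R = np.eye(2) * 4.0                        # measurement noise

x = np.zeros(4)                            # initial state
P = np.eye(4) * 10.0                       # initial uncertainty

def kalman_step(x, P, z):
    """One predict/update cycle given a measured centroid z = (x, y)."""
    x = F @ x                              # predict
    P = F @ P @ F.T + Q
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y                          # update
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [np.array([1.0, 0.9]), np.array([2.1, 2.0]), np.array([3.0, 3.2])]:
    x, P = kalman_step(x, P, z)
print("estimated position and velocity:", x)
```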

  12. Video Monitoring a Simulation-Based Quality Improvement Program in Bihar, India.

    PubMed

    Dyer, Jessica; Spindler, Hilary; Christmas, Amelia; Shah, Malay Bharat; Morgan, Melissa; Cohen, Susanna R; Sterne, Jason; Mahapatra, Tanmay; Walker, Dilys

    2018-04-01

    Simulation-based training has become an accepted clinical training andragogy in high-resource settings, with its use increasing in low-resource settings. Video recordings of simulated scenarios are commonly used by facilitators. Beyond using the videos during debrief sessions, researchers can also analyze the simulation videos to quantify technical and nontechnical skills during simulated scenarios over time. Little is known about the feasibility and use of large-scale systems to video-record and analyze simulation and debriefing data for monitoring and evaluation in low-resource settings. This manuscript describes the process of designing and implementing a large-scale video monitoring system. Mentees and mentors were consented, and all simulations and debriefs conducted at 320 Primary Health Centers (PHCs) were video-recorded. The system design, number of video recordings, and inter-rater reliability of the coded videos were assessed. The final dataset included a total of 11,278 videos. Overall, a total of 2,124 simulation videos were coded, and 183 (12%) were blindly double-coded. For the double-coded sample, the average inter-rater reliability (IRR) scores were 80% for nontechnical skills and 94% for clinical technical skills. Among 4,450 long debrief videos received, 216 were selected for coding, and all were double-coded. Data quality of the simulation videos was found to be very good in terms of recorded instances of "unable to see" and "unable to hear" in Phases 1 and 2. This study demonstrates that video monitoring systems can be effectively implemented at scale in resource-limited settings. Further, video monitoring systems can play several vital roles within program implementation, including monitoring and evaluation, provision of actionable feedback to program implementers, and assurance of program fidelity.
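
    The inter-rater reliability figures above are agreement scores between two coders; a minimal sketch of a percent-agreement computation, with hypothetical checklist codes, is shown below.

```python
# Minimal sketch: inter-rater reliability as percent agreement between
# two coders' binary skill checklists for one simulation video.
# The checklist items and codes here are hypothetical.
coder_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
coder_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]

agreements = sum(a == b for a, b in zip(coder_a, coder_b))
irr = agreements / len(coder_a)
print(f"percent agreement: {irr:.0%}")   # 80% for this hypothetical pair
```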

  13. Real-time image processing for passive mmW imagery

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen; Paolini, Aaron; Bonnett, James; Harrity, Charles; Mackrides, Daniel; Dillon, Thomas E.; Martin, Richard D.; Schuetz, Christopher A.; Kelmelis, Eric; Prather, Dennis W.

    2015-05-01

    The transmission characteristics of millimeter waves (mmWs) make them suitable for many applications in defense and security, from airport preflight scanning to penetrating degraded visual environments such as brownout or heavy fog. While the cold sky provides sufficient illumination for these images to be taken passively in outdoor scenarios, this utility comes at a cost; the diffraction limit of the longer wavelengths involved leads to lower resolution imagery compared to the visible or IR regimes, and the low power levels inherent to passive imagery allow the data to be more easily degraded by noise. Recent techniques leveraging optical upconversion have shown significant promise, but are still subject to fundamental limits in resolution and signal-to-noise ratio. To address these issues we have applied techniques developed for visible and IR imagery to decrease noise and increase resolution in mmW imagery. We have developed these techniques into fieldable software, making use of GPU platforms for real-time operation of computationally complex image processing algorithms. We present data from a passive, 77 GHz, distributed aperture, video-rate imaging platform captured during field tests at full video rate. These videos demonstrate the increase in situational awareness that can be gained through applying computational techniques in real-time without needing changes in detection hardware.

  14. A Secure and Robust Compressed Domain Video Steganography for Intra- and Inter-Frames Using Embedding-Based Byte Differencing (EBBD) Scheme

    PubMed Central

    Idbeaa, Tarik; Abdul Samad, Salina; Husain, Hafizah

    2016-01-01

    This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most current video steganographic techniques, which consider only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of the luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, EBBD rests on two security concepts: data encryption and data concealment. Hence, during the embedding process, the secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security to the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo-random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD, with a better trade-off in terms of imperceptibility and payload compared with previous techniques, while at the same time ensuring minimal bitrate increase and negligible degradation of PSNR values. PMID:26963093
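
    The key-driven selection of candidate coefficients described above can be loosely sketched as follows; the parity-based embedding rule is a simplification for illustration, not the authors' exact EBBD scheme.

```python
import random

# Loose sketch of the key-driven selection idea: a shared pseudo-random
# key picks candidate AC coefficients inside one 8x8 block of quantized
# DCT coefficients. The parity-based embedding rule below is a
# simplification, not the authors' exact EBBD scheme.
SECRET_KEY = 0xBEEF                        # hypothetical shared key

def candidate_positions(key, n_candidates=4):
    """Choose AC positions (index 0, the DC coefficient, is skipped)."""
    rng = random.Random(key)
    return rng.sample(range(1, 64), n_candidates)

def embed_bits(block, bits, key):
    """Force the parity of selected AC coefficients to carry the bits."""
    block = list(block)
    for pos, bit in zip(candidate_positions(key, len(bits)), bits):
        if abs(block[pos]) % 2 != bit:
            block[pos] += 1 if block[pos] >= 0 else -1
    return block

def extract_bits(block, n_bits, key):
    return [abs(block[pos]) % 2 for pos in candidate_positions(key, n_bits)]

block = [12, -3, 0, 5, 2, -1] + [0] * 58   # toy quantized 8x8 block, flattened
stego = embed_bits(block, [1, 0, 1, 1], SECRET_KEY)
print(extract_bits(stego, 4, SECRET_KEY))  # -> [1, 0, 1, 1]
```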

  15. A Secure and Robust Compressed Domain Video Steganography for Intra- and Inter-Frames Using Embedding-Based Byte Differencing (EBBD) Scheme.

    PubMed

    Idbeaa, Tarik; Abdul Samad, Salina; Husain, Hafizah

    2016-01-01

    This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most current video steganographic techniques, which consider only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of the luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, EBBD rests on two security concepts: data encryption and data concealment. Hence, during the embedding process, the secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security to the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo-random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD, with a better trade-off in terms of imperceptibility and payload compared with previous techniques, while at the same time ensuring minimal bitrate increase and negligible degradation of PSNR values.

  16. Lightning attachment process to common buildings

    NASA Astrophysics Data System (ADS)

    Saba, M. M. F.; Paiva, A. R.; Schumann, C.; Ferro, M. A. S.; Naccarato, K. P.; Silva, J. C. O.; Siqueira, F. V. C.; Custódio, D. M.

    2017-05-01

    The physical mechanism of lightning attachment to grounded structures is one of the most important issues in lightning physics research, and it is the basis for the design of lightning protection systems. Most of what is known about the attachment process comes from leader propagation models that are largely based on laboratory observations of long electrical discharges, or from observations of lightning attachment to tall structures. In this paper we use high-speed videos to analyze the attachment process of downward lightning flashes to an ordinary residential building. For the first time, we present characteristics of the attachment process to common structures that are present in almost every city (in this case, two buildings under 60 m in São Paulo City, Brazil). Parameters such as striking distance and connecting leader speed, widely used in lightning attachment models and in lightning protection standards, are reported in this work.

    Plain Language Summary: Since the time of Benjamin Franklin, no one has ever recorded high-speed video images of a lightning connection to a common building. It is very difficult to do: cameras need to be very close to the structure chosen for observation, and a long observation time is required to register one lightning strike to that particular structure. Models and theories used to determine the zone of protection of a lightning rod have been developed, but they all suffer from a lack of field data. This paper provides results from high-speed video observations of lightning attachment to low buildings of the kind commonly found in almost every populated area around the world. The proximity of the camera and the high frame rate allowed us to see details that will improve the understanding of the attachment process and, consequently, the models and theories used by lightning protection standards. The paper also presents striking images and videos of lightning flashes connecting to lightning rods, of interest not only to the lightning physics community and to engineers who work on lightning protection, but to anyone who wants to understand how a lightning rod works.

  17. 2D to 3D conversion implemented in different hardware

    NASA Astrophysics Data System (ADS)

    Ramos-Diaz, Eduardo; Gonzalez-Huitron, Victor; Ponomaryov, Volodymyr I.; Hernandez-Fragoso, Araceli

    2015-02-01

    Conversion of available 2D data for release as 3D content is a hot topic for providers and for the success of 3D applications in general. It relies entirely on virtual view synthesis of a second view given the original 2D video. Disparity map (DM) estimation is a central task in 3D generation, but it remains a difficult problem for rendering novel images precisely. Different approaches to DM reconstruction exist; among them, manual and semiautomatic methods can produce high-quality DMs, but they are time consuming and computationally expensive. In this paper, several hardware implementations of frameworks for automatic 3D color video generation based on 2D real video sequences are proposed. The novel framework processes stereo pairs using the following blocks: CIE L*a*b* color space conversion, stereo matching via a pyramidal scheme, color segmentation by k-means on the a*b* color plane, DM estimation using stereo matching between left and right images (or neighboring frames in a video), adaptive post-filtering, and finally anaglyph 3D scene generation. The technique has been implemented on a TMS320DM648 DSP, in Matlab's Simulink module on a PC with Windows 7, and on a graphics card (NVIDIA Quadro K2000), demonstrating that the proposed approach can be applied in real-time processing mode. Processing times, mean Structural Similarity Index Measure (SSIM), and Bad Matching Pixels (B) values for the different hardware implementations (GPU, single CPU, and DSP) are reported in this paper.
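
    The final block of the framework above, anaglyph generation, is simple to illustrate; the sketch below fuses a left/right pair into a red-cyan anaglyph using synthetic frames as stand-ins for real stereo views.

```python
import numpy as np

# Toy illustration of red-cyan anaglyph generation, the last block of the
# framework above: take the red channel from the left view and the
# green/blue channels from the right view. Arrays are H x W x 3 RGB.
def make_anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
    anaglyph = right_rgb.copy()
    anaglyph[..., 0] = left_rgb[..., 0]    # red channel from the left eye's view
    return anaglyph

# synthetic stereo pair standing in for a real left/right frame pair
left = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
right = np.roll(left, 8, axis=1)           # crude horizontal disparity
print(make_anaglyph(left, right).shape)    # (480, 640, 3)
```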

  18. Subjective Quality Assessment of Underwater Video for Scientific Applications

    PubMed Central

    Moreno-Roldán, José-Miguel; Luque-Nieto, Miguel-Ángel; Poncela, Javier; Díaz-del-Río, Víctor; Otero, Pablo

    2015-01-01

    Underwater video services could be a key application for improving scientific knowledge of the vast oceanic resources of our planet. However, limitations in the capacity of currently available technology for underwater networks (UWSNs) raise the question of the feasibility of these services. When transmitting video, the main constraints are the limited bandwidth and the high propagation delays. At the same time, service performance depends on the needs of the target group. This paper considers the problem of estimating the Mean Opinion Score (a standard quality measure) in UWSNs based on objective methods and addresses the topic of quality assessment in potential underwater video services from a subjective point of view. The experimental design and the results of a test planned according to standardized psychometric methods are presented. The subjects used in the quality assessment test were ocean scientists. Video sequences were recorded in actual exploration expeditions and were processed to simulate conditions similar to those that might be found in UWSNs. Our experimental results show that the videos are considered useful for scientific purposes even at very low bitrates. PMID:26694400

  19. Capturing and displaying microscopic images used in medical diagnostics and forensic science using 4K video resolution - an application in higher education.

    PubMed

    Maier, Hans; de Heer, Gert; Ortac, Ajda; Kuijten, Jan

    2015-11-01

    To analyze, interpret and evaluate microscopic images used in medical diagnostics and forensic science, video images for educational purposes were made at a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as High-Definition Video (1920 × 1080 pixels). The unprecedented high resolution makes it possible to see details that remain invisible in any other video format. The images of the specimens (blood cells, tissue sections, hair, fibre, etc.) are recorded using a 4K video camera attached to a light microscope. After processing, this resulted in very sharp and highly detailed images. This material was then used in education for classroom discussion. Spoken explanation by experts in the field of medical diagnostics and forensic science was also added to the high-resolution video images to make the material suitable for self-study. © 2015 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.

  20. Video Transmission for Third Generation Wireless Communication Systems

    PubMed Central

    Gharavi, H.; Alamouti, S. M.

    2001-01-01

    This paper presents a twin-class, unequally protected video transmission system for wireless channels. Video partitioning based on a separation of the Variable Length Coded (VLC) Discrete Cosine Transform (DCT) coefficients within each block is considered for constant-bitrate (CBR) transmission. In the splitting process, the fraction of bits assigned to each of the two partitions is adjusted according to the requirements of the unequal error protection scheme employed. Partitioning is then applied to the ITU-T H.263 coding standard. As a transport vehicle, we consider one of the leading third-generation cellular radio standards, WCDMA. A dual-priority transmission system is then invoked on the WCDMA system, where the video data, after being broken into two streams, is unequally protected. We use a very simple error correction coding scheme for illustration and then propose more sophisticated forms of unequal protection of the digitized video signals. We show that this strategy results in significantly higher quality of the reconstructed video data when it is transmitted over time-varying multipath fading channels. PMID:27500033

  21. Subjective Quality Assessment of Underwater Video for Scientific Applications.

    PubMed

    Moreno-Roldán, José-Miguel; Luque-Nieto, Miguel-Ángel; Poncela, Javier; Díaz-del-Río, Víctor; Otero, Pablo

    2015-12-15

    Underwater video services could be a key application for improving scientific knowledge of the vast oceanic resources of our planet. However, limitations in the capacity of currently available technology for underwater networks (UWSNs) raise the question of the feasibility of these services. When transmitting video, the main constraints are the limited bandwidth and the high propagation delays. At the same time, service performance depends on the needs of the target group. This paper considers the problem of estimating the Mean Opinion Score (a standard quality measure) in UWSNs based on objective methods and addresses the topic of quality assessment in potential underwater video services from a subjective point of view. The experimental design and the results of a test planned according to standardized psychometric methods are presented. The subjects used in the quality assessment test were ocean scientists. Video sequences were recorded in actual exploration expeditions and were processed to simulate conditions similar to those that might be found in UWSNs. Our experimental results show that the videos are considered useful for scientific purposes even at very low bitrates.

  22. [Development of a video image system for wireless capsule endoscopes based on DSP].

    PubMed

    Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua

    2008-02-01

    A video image recorder to record video pictures from wireless capsule endoscopes was designed. The TMS320C6211 DSP from Texas Instruments Inc. is the core processor of this system. Images are periodically acquired from a Composite Video Broadcast Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data are transported from a high-speed first-in first-out (FIFO) buffer to the digital signal processor (DSP) under the control of a complex programmable logic device (CPLD). The JPEG algorithm is adopted for image coding, and the compressed data are stored from the DSP to a Compact Flash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast discrete cosine transform (DCT) algorithm and a fast coefficient quantization algorithm are used to accelerate DSP operation and reduce the size of the executable code. At the same time, proper addresses are assigned to the memories, which differ in speed, and the memory structure is optimized. In addition, this system makes extensive use of Extended Direct Memory Access (EDMA) to transport and process image data, which results in stable, high performance.
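
    As a rough sketch of the JPEG-style coding step described above, the code below applies a 2D DCT to one 8 × 8 tile and quantizes the coefficients; the flat quantization step is a placeholder, not the JPEG standard table.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Minimal sketch of JPEG-style block coding: 2D DCT of one 8x8 pixel
# tile followed by quantization. The flat quantization step q is a
# placeholder, not the JPEG standard quantization table.
def encode_block(tile: np.ndarray, q: int = 16) -> np.ndarray:
    coeffs = dctn(tile.astype(float) - 128.0, norm="ortho")
    return np.round(coeffs / q).astype(int)           # quantized coefficients

def decode_block(qcoeffs: np.ndarray, q: int = 16) -> np.ndarray:
    tile = idctn(qcoeffs.astype(float) * q, norm="ortho") + 128.0
    return np.clip(np.round(tile), 0, 255).astype(np.uint8)

tile = np.random.randint(0, 256, (8, 8), dtype=np.uint8)   # toy image tile
restored = decode_block(encode_block(tile))
print("max reconstruction error:", int(np.abs(tile.astype(int) - restored).max()))
```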

  23. Integrated remotely sensed datasets for disaster management

    NASA Astrophysics Data System (ADS)

    McCarthy, Timothy; Farrell, Ronan; Curtis, Andrew; Fotheringham, A. Stewart

    2008-10-01

    Video imagery can be acquired from aerial, terrestrial and marine based platforms and has been exploited for a range of remote sensing applications over the past two decades. Examples include coastal surveys using aerial video, route-corridor infrastructure surveys using vehicle-mounted video cameras, aerial surveys over forestry and agriculture, underwater habitat mapping and disaster management. Many of these video systems are based on interlaced television standards, such as North America's NTSC and the European SECAM and PAL systems, and are recorded in various video formats. This technology has recently been employed as a front-line remote sensing technology for post-disaster damage assessment. This paper traces the development of spatial video as a remote sensing tool from the early 1980s to the present day. The background to a new spatial-video research initiative based at the National University of Ireland, Maynooth (NUIM), is described. New improvements are proposed, covering low-cost encoders, easy-to-use software decoders, timing issues and interoperability. These developments will enable specialists and non-specialists to collect, process and integrate these datasets with minimal support. This integrated approach will enable decision makers to access relevant remotely sensed datasets quickly and so carry out rapid damage assessment during and after a disaster.

  24. Cross-Modal Approach for Karaoke Artifacts Correction

    NASA Astrophysics Data System (ADS)

    Yan, Wei-Qi; Kankanhalli, Mohan S.

    In this chapter, we combine adaptive sampling with video analogies (VA) to correct the audio stream in the karaoke environment κ = {κ(t) : κ(t) = (U(t), K(t)), t ∈ (t_s, t_e)}, where t_s and t_e are the start and end times respectively and U(t) is the user multimedia data. We employ multiple streams from the karaoke data K(t) = (K_V(t), K_M(t), K_S(t)), where K_V(t), K_M(t) and K_S(t) are the video, the musical accompaniment and the original singer's rendition respectively, along with the user multimedia data U(t) = (U_A(t), U_V(t)), where U_V(t) is the user video captured with a camera and U_A(t) is the user's rendition of the song. We analyze the audio and video streaming features Ψ(κ) = {Ψ(U(t), K(t))} = {Ψ(U(t)), Ψ(K(t))} = {Ψ_U(t), Ψ_K(t)} to produce the corrected singing, namely the output U′(t), which is made as close as possible to the original singer's rendition. Note that Ψ represents any kind of feature processing.

  25. Microcomputer-Based Digital Signal Processing Laboratory Experiments.

    ERIC Educational Resources Information Center

    Tinari, Jr., Rocco; Rao, S. Sathyanarayan

    1985-01-01

    Describes a system (Apple II microcomputer interfaced to flexible, custom-designed digital hardware) which can provide: (1) Fast Fourier Transform (FFT) computation on real-time data with a video display of the spectrum; (2) frequency synthesis experiments using the inverse FFT; and (3) real-time digital filtering experiments. (JN)
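
    The first experiment listed above, an FFT with a spectrum display, reduces to a few lines in modern tooling; the sketch below uses numpy on a synthetic 440 Hz tone rather than real-time data.

```python
import numpy as np

# Minimal sketch of the first experiment in the record above: compute an
# FFT of a sampled signal and report the dominant spectral peak. The
# 440 Hz test tone and the sample rate are arbitrary choices.
fs = 8000                                    # sample rate in Hz
t = np.arange(0, 0.5, 1.0 / fs)              # half a second of samples
signal = np.sin(2 * np.pi * 440 * t)         # 440 Hz test tone

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
print(f"dominant frequency: {freqs[spectrum.argmax()]:.1f} Hz")  # ~440.0 Hz
```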

  26. 4DCAPTURE: a general purpose software package for capturing and analyzing two- and three-dimensional motion data acquired from video sequences

    NASA Astrophysics Data System (ADS)

    Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake

    2003-07-01

    4DVideo is creating a general-purpose capability for capturing and analyzing kinematic data from video sequences in near real-time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") captures and manipulates customized AVI files that can contain a variety of synchronized data streams (including audio, video, and centroid locations) and signals acquired from more traditional sources (such as accelerometers and strain gauges). The code supports simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancillary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two and three dimensions, with or without lens calibrations. Trajectory data can be processed within the main application or exported to a spreadsheet, where they can be processed further or passed along to a more sophisticated stand-alone data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope, or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end-user in capturing and treating image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology, which effectively eliminates the file size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats. The latter is particularly useful for high-speed applications, where the raw images are often captured and stored by the camera before being downloaded. Provision has been made to synchronize data acquired from any combination of these video sources using audio and visual "tags." Additional "front-ends," designed for digital cameras, are anticipated.

  27. Research of Pedestrian Crossing Safety Facilities Based on the Video Detection

    NASA Astrophysics Data System (ADS)

    Li, Sheng-Zhen; Xie, Quan-Long; Zang, Xiao-Dong; Tang, Guo-Jun

    Because present pedestrian crossing facilities are imperfect, pedestrian crossings are often chaotic: pedestrians from opposite directions conflict with and obstruct each other, which severely reduces pedestrian traffic efficiency, obstructs vehicles, and creates potential safety problems. To address these problems, a pedestrian crossing guidance system based on video detection was researched and designed. A camera monitors pedestrians in real time, a video detection program counts them, and an induction lamp array installed along the crosswalk adjusts its color display according to the proportion of pedestrians arriving from each side, guiding pedestrians from opposite directions to proceed separately. Simulation analysis based on a cellular automaton shows that the system reduces pedestrian crossing conflicts, shortens crossing time and improves the safety of pedestrians crossing.

  28. Development of the cardiovascular system: an interactive video computer program.

    PubMed Central

    Smolen, A. J.; Zeiset, G. E.; Beaston-Wimmer, P.

    1992-01-01

    The major aim of this project is to provide interactive video computer-based courseware that can be used by medical students and others to supplement their learning of this very important aspect of basic biomedical education. Embryology is a science that depends on the ability of the student to visualize dynamic changes in structure which occur in four dimensions: X, Y, Z, and time. Traditional didactic methods, including lectures employing photographic slides and laboratories employing histological sections, are limited to two dimensions, X and Y. The third spatial dimension and the dimension of time cannot be readily illustrated using these methods. Computer-based learning, particularly when used in conjunction with interactive video, can be used effectively to illustrate developmental processes in all four dimensions. This methodology can also be used to foster the critical skills of independent learning and problem solving. PMID:1483013

  29. Robotic Attention Processing And Its Application To Visual Guidance

    NASA Astrophysics Data System (ADS)

    Barth, Matthew; Inoue, Hirochika

    1988-03-01

    This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system, developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local-area windows. These local-area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high-speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using these attention skills was developed, involving the detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game and, later, using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking its movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than a human's, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both the direction and the velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing for robotic attention processing.

  30. Object detection in cinematographic video sequences for automatic indexing

    NASA Astrophysics Data System (ADS)

    Stauder, Jurgen; Chupeau, Bertrand; Oisel, Lionel

    2003-06-01

    This paper presents an object detection framework applied to cinematographic post-processing of video sequences. Post-processing is done after production and before editing. At the beginning of each shot of a video, a slate (also called a clapperboard) is shown. The slate contains, notably, an electronic audio timecode that is necessary for audio-visual synchronization. This paper presents an object detection framework to detect slates in video sequences for automatic indexing and post-processing. It is based on five steps. The first two steps aim to drastically reduce the video data to be analyzed. They ensure a high recall rate but have low precision. The first step detects images at the beginning of a shot possibly showing a slate, while the second step searches these images for candidate regions with a color distribution similar to slates. The objective is not to miss any slate while eliminating long parts of video without slate appearance. The third and fourth steps use statistical classification and pattern matching to detect and precisely locate slates in candidate regions. These steps ensure a high recall rate and high precision. The objective is to detect slates with very few false alarms, to minimize interactive corrections. In a last step, electronic timecodes are read from the slates to automate audio-visual synchronization. The presented slate detector has a recall rate of 89% and a precision of 97.5%. By temporal integration, much more than 89% of shots in dailies are detected. By timecode coherence analysis, the precision can be raised too. Issues for future work are to accelerate the system to be faster than real-time and to extend the framework to several slate types.

  31. Normalized Temperature Contrast Processing in Flash Infrared Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2016-01-01

    The paper presents further development of the normalized contrast processing used in the author's flash infrared thermography method given in US 8,577,120 B1. Methods of computing the normalized image (pixel intensity) contrast and the normalized temperature contrast are provided, including conversion of one to the other. Methods of assessing the emissivity of the object, afterglow heat flux, reflection temperature change and temperature video imaging during flash thermography are also provided. Temperature imaging and normalized temperature contrast imaging provide certain advantages over normalized pixel intensity contrast processing by reducing the effect of reflected energy in images and measurements, providing better quantitative data. The subject matter of this paper comes mostly from US 9,066,028 B1 by the author. Examples of normalized image processing and normalized temperature processing video images are provided, along with examples of surface temperature video images, surface temperature rise video images and simple contrast video images. Temperature video imaging in flash infrared thermography allows better comparison with flash thermography simulation using commercial software which provides temperature video as its output. Temperature imaging also allows easy comparison of the surface temperature change with the camera temperature sensitivity, or noise equivalent temperature difference (NETD), to assess the probability of detection (POD) of anomalies.
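
    The patents referenced above define their own contrast formulations; as a generic stand-in only, the sketch below computes a common textbook form of normalized temperature contrast, in which each pixel's temperature rise is normalized by the rise of an assumed sound reference region.

```python
import numpy as np

# Rough sketch of one generic form of normalized temperature contrast in
# flash thermography: compare each pixel's temperature rise against a
# known-good (sound) reference region, frame by frame. This is a common
# textbook variant, not the patented method described above.
def normalized_contrast(frames: np.ndarray, ref_mask: np.ndarray,
                        ambient: float) -> np.ndarray:
    """frames: (T, H, W) surface temperatures; ref_mask: (H, W) bool."""
    rise = frames - ambient                          # temperature rise
    ref_rise = rise[:, ref_mask].mean(axis=1)        # reference rise per frame
    return rise / ref_rise[:, None, None]            # ~1.0 on sound material

frames = 20.0 + np.random.rand(10, 64, 64) * 5.0     # synthetic video, deg C
ref_mask = np.zeros((64, 64), dtype=bool)
ref_mask[:8, :8] = True                              # assumed sound corner region
print(normalized_contrast(frames, ref_mask, ambient=20.0).shape)  # (10, 64, 64)
```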

  32. Quality and noise measurements in mobile phone video capture

    NASA Astrophysics Data System (ADS)

    Petrescu, Doina; Pincenti, John

    2011-02-01

    The quality of videos captured with mobile phones has become increasingly important, particularly since resolutions and formats have reached a level that rivals the capabilities of the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. Video quality is determined by the combined quality of the individual parts of the imaging system, including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on overall video quality. We do this by evaluating capture under various lighting, color processing, and video compression conditions. First, we measure full-reference quality metrics between the encoder input and the reconstructed sequence, where the encoder input changes with lighting and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low-light additive noise model, ISP color processing, and the video encoder. Our experiments show that in low-light conditions, and for certain choices of color processing, the system-level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
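
    A minimal example of a full-reference metric of the kind measured above is PSNR between an encoder-input frame and its reconstruction; the frames below are synthetic stand-ins.

```python
import numpy as np

# Minimal sketch of a full-reference quality metric: PSNR between an
# encoder-input frame and its reconstruction, for 8-bit grayscale data.
def psnr(reference: np.ndarray, reconstructed: np.ndarray) -> float:
    mse = np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")                # identical frames
    return 10.0 * np.log10(255.0 ** 2 / mse)

ref = np.random.randint(0, 256, (480, 640), dtype=np.uint8)        # toy frame
noisy = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, noisy):.1f} dB")
```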

  33. An embedded processor for real-time atmospheric compensation

    NASA Astrophysics Data System (ADS)

    Bodnar, Michael R.; Curt, Petersen F.; Ortiz, Fernando E.; Carrano, Carmen J.; Kelmelis, Eric J.

    2009-05-01

    Imaging over long distances is crucial to a number of defense and security applications, such as homeland security and launch tracking. However, the image quality obtained from current long-range optical systems can be severely degraded by the turbulent atmosphere in the path between the region under observation and the imager. While this obscured image information can be recovered using post-processing techniques, the computational complexity of such approaches has prohibited deployment in real-time scenarios. To overcome this limitation, we have coupled a state-of-the-art atmospheric compensation algorithm, the average-bispectrum speckle method, with a powerful FPGA-based embedded processing board. The end result is a lightweight, low-power image processing system that improves the quality of long-range imagery in real time and uses modular video I/O to provide a flexible interface to the most common digital and analog video transport methods. By leveraging the custom, reconfigurable nature of the FPGA, a 20x speed increase over a modern desktop PC was achieved in a form factor that is compact, low-power, and field-deployable.

  34. Teaching methotrexate self-injection with a web-based video maintains patient care while reducing healthcare resources: a pilot study.

    PubMed

    Katz, Steven J; Leung, Sylvia

    2015-01-01

    The aim of the study was to compare standard nurse-led methotrexate self-injection patient education with a web-based methotrexate self-injection education video used in conjunction with standard teaching, measuring patient self-confidence for self-injection as well as patient satisfaction, patient knowledge and teaching time. Consecutive rheumatology patients seen for methotrexate self-injection education were enrolled. Prior to education, patient self-confidence for self-injection, age, gender and education were recorded. Patients were randomized 1:1 to standard teaching or the intervention: a 12-min methotrexate self-injection education video followed by further in-person nurse education. Patients recorded their post-education confidence for self-injection, satisfaction with the teaching process, and answers to four questions testing knowledge of methotrexate self-injection. The time spent providing direct education to the patient was recorded. Twenty-nine patients participated in this study: 15 had standard (C) teaching and 14 were in the intervention group (I). Average age, gender and education level were similar in both groups. Both groups were satisfied with the quality of teaching. There was no difference in pre-confidence (C = 5.5/10 vs. I = 4.7/10, p = 0.44) or post-confidence (C = 8.8, I = 8.8, p = 0.93) between the groups. There was a trend toward improved patient knowledge in the video group versus the standard group (C = 4.7/6, I = 5.5/6, p = 0.15). Nurse teaching time was less in the video group (C = 60 min, I = 44 min, p = 0.012), with men requiring longer education time than women across all groups. An education video may be a good supplement to standard in-person nurse teaching for methotrexate self-injection. It matches standard teaching practice with regard to patient satisfaction, confidence and knowledge while decreasing teaching time by 25%.

  35. The effects of short interactive animation video information on preanesthetic anxiety, knowledge, and interview time: a randomized controlled trial.

    PubMed

    Kakinuma, Akihito; Nagatani, Hirokazu; Otake, Hiroshi; Mizuno, Ju; Nakata, Yoshinori

    2011-06-01

    We designed an interactive animated video that provides a basic explanation of anesthetic procedures, including their risks, benefits, and alternatives. We hypothesized that this video would improve patient understanding of anesthesia, reduce anxiety, and shorten the interview time. Two hundred eleven patients scheduled for cancer surgery under general anesthesia or combined general and epidural anesthesia, who were admitted at least 1 day before the surgery, were randomly assigned to the video group (n = 106) or the no-video group (n = 105). The patients in the video group were asked to watch a short interactive animation video in the ward. After watching the video, the patients were visited by an anesthesiologist who performed a preanesthetic interview and routine risk assessment. The patients in the no-video group were also visited by an anesthesiologist, but were not asked to watch the video. In both groups, the patients were asked to complete the State-Trait Anxiety Inventory and a 14-point knowledge test before the anesthesiologist's visit and on the day of surgery. We also measured interview time. There was no demographic difference between the 2 groups. The interview time was 34.4% shorter (video group, 12.2 ± 5.3 minutes, vs. no-video group, 18.6 ± 6.4 minutes; 95% confidence interval [CI] for the percentage reduction in time: 32.7%-44.3%), and knowledge of anesthesia was 11.6% better in the video group (score 12.5 ± 1.4 vs. no-video group score 11.2 ± 1.7; 95% CI for the percentage increase in knowledge: 8.5%-13.9%). However, there was no difference in preanesthetic anxiety between the 2 groups. Our short interactive animation video helped patients' understanding of anesthesia and reduced anesthesiologists' interview time.

  36. Effects of video-based therapy preparation targeting experiential acceptance or the therapeutic alliance.

    PubMed

    Johansen, Ayna B; Lumley, Mark; Cano, Annmarie

    2011-06-01

    Preparation for psychotherapy may enhance the psychotherapeutic process, reduce drop-outs, and improve outcomes, but the effective mechanisms of such preparation are poorly understood. Previous studies have rarely targeted specific processes that are associated with positive therapy outcomes. This randomized experiment compared the effects of preparatory videos targeting either the therapeutic alliance or experiential acceptance with a control video on early therapeutic process variables in 105 patients seen in individual therapy. Participants watched the videos just before their first therapy session. No significant differences were found between the alliance and experiential acceptance videos on patient recommendations, immediate affective reactions, or working alliance and attrition after the first session. However, the therapeutic alliance video produced an immediate increase in negative mood relative to the control video, whereas the experiential acceptance video produced a slight increase in positive mood relative to the alliance video. Surprisingly, patients who viewed the alliance video were rated significantly lower than the control group on therapist-rated alliance after the first session. These findings suggest there may be specific process effects in the early phase of treatment based on the type of pretraining material used, and also indicate that video-based pretraining efforts could be counterproductive. Furthermore, this research contributes to the literature by providing insights into methodological considerations for future work on the use of technology in psychotherapy and the challenges associated with preparing people for successful psychotherapy.

  37. Unattended real-time re-establishment of visibility in high dynamic range video and stills

    NASA Astrophysics Data System (ADS)

    Abidi, B.

    2014-05-01

    We describe a portable unattended persistent surveillance system that corrects for harsh illumination conditions, where bright sunlight creates mixed-contrast effects, i.e., heavy shadows and washouts. These effects result in high dynamic range scenes, where illuminance can vary from a few lux to six-figure values. When using regular monitors and cameras, such a wide span of illumination can only be visualized if the actual range of values is compressed, leading to saturated and/or dark noisy areas and a loss of information in those areas. Images containing extreme mixed contrast cannot be fully enhanced from a single exposure, simply because all the information is not present in the original data; active intervention in the acquisition process is required. A software package, capable of integrating multiple types of COTS and custom cameras, ranging from Unmanned Aerial Systems (UAS) data links to digital single-lens reflex cameras (DSLRs), is described. Hardware and software are integrated via a novel smart data acquisition algorithm, which communicates to the camera the parameters that maximize information content in the final processed scene. A fusion mechanism is then applied to the smartly acquired data, resulting in an enhanced scene where information in both dark and bright areas is revealed. Multi-threading and parallel processing are exploited to produce automatic, real-time, full-motion corrected video. A novel enhancement algorithm was also devised to process data from legacy and non-controllable cameras. The software accepts and processes pre-recorded sequences and stills; enhances visible, night-vision, and infrared data; and applies successfully to nighttime and dark scenes. Various user options are available, integrating custom functionalities of the application into intuitive and easy-to-use graphical interfaces. The ensuing increase in visibility in surveillance video and intelligence imagery will improve the performance and timely decision making of human analysts, as well as that of unmanned systems performing automatic data exploitation, such as target detection and identification.
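
    The record above does not spell out its fusion mechanism; as a generic stand-in for merging differently exposed frames, the sketch below uses OpenCV's off-the-shelf Mertens exposure fusion. The scene data and exposure simulation are synthetic.

```python
import cv2
import numpy as np

# Generic stand-in for the multi-exposure fusion idea described above:
# OpenCV's Mertens exposure fusion merges frames captured at different
# exposures into one frame with detail in both dark and bright areas.
# This is a common off-the-shelf method, not the paper's own algorithm.
base = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # toy scene
exposures = [
    np.clip(base.astype(int) // 4, 0, 255).astype(np.uint8),     # underexposed
    base,                                                        # mid exposure
    np.clip(base.astype(int) * 3, 0, 255).astype(np.uint8),      # overexposed
]

merge = cv2.createMergeMertens()
fused = merge.process(exposures)             # float32 image, roughly in [0, 1]
result = np.clip(fused * 255, 0, 255).astype(np.uint8)
print(result.shape)                          # (480, 640, 3)
```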
The software accepts and processes pre-recorded sequences and stills, enhances visible, night vision, and Infrared data, and successfully applies to night time and dark scenes. Various user options are available, integrating custom functionalities of the application into intuitive and easy to use graphical interfaces. The ensuing increase in visibility in surveillance video and intelligence imagery will expand the performance and timely decision making of the human analyst, as well as that of unmanned systems performing automatic data exploitation, such as target detection and identification.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2665907','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2665907"><span>Model-Based Analysis of Flow-Mediated Dilation and Intima-Media Thickness</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Bartoli, G.; Menegaz, G.; Lisi, M.; Di Stolfo, G.; Dragoni, S.; Gori, T.</p> <p>2008-01-01</p> <p>We present an end-to-end system for the automatic measurement of flow-mediated dilation (FMD) and intima-media thickness (IMT) for the assessment of the arterial function. The video sequences are acquired from a B-mode echographic scanner. A spline model (deformable template) is fitted to the data to detect the artery boundaries and track them all along the video sequence. The a priori knowledge about the image features and its content is exploited. Preprocessing is performed to improve both the visual quality of video frames for visual inspection and the performance of the segmentation algorithm without affecting the accuracy of the measurements. The system allows real-time processing as well as a high level of interactivity with the user. This is obtained by a graphical user interface (GUI) enabling the cardiologist to supervise the whole process and to eventually reset the contour extraction at any point in time. The system was validated and the accuracy, reproducibility, and repeatability of the measurements were assessed with extensive in vivo experiments. Jointly with the user friendliness, low cost, and robustness, this makes the system suitable for both research and daily clinical use. PMID:19360110</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1989nps..reptR....P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1989nps..reptR....P"><span>Image enhancement software for underwater recovery operations: User's manual</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Partridge, William J.; Therrien, Charles W.</p> <p>1989-06-01</p> <p>This report describes software for performing image enhancement on live or recorded video images. The software was developed for operational use during underwater recovery operations at the Naval Undersea Warfare Engineering Station. The image processing is performed on an IBM-PC/AT compatible computer equipped with hardware to digitize and display video images. 
  Image enhancement software for underwater recovery operations: User's manual

    NASA Astrophysics Data System (ADS)

    Partridge, William J.; Therrien, Charles W.

    1989-06-01

    This report describes software for performing image enhancement on live or recorded video images. The software was developed for operational use during underwater recovery operations at the Naval Undersea Warfare Engineering Station. The image processing is performed on an IBM-PC/AT compatible computer equipped with hardware to digitize and display video images. The software provides contrast enhancement and similar functions in real time through hardware lookup tables, automatic histogram equalization, and the ability to capture one or more frames and either average them or apply one of several different processing algorithms to a captured frame. The report is in the form of a user manual for the software and includes guided tutorial and reference sections. A digital image processing primer in the appendix explains the principal concepts used in the image processing.

  The Time Factor: Leveraging Intelligent Agents and Directed Narratives in Online Learning Environments

    ERIC Educational Resources Information Center

    Jones, Greg; Warren, Scott

    2009-01-01

    Using video games, virtual simulations, and other digital spaces for learning can be a time-consuming process; aside from technical issues that may absorb class time, students take longer to achieve gains in learning in virtual environments. Greg Jones and Scott Warren describe how intelligent agents, in-game characters that respond to the context…
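Returning to the underwater image-enhancement entry above: its workhorse operations, frame averaging to suppress noise, histogram equalization to stretch murky contrast, and a lookup-table curve like the report's hardware LUTs, are easy to sketch with OpenCV (the file name and gamma value are placeholders):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("dive_tape.avi")  # hypothetical recorded dive video

# Average a short burst of frames to suppress sensor noise.
acc, count = None, 0
for _ in range(8):
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    acc = gray if acc is None else acc + gray
    count += 1
avg = (acc / count).astype(np.uint8)

# Histogram equalization stretches the compressed underwater contrast range.
enhanced = cv2.equalizeHist(avg)

# A lookup table is the software analogue of the report's hardware LUTs;
# here it applies a simple gamma curve per pixel.
gamma = 0.6
lut = np.array([255 * (i / 255.0) ** gamma for i in range(256)], dtype=np.uint8)
cv2.imwrite("enhanced.png", cv2.LUT(enhanced, lut))
```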
  Distracted driving on YouTube: implications for adolescents.

    PubMed

    Basch, Corey H; Mouser, Christina; Clark, Ashley

    2017-05-18

    For the first time in 50 years, traffic fatalities have increased in the United States (US). With the emergence of technology comes the possibility that distracted driving has contributed to a decline in safe driving practices. The purpose of this study was to describe the content on the popular video-sharing site YouTube and ascertain the type of content conveyed in widely viewed videos. The 100 most widely viewed English-language videos were included in this sample, with a collective number of views of over 35 million. The majority of videos were television-based and Internet-based. Pairwise comparisons indicated statistically significant differences between the number of views of consumer-generated videos and television-based videos (p = 0.001) and between television-based videos and Internet-based videos (p < 0.001). Compared with consumer-generated videos, television-based videos were 13 times more likely to discuss cell phone use as a distractor while driving, while Internet-based videos were 6.6 times more likely to do so. In addition, compared with consumer-generated videos, television-based videos were 3.67 times more likely to discuss texting as a distractor while driving, whereas Internet-based videos were 8.5 times more likely to do so. The findings of this study indicate that YouTube videos related to distracted driving are popular and that this medium could prove to be a successful venue for communicating information about this emergent public health issue.

  Thematic video indexing to support video database retrieval and query processing

    NASA Astrophysics Data System (ADS)

    Khoja, Shakeel A.; Hall, Wendy

    1999-08-01

    This paper presents a novel video database system, which caters for complex and long videos, such as documentaries and educational videos. In contrast to relatively structured video formats such as CNN news or commercial advertisements, this database system has the capacity to work with long and unstructured videos.
  Convergence in full motion video processing, exploitation, and dissemination and activity based intelligence

    NASA Astrophysics Data System (ADS)

    Phipps, Marja; Lewis, Gina

    2012-06-01

    Over the last decade, intelligence capabilities within the Department of Defense/Intelligence Community (DoD/IC) have evolved from ad hoc, single-source, just-in-time, analog processing; to multi-source, digitally integrated, real-time analytics; to multi-INT, predictive Processing, Exploitation and Dissemination (PED). Full Motion Video (FMV) technology and motion imagery tradecraft advancements have greatly contributed to Intelligence, Surveillance and Reconnaissance (ISR) capabilities during this timeframe. Imagery analysts have exploited events, missions, and high-value targets, generating and disseminating critical intelligence reports within seconds of occurrence across operationally significant PED cells. Now, we go beyond FMV, enabling All-Source Analysts to effectively deliver ISR information in a multi-INT, sensor-rich environment. In this paper, we explore the operational benefits and technical challenges of an Activity Based Intelligence (ABI) approach to FMV PED. Existing and emerging ABI features within FMV PED frameworks are discussed, including refined motion imagery tools, additional intelligence sources, activity-relevant content management techniques, and automated analytics.
  Training value of laparoscopic colorectal videos on the World Wide Web: a pilot study on the educational quality of laparoscopic right hemicolectomy videos.

    PubMed

    Celentano, V; Browning, M; Hitchins, C; Giglio, M C; Coleman, M G

    2017-11-01

    Instructive laparoscopy videos with appropriate exposition could be ideal for initial training in laparoscopic surgery, but unfortunately there are no guidelines for annotating these videos or agreed methods for measuring the educational content and the safety of the procedure presented. The aim of this study was to systematically search the World Wide Web to determine the availability of laparoscopic colorectal surgery videos and to objectively establish their potential training value. A search for laparoscopic right hemicolectomy videos was performed on the three most used English-language web search engines, Google.com, Bing.com, and Yahoo.com; in addition, a survey among 25 local trainees was performed to identify further websites for inclusion. All laparoscopic right hemicolectomy videos with an English-language title were included. Videos of open surgery, single-incision laparoscopic surgery, robotic surgery, and hand-assisted surgery were excluded. The safety of the demonstrated procedure was assessed with a validated competency assessment tool specifically designed for laparoscopic colorectal surgery, and data on the educational content of each video were extracted. Thirty-one websites were identified and 182 surgical videos were included. One hundred and seventy-three videos (95%) stated the year of publication; these showed a significant increase in the number of videos published per year from 2009. Characteristics of the patient were rarely presented, only 10 videos (5.4%) reported operating time, and only 6 videos (3.2%) reported 30-day morbidity; 34 videos (18.6%) underwent a peer-review process prior to publication. Formal case presentation, the presence of audio narration, the use of diagrams and snapshots, and a step-by-step approach are all characteristics of peer-reviewed videos, but no significant difference was found in the safety of the procedure. Laparoscopic videos can be a useful adjunct to operative training. There is a large and increasing amount of material available for free on the internet, but it is currently unregulated.

  Automated Thermal Image Processing for Detection and Classification of Birds and Bats - FY2012 Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duberstein, Corey A.; Matzner, Shari; Cullinan, Valerie I.

    Surveying wildlife at risk from offshore wind energy development is difficult and expensive. Infrared video can be used to record birds and bats that pass through the camera view, but it is also time consuming and expensive to review the video and determine what was recorded. We proposed to conduct algorithm and software development to identify and differentiate thermally detected targets of interest, allowing automated processing of thermal image data to enumerate birds, bats, and insects. During FY2012 we developed computer code within MATLAB to identify objects recorded in video and extract attribute information describing the recorded objects. We tested the efficiency of track identification using observer-based counts of tracks within segments of sample video. We examined object attributes, modeled the effects of random variability on attributes, and produced data-smoothing techniques to limit random variation within attribute data. We also began drafting and testing methodology to identify objects recorded on video. In addition, we recorded approximately 10 hours of infrared video of various marine birds, passerine birds, and bats near the Pacific Northwest National Laboratory (PNNL) Marine Sciences Laboratory (MSL) at Sequim, Washington. A total of 6 hours of bird video was captured overlooking Sequim Bay over a series of weeks. An additional 2 hours of video of birds was captured during two weeks overlooking Dungeness Bay within the Strait of Juan de Fuca. Bats and passerine birds (swallows) were also recorded at dusk on the MSL campus during nine evenings. An observer noted the identity of objects viewed through the camera concurrently with recording. These video files will provide the information necessary to produce and test software developed during FY2013. The annotation will also form the basis for a method to reliably identify recorded objects.
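The track-extraction stage described in the thermal imaging report above, finding warm moving objects against a static background and recording their attributes, can be approximated with a standard background-subtraction pipeline. A minimal sketch (the clip name and area threshold are arbitrary choices, not values from the report):

```python
import cv2

cap = cv2.VideoCapture("thermal_clip.avi")  # placeholder infrared recording
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

centroids_per_frame = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                         # warm movers vs. sky
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # drop speckle noise
    n, labels, stats, cents = cv2.connectedComponentsWithStats(mask)
    detections = [tuple(cents[i]) for i in range(1, n)     # label 0 = background
                  if stats[i, cv2.CC_STAT_AREA] > 10]      # arbitrary area gate
    centroids_per_frame.append(detections)
```

Linking centroids across frames then yields the tracks whose shape and motion attributes feed the classification step the report describes.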
  Real-time rendering for multiview autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Berretty, R.-P. M.; Peters, F. J.; Volleberg, G. T. G.

    2006-02-01

    In video systems, the introduction of 3D video might be the next revolution after the introduction of color. Multiview autostereoscopic displays are now in development. Such displays offer various views at the same time, and the image content observed by the viewer depends upon his position with respect to the screen. His left eye receives a signal different from what his right eye gets; provided the signals have been properly processed, this gives the impression of depth. The various views produced on the display differ with respect to their associated camera positions. A video format suited for rendering from different camera positions is the usual 2D format enriched with a depth-related channel: for each pixel the video stores not only its color but also, e.g., its distance to the camera. In this paper we provide a theoretical framework for the parallactic transformations, which relates captured and observed depths to screen and image disparities. Moreover, we present an efficient real-time rendering algorithm that uses forward mapping to reduce aliasing artefacts and that deals properly with occlusions. For improved perceived resolution, we take into account the relative position of the color subpixels and the optics of the lenticular screen. Sophisticated filtering techniques result in high-quality images.
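The core of such depth-image-based rendering is turning each pixel's stored depth into a horizontal disparity and forward-mapping the pixel into the virtual view, letting nearer pixels win occlusions. The sketch below illustrates the idea for one scanline using the standard pinhole relation d = f * b / Z; the baseline and focal length are illustrative values, not the paper's parameters:

```python
import numpy as np

def pixel_disparity(depth_m, baseline_m=0.065, focal_px=1000.0):
    """Horizontal pixel shift between two views separated by baseline_m for
    points at depth_m, via the pinhole relation d = f * b / Z."""
    return focal_px * baseline_m / np.asarray(depth_m, dtype=np.float64)

def forward_map_scanline(colors, depths, view_offset):
    """Forward-map one scanline into a virtual view: shift each source pixel
    by its disparity and write far-to-near so nearer pixels win occlusions."""
    out = np.zeros_like(colors)
    shift = np.rint(view_offset * pixel_disparity(depths)).astype(int)
    order = np.argsort(-depths)                    # far first, near last
    x = np.arange(len(colors))
    targets = np.clip(x[order] + shift[order], 0, len(colors) - 1)
    out[targets] = colors[order]
    return out
```

Disoccluded positions come out as zeros in this toy version; a real renderer, like the paper's, must fill such gaps and filter against aliasing.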
  Healthcare Managers' Experiences of Leading the Implementation of Video Conferencing in Discharge Planning Sessions: An Interview Study.

    PubMed

    Hofflander, Malin; Nilsson, Lina; Eriksén, Sara; Borg, Christel

    2016-03-01

    This article describes healthcare managers' experiences of leading the implementation of video conferencing in discharge planning sessions as a new tool in everyday practice. Data were collected through individual interviews and analyzed using qualitative content analysis with an inductive approach. The results indicate that managers identified two distinct leadership perspectives when reflecting on the implementation process. They described a desired way of leading the implementation: communicating about the upcoming change, understanding and securing support for decisions, and ensuring that sufficient time is available throughout the change process. They also described how they perceived the implementation actually taking place, highlighting the lack of planning and preparation, the need for support and to be supportive, and the courage required to adopt and lead the implementation. It is suggested that managers at all levels require more information and training in how to encourage staff to become involved in designing their everyday work and in the implementation process. Managers also need ongoing organizational support for good leadership throughout the implementation of video conferencing in discharge planning sessions, including planning, start-up, implementation, and evaluation.

  Hybrid vision activities at NASA Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1990-01-01

    NASA's Johnson Space Center in Houston, Texas, is active in several aspects of hybrid image processing (the term refers to systems that combine digital and photonic processing). The major thrusts are autonomous space operations such as planetary landing, servicing, and rendezvous and docking. By processing images in non-Cartesian geometries to achieve shift invariance to canonical distortions, researchers use certain aspects of the human visual system for machine vision. That technology flow is bidirectional; researchers are investigating the possible utility of video-rate coordinate transformations for human low-vision patients. Man-in-the-loop teleoperations are also supported by video-rate image-coordinate transformations, as researchers plan to use bandwidth compression tailored to the varying spatial acuity of the human operator. Technological elements being developed in the program include upgraded spatial light modulators, real-time coordinate transformations in video imagery, synthetic filters that robustly allow estimation of object pose parameters, convolutionally blurred filters with continuously selectable invariance to image changes such as magnification and rotation, and optimization of optical correlation done with spatial light modulators that have limited range and couple both phase and amplitude in their response.
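The non-Cartesian geometry mentioned in the JSC entry is typically a log-polar resampling, which turns image scalings into shifts along the radial axis and rotations into shifts along the angular axis, so a correlator applied to the resampled image becomes invariant to those canonical distortions. A minimal sketch using OpenCV's warpPolar (the input file is a placeholder):

```python
import cv2

frame = cv2.imread("camera_frame.png")  # placeholder: any video frame
h, w = frame.shape[:2]
center = (w / 2.0, h / 2.0)

# Log-polar resampling about the image center; scale and rotation of the
# input now appear as translations of this output.
logpolar = cv2.warpPolar(frame, (w, h), center, min(center),
                         cv2.INTER_LINEAR + cv2.WARP_POLAR_LOG)
cv2.imwrite("logpolar.png", logpolar)
```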
  Artificial vision support system (AVS(2)) for improved prosthetic vision.

    PubMed

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real-time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, is especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimensions of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may be able to discern such objects in their 'field of view', enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as retinal implants (epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.

  What Do Teachers Think and Feel when Analyzing Videos of Themselves and Other Teachers Teaching?

    ERIC Educational Resources Information Center

    Kleinknecht, Marc; Schneider, Jurgen

    2013-01-01

    Despite the widespread use of classroom videos in teacher professional development, little is known about the specific effects of various types of videos on teachers' cognitive, emotional, and motivational processes. This study investigates the processes experienced by 10 eighth-grade mathematics teachers while they analyzed videos of their own or…
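Returning to the AVS(2) entry above: the two steps it combines, edge enhancement to preserve contrast transitions and decimation to the electrode-array resolution, are easy to simulate. A rough sketch (the 10 x 6 grid and blend weights are invented for illustration, not the implant's actual geometry):

```python
import cv2

GRID = (10, 6)  # hypothetical electrode array size (columns, rows)

frame = cv2.imread("camera_frame.png")  # one frame of the external camera feed
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Enhance edges before decimation: at tens of pixels, contrast transitions
# carry far more information than texture.
edges = cv2.Laplacian(gray, cv2.CV_8U, ksize=3)
boosted = cv2.addWeighted(gray, 0.7, edges, 0.3, 0)

# Decimate to the electrode resolution, then scale up for a sighted preview.
percept = cv2.resize(boosted, GRID, interpolation=cv2.INTER_AREA)
preview = cv2.resize(percept, (320, 192), interpolation=cv2.INTER_NEAREST)
cv2.imwrite("simulated_percept.png", preview)
```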
  Video requirements for materials processing experiments in the space station US laboratory

    NASA Technical Reports Server (NTRS)

    Baugher, Charles R.

    1989-01-01

    Full utilization of the potential of materials research on the Space Station can be achieved only if adequate means are available for interactive experimentation between the science facilities and ground-based investigators. Extensive video interfaces linking these elements are the only alternative for establishing a viable relationship. Because of the limited downlink capability, a comprehensive complement of on-board video processing and video compression is needed. The application of video compression will be an absolute necessity, since its effectiveness will directly impact the quantity of data available to ground investigator teams and their ability to review the effects of process changes and the progress of experiments. Video data compression utilization on the Space Station is discussed.

  Long-term relations among prosocial-media use, empathy, and prosocial behavior.

    PubMed

    Prot, Sara; Gentile, Douglas A; Anderson, Craig A; Suzuki, Kanae; Swing, Edward; Lim, Kam Ming; Horiuchi, Yukiko; Jelic, Margareta; Krahé, Barbara; Liuqing, Wei; Liau, Albert K; Khoo, Angeline; Petrescu, Poesis Diana; Sakamoto, Akira; Tajima, Sachi; Toma, Roxana Andreea; Warburton, Wayne; Zhang, Xuemin; Lam, Ben Chun Pan

    2014-02-01

    Despite recent growth of research on the effects of prosocial media, the processes underlying these effects are not well understood. Two studies explored theoretically relevant mediators and moderators of the effects of prosocial media on helping. Study 1 examined associations among prosocial- and violent-media use, empathy, and helping in samples from seven countries. Prosocial-media use was positively associated with helping. This effect was mediated by empathy and was similar across cultures. Study 2 explored longitudinal relations among prosocial-video-game use, violent-video-game use, empathy, and helping in a large sample of Singaporean children and adolescents measured three times across 2 years. Path analyses showed significant longitudinal effects of prosocial- and violent-video-game use on prosocial behavior through empathy. Latent-growth-curve modeling for the 2-year period revealed that change in video-game use significantly affected change in helping, and that this relationship was mediated by change in empathy.
  Modeling operators' emergency response time for chemical processing operations.

    PubMed

    Murray, Susan L; Harputlu, Emrah; Mentzer, Ray A; Mannan, M Sam

    2014-01-01

    Operators have a crucial role during emergencies at a variety of facilities such as chemical processing plants. When an abnormality occurs in the production process, the operator often has limited time to either take corrective actions or evacuate before the situation becomes deadly. It is crucial that system designers and safety professionals can estimate the time required for a response before procedures and facilities are designed and operations are initiated. There are existing industrial engineering techniques for establishing time standards for tasks performed at a normal working pace. However, the time required to take action in emergency situations can reasonably be expected to differ from the time required at a normal production pace: in an emergency, operators are likely to act faster. It would therefore be useful for system designers to be able to establish a time range for operators' response times in emergency situations. This article develops a modeling approach to estimate the time standard range for operators taking corrective actions or following evacuation procedures in emergency situations, to aid engineers and managers in establishing time requirements for operators in emergencies. The methodology combines a well-established industrial engineering technique for determining time requirements (a predetermined time standard system) with adjustment coefficients for emergency situations developed by the authors. Numerous videos of workers performing well-established tasks at a maximum pace were studied; for example, one of the tasks analyzed was pit crew workers changing tires as quickly as they could during a race. The operations in these videos were decomposed into basic, fundamental motions (such as walking, reaching for a tool, and bending over) by studying the videos frame by frame. A comparison was then performed between the emergency-pace and normal-pace operations to determine performance coefficients. These coefficients represent the decrease in time required for various basic motions in emergency situations and were used to model an emergency response. This approach will make hazardous operations requiring operator response, alarm management, and evacuation processes easier to design and predict. An application of the methodology is included in the article; the modeled emergency response was roughly one-third faster than the normal response time.

  Alternative Fuels Data Center: Hydrogen Drive

    Science.gov Websites

    Video resource page from the Greater Washington Region Clean Cities Coalition, offering the Hydrogen Drive video for download in QuickTime (.mov) and Windows Media (.wmv) formats.
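The response-time model in the chemical-operations entry above multiplies each predetermined motion time by an emergency-pace coefficient and sums the results. A worked toy example (all times and coefficients below are invented for illustration; the study's actual values are not reproduced):

```python
# Hypothetical predetermined-time values (seconds) for the basic motions of
# one evacuation step, and invented emergency-pace coefficients.
normal_time = {"walk_3m": 3.6, "reach_tool": 0.5, "bend": 1.0, "grasp": 0.3}
emergency_coeff = {"walk_3m": 0.60, "reach_tool": 0.75, "bend": 0.70, "grasp": 0.90}

normal_total = sum(normal_time.values())
emergency_total = sum(t * emergency_coeff[m] for m, t in normal_time.items())

# With these illustrative numbers the emergency pace comes out about 35%
# faster, the same order as the roughly one-third speedup the study reports.
print(f"normal: {normal_total:.2f} s, emergency: {emergency_total:.2f} s")
```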
  Dynamic Simulation and Static Matching for Action Prediction: Evidence from Body Part Priming

    ERIC Educational Resources Information Center

    Springer, Anne; Brandstadter, Simone; Prinz, Wolfgang

    2013-01-01

    Accurately predicting other people's actions may involve two processes: internal real-time simulation (dynamic updating) and matching recently perceived action images (static matching). Using priming of body parts, this study aimed to differentiate the two processes. Specifically, participants played a motion-controlled video game with…

  Changes, disruption and innovation: An investigation of the introduction of new health information technology in a microbiology laboratory.

    PubMed

    Toouli, George; Georgiou, Andrew; Westbrook, Johanna

    2012-01-01

    It is expected that health information technology (HIT) will deliver a safer, more efficient, and more effective healthcare system. The aim of this study was to undertake a qualitative and video-ethnographic examination of the impact of information technologies on work processes in the reception area of a microbiology department, to ascertain what changed, how it changed, and the impact of the change. The setting for this study was the microbiology laboratory of a large tertiary hospital in Sydney. The study consisted of qualitative (interview and focus group) data and observation sessions covering the period August 2005 to October 2006, along with video footage shot in three sessions covering the original system and the two stages of the Cerner implementation. Data analysis was assisted by NVivo software, and process maps were produced from the video footage. Two laboratory information systems were observed in the video footage, with computerized provider order entry introduced four months later. Process maps highlighted the large number of pre-data-entry steps under the original system, whereas the newer system incorporated many of these steps into the data entry stage. However, any time saved with the new system was offset by the requirement to complete data entry of patient information not previously required. Other changes noted included the changed responsibilities of the reception staff and the physical changes required to accommodate the increased activity around the data entry area. Implementing a new HIT system is always an exciting time for any environment, but ensuring that the implementation goes smoothly and with minimal trouble requires administrators and their teams to plan well in advance for staff training, physical layout, and possible staff resource reallocation.
  Exploring Self-regulation of More or Less Expert College-Age Video Game Players: A Sequential Explanatory Design.

    PubMed

    Yilmaz Soylu, Meryem; Bruning, Roger H

    2016-01-01

    This study examined differences in self-regulation among college-age expert, moderately expert, and non-expert video game players playing video games for fun. Winne's model of self-regulation (Winne, 2001) guided the study. The main assumption was that expert video game players use more processes of self-regulation than less-expert players. We surveyed 143 college students about their game-playing frequency, habits, and use of self-regulation. Data analysis indicated that while playing recreational video games, expert gamers self-regulated more than moderately expert and non-expert players, and moderately expert players used more processes of self-regulation than non-experts. Semi-structured interviews were also conducted with selected participants at each expertise level. Qualitative follow-up analyses revealed five themes: (1) characteristics of expert video gamers, (2) conditions for playing a video game, (3) figuring out a game, (4) how gamers act, and (5) game context. Overall, the findings indicated that playing a video game is a highly self-regulated activity and that becoming an expert video game player mobilizes multiple sets of self-regulation-related skills and processes. These findings are promising for educators seeking to encourage student self-regulation, because they indicate the possibility of supporting students via recreational video games by recognizing that their play includes processes of self-regulation. PMID:27729881
  High dynamic range adaptive real-time smart camera: an overview of the HDR-ARtiSt project

    NASA Astrophysics Data System (ADS)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2015-04-01

    Standard cameras capture only a fraction of the information that is visible to the human visual system. This is especially true for natural scenes that include areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full dynamic range (DR), resulting in low-quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions for enhancing the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display, and transfer of an HDR color video at full sensor resolution (1280 x 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a specific hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
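The merge-and-tone-map chain the HDR-ARtiSt camera implements in hardware has a direct software analogue in OpenCV: Debevec's method recovers the camera response from the bracketed exposures, merges them into a radiance map, and a global tone-mapping curve brings the result back into display range. A minimal sketch (file names and exposure times are placeholders for the camera's three alternating captures):

```python
import cv2
import numpy as np

# Three captures with alternating exposure times, as in the MEC scheme.
imgs = [cv2.imread(f"exp_{i}.png") for i in range(3)]
times = np.array([1 / 1000, 1 / 250, 1 / 60], dtype=np.float32)

# Recover the camera response curve, then merge into a radiance map
# (Debevec's technique, which the paper implements in hardware).
response = cv2.createCalibrateDebevec().process(imgs, times)
hdr = cv2.createMergeDebevec().process(imgs, times, response)

# Global tone mapping for a standard LCD, mirroring the GTM stage.
ldr = cv2.createTonemap(gamma=2.2).process(hdr)
cv2.imwrite("hdr_preview.png", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```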
  Online coupled camera pose estimation and dense reconstruction from video

    DOEpatents

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

    A product may receive each image in a stream of video images of a scene and, before processing the next image, generate information indicative of the position and orientation of the image capture device at the time of capturing the image. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of those corresponding model feature points. The product may update the 3D model of at least a portion of the scene following receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update.

  Classification of video sequences into chosen generalized use classes of target size and lighting level.

    PubMed

    Leszczuk, Mikołaj; Dudek, Łukasz; Witkowski, Marcin

    The VQiPS (Video Quality in Public Safety) Working Group, supported by the U.S. Department of Homeland Security, has been developing a user guide for public safety video applications. According to VQiPS, five parameters have particular influence on the ability to achieve a recognition task: usage time-frame, discrimination level, target size, lighting level, and level of motion. These parameters form what are referred to as Generalized Use Classes (GUCs). The aim of our research was to develop algorithms that automatically assist classification of input sequences into one of the GUCs; the target size and lighting level parameters were addressed. The experiment described reveals the experts' ambiguity and hesitation during the manual target-size determination process. However, the automatic methods developed for target-size classification make it possible to determine GUC parameters with 70% compliance to the end users' opinion. The lighting level of an entire sequence can be classified with an efficiency reaching 93%. To make the algorithms available for use, a test application has been developed. It is able to process video files and display classification results, with a very simple user interface requiring only minimal user interaction.
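In the pose-estimation patent above, the key step is finding the subset of 2D-3D correspondences that yields a consistent projection of the model onto the image. OpenCV's RANSAC-based perspective-n-point solver plays essentially this role; a sketch under that substitution, with placeholder correspondences and intrinsics (the patent's own matcher and model update are not reproduced):

```python
import cv2
import numpy as np

# Placeholders for the correspondences: 3-D model feature points and the
# 2-D image feature points matched to them, plus assumed camera intrinsics.
obj_pts = np.random.rand(50, 3).astype(np.float32)
img_pts = np.random.rand(50, 2).astype(np.float32)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)

# RANSAC selects the subset of correspondences that yields a consistent
# projection of the 3D model onto the image, rejecting bad matches.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)          # orientation as a rotation matrix
    print("camera position:", (-R.T @ tvec).ravel())
```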
  An educational video game for nutrition of young people: Theory and design

    PubMed Central

    Ledoux, Tracey; Griffith, Melissa; Thompson, Debbe; Nguyen, Nga; Watson, Kathy; Baranowski, Janice; Buday, Richard; Abdelsamad, Dina; Baranowski, Tom

    2016-01-01

    Background: Playing Escape from DIAB (DIAB) and Nanoswarm (NANO), epic video game adventures, increased fruit and vegetable consumption among a multi-ethnic sample of 10-12 year old children during pilot testing. Key elements of both games were educational mini-games embedded in the overall game that promoted knowledge acquisition regarding diet, physical activity, and energy balance; 95-100% of participants demonstrated mastery of these mini-games, suggesting knowledge acquisition. Aim: This article describes the process of designing and developing the educational mini-games. A secondary purpose was to explore the experience of children while playing the games. Method: The educational games were based on Social Cognitive and Mastery Learning Theories. A multidisciplinary team of behavioral nutrition, physical activity, and video game experts designed, developed, and tested the mini-games. Results: Alpha testing revealed that children generally liked the mini-games and found them reasonably challenging. Process evaluation data from pilot testing revealed that almost all participants completed nearly all educational mini-games in a reasonable amount of time, suggesting the feasibility of this approach. Conclusions: Future research should continue to explore the use of video games in educating children to achieve healthy behavior changes. PMID:27547019

  Analysis of Soot Propensity in Combustion Processes Using Optical Sensors and Video Magnification.

    PubMed

    Garcés, Hugo O; Fuentes, Andrés; Reszka, Pedro; Carvajal, Gonzalo

    2018-05-11

    Industrial combustion processes are an important source of particulate matter, causing significant pollution problems that affect human health and contributing substantially to global warming. The most common method for analyzing the soot emission propensity of flames is Smoke Point Height (SPH) analysis, which relates the fuel flow rate to a critical flame height at which soot particles begin to leave the reactive zone through the tip of the flame; the SPH is marked by morphological changes at the flame tip. SPH analysis is normally done through flame observation with the naked eye, leading to high bias. Other techniques are more accurate but impractical in industrial settings, such as Line Of Sight Attenuation (LOSA), which obtains soot volume fractions within the flame from the attenuation of a laser beam. We propose the use of video magnification techniques to detect the flame's morphological changes and thus determine the SPH while minimizing observation bias. We have applied, for the first time, Eulerian Video Magnification (EVM) and Phase-based Video Magnification (PVM) to an ethylene laminar diffusion flame. The results were compared with LOSA measurements and indicate that EVM is the most accurate method for SPH determination.
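Eulerian Video Magnification, as used in the soot-propensity study above, amplifies small temporal variations: each pixel's time series is band-pass filtered, amplified, and added back onto the video. A single-scale sketch (band edges and gain are illustrative, and a full EVM pipeline filters each level of a spatial pyramid rather than the raw frames):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def magnify(frames, fps, lo=1.0, hi=5.0, alpha=20.0):
    """Single-scale Eulerian magnification of a (T, H, W) grayscale stack.
    Needs a few dozen frames so the zero-phase filter has room to run."""
    nyq = fps / 2.0
    b, a = butter(2, [lo / nyq, hi / nyq], btype="band")
    band = filtfilt(b, a, frames.astype(np.float64), axis=0)  # per-pixel band-pass
    return np.clip(frames + alpha * band, 0, 255).astype(np.uint8)

# Usage sketch: subtle flicker near the flame tip becomes visible, which is
# the morphological cue the SPH determination relies on.
# out = magnify(flame_frames, fps=30.0)
```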
  Compact full-motion video hyperspectral cameras: development, image processing, and applications

    NASA Astrophysics Data System (ADS)

    Kanaev, A. V.

    2015-10-01

    The emergence of spectral pixel-level color filters has enabled the development of hyperspectral Full Motion Video (FMV) sensors operating in visible (EO) and infrared (IR) wavelengths. This new class of hyperspectral camera opens broad possibilities for military and industrial use. Such cameras are able to classify materials as well as detect and track spectral signatures continuously in real time, while simultaneously providing the operator with the benefit of enhanced-discrimination color video. Supporting these extensive capabilities requires significant computational processing of the collected spectral data. In general, two processing streams are envisioned for mosaic-array cameras. The first is spectral computation, which provides essential spectral content analysis, e.g., detection or classification. The second is presentation of the video to an operator, which can offer the best display of the content depending on the task performed, e.g., spatial resolution enhancement or color coding of the spectral analysis. These processing streams can be executed in parallel or can utilize each other's results. Spectral analysis algorithms have been developed extensively; however, demosaicking of more than three equally sampled spectral bands has scarcely been explored. We present a unique approach to demosaicking based on multi-band super-resolution and show the trade-off between spatial resolution and spectral content. Using imagery collected with the developed 9-band SWIR camera, we demonstrate several concepts of operation, including detection and tracking. We also compare the demosaicking results to the results of multi-frame super-resolution, as well as to combined multi-frame and multi-band processing.

  Apparatus for Investigating Momentum and Energy Conservation With MBL and Video Analysis

    NASA Astrophysics Data System (ADS)

    George, Elizabeth; Vazquez-Abad, Jesus

    1998-04-01

    We describe the development and use of a laboratory setup appropriate for computer-aided student investigation of the principles of conservation of momentum and mechanical energy in collisions. The setup consists of two colliding carts on a low-friction track, with one of the carts (the target) attached to a spring whose extension or compression takes the place of the pendulum's rise in the traditional ballistic pendulum apparatus. Position-versus-time data for each cart are acquired either by using two motion sensors or by digitizing images obtained with a video camera. This setup allows students to examine the time history of momentum and mechanical energy during the entire collision process, rather than simply focusing on the before and after regions. We believe this setup is suitable for helping students gain understanding, as the processes involved are simple to follow visually, to manipulate, and to analyze.
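Given position-time tracks like those from the apparatus above, computing the momentum and energy histories is a few lines of NumPy. A worked sketch with invented data for an elastic collision between equal-mass carts (the instantaneous velocity exchange idealizes the spring compression the real apparatus resolves):

```python
import numpy as np

# Illustrative position-time data, standing in for motion-sensor or
# digitized-video tracks of the two carts.
t = np.linspace(0.0, 1.0, 201)                          # s
x1 = np.where(t < 0.5, 0.40 * t, 0.20)                  # incoming cart stops
x2 = np.where(t < 0.5, 0.30, 0.30 + 0.40 * (t - 0.5))   # target cart departs
m1 = m2 = 0.5                                           # kg

# Velocities by finite differences, then the full time history of the
# quantities the setup is designed to expose.
v1, v2 = np.gradient(x1, t), np.gradient(x2, t)
p = m1 * v1 + m2 * v2                   # total momentum: 0.20 kg*m/s throughout
ke = 0.5 * (m1 * v1**2 + m2 * v2**2)    # total kinetic energy: 0.04 J before/after
```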
  Immersive video for virtual tourism

    NASA Astrophysics Data System (ADS)

    Hernandez, Luis A.; Taibo, Javier; Seoane, Antonio J.

    2001-11-01

    This paper describes a new panoramic, 360° video system and its use in a real application for virtual tourism. The development of this system required designing new hardware for multi-camera recording, and software for video processing, in order to build the panorama frames and to play back the resulting high-resolution video footage on a regular PC. The system makes use of new VR display hardware, such as the WindowVR, to make the view depend on the viewer's spatial orientation and so enhance immersiveness. There are very few examples of similar technologies, and the existing ones are extremely expensive and/or impossible to implement on personal computers with acceptable quality. The idea of the system starts from the concept of the panorama picture, developed in technologies such as QuickTimeVR. This idea is extended to the concept of a panorama frame, which leads to panorama video. However, many problems must be solved to implement this simple scheme. Data acquisition involves simultaneous footage recording in every direction, with later processing to convert every set of frames into a single high-resolution panorama frame. Since no common hardware is capable of 4096 x 512 video playback at 25 fps, the video must be stripped into smaller pieces, from which the system retrieves the right frames of the right parts as the user's movement demands. As the system must be immersive, the physical interface for watching the 360° video is a WindowVR: a flat screen with an orientation tracker that the user holds in his hands, moving it as if it were a virtual window through which the city and its activity are shown.

  Video Browsing on Handheld Devices

    NASA Astrophysics Data System (ADS)

    Hürst, Wolfgang

    Recent improvements in processing power, storage space, and video codec development now enable users to play back video on their handheld devices in reasonable quality. However, given the form-factor restrictions of such a mobile device, screen size remains a natural limit and, as the term "handheld" implies, always will be a critical resource. This is true not only for video but for any data processed on such devices. For this reason, developers have come up with new and innovative ways to deal with large documents in such limited scenarios. For example, on the iPhone, innovative techniques such as flicking have been introduced to skim large lists of text (e.g., hundreds of entries in a music collection). Automatically adapting the zoom level to, for example, the width of table cells when double-tapping on the screen enables reasonable browsing of web pages originally designed for large, desktop-PC-sized screens. A multi-touch interface allows users to easily zoom in and out of large text documents and images using two fingers. In the next section, we will show that advanced techniques for browsing large video files have been developed in past years as well. However, state-of-the-art video players on mobile devices normally support just simple, VCR-like controls (at least at the time of this writing) that only allow users to start, stop, and pause video playback. If supported at all, browsing and navigation functionality is often restricted to simple chapter skipping via two buttons for backward and forward navigation and a small, and thus not very sensitive, timeline slider.
  289. Video Browsing on Handheld Devices

    NASA Astrophysics Data System (ADS)

    Hürst, Wolfgang

    Recent improvements in processing power, storage space, and video codec development enable users to play back video on their handheld devices in reasonable quality. However, given the form-factor restrictions of such a mobile device, screen size remains a natural limit and - as the term "handheld" implies - always will be a critical resource. This is true not only for video but for any data that is processed on such devices. For this reason, developers have come up with new and innovative ways to deal with large documents in such limited scenarios. For example, on the iPhone, innovative techniques such as flicking have been introduced to skim large lists of text (e.g., hundreds of entries in a music collection). Automatically adapting the zoom level to, for example, the width of table cells when double-tapping on the screen enables reasonable browsing of web pages that were originally designed for large, desktop-PC-sized screens. A multi-touch interface allows users to easily zoom in and out of large text documents and images using two fingers. In the next section, we illustrate that advanced techniques to browse large video files have been developed in past years as well. However, state-of-the-art video players on mobile devices normally support just simple, VCR-like controls (at least at the time of this writing) that only allow users to start, stop, and pause video playback. If supported at all, browsing and navigation functionality is often restricted to simple skipping of chapters via two single buttons for backward and forward navigation and a small, and thus not very sensitive, timeline slider.

  290. MPEG-1 low-cost encoder solution

    NASA Astrophysics Data System (ADS)

    Grueger, Klaus; Schirrmeister, Frank; Filor, Lutz; von Reventlow, Christian; Schneider, Ulrich; Mueller, Gerriet; Sefzik, Nicolai; Fiedrich, Sven

    1995-02-01

    A solution for real-time compression of digital YCrCb video data to an MPEG-1 video data stream has been developed. As an additional option, motion-JPEG and video telephone (H.261) streams can be generated. For MPEG-1, up to two bidirectionally predicted images are supported. The required computational power for motion estimation and DCT/IDCT, the memory size, and the memory bandwidth have been the main challenges. The design uses fast-page-mode memory accesses and requires only a single 80 ns EDO-DRAM with 256 X 16 organization for video encoding. This can be achieved only by using adequate access and coding strategies. The architecture consists of an input processing and filter unit, a memory interface, a motion estimation unit, a motion compensation unit, a DCT unit, a quantization control, a VLC unit, and a bus interface. To share the available memory bandwidth among the processing tasks, a fixed schedule for memory accesses is applied that can be interrupted for asynchronous events. The motion estimation unit implements a highly sophisticated hierarchical search strategy based on block matching. The DCT unit uses a separated fast-DCT flowgraph realized by a switchable hardware unit for both DCT and IDCT operation. By appropriate multiplexing, only one multiplier is required for DCT, quantization, inverse quantization, and IDCT. The VLC unit generates the video stream up to the video sequence layer and is directly coupled with an intelligent bus interface. Thus, the assembly of video, audio, and system data can easily be performed by the host computer. Having relatively low complexity and only small DRAM requirements, the developed solution can be applied to low-cost encoding products for consumer electronics.
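The switchable DCT/IDCT unit described above rests on the separability of the 2-D DCT: an 8x8 block is transformed by 1-D passes over rows and then columns, so the same flowgraph can serve both directions. A small numpy illustration of that identity (not the paper's flowgraph itself):

    import numpy as np

    N = 8
    k = np.arange(N)
    # Orthonormal DCT-II basis matrix C, so dct2(B) = C @ B @ C.T
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
    C[0, :] /= np.sqrt(2.0)

    def dct2(block):
        return C @ block @ C.T          # rows pass, then columns pass

    def idct2(coeffs):
        return C.T @ coeffs @ C         # the same unit run "backwards" for IDCT

    block = np.arange(64, dtype=float).reshape(8, 8)
    assert np.allclose(idct2(dct2(block)), block)   # perfect reconstruction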
  291. Intra and Inter-Rater Reliability of Screening for Movement Impairments: Movement Control Tests from The Foundation Matrix

    PubMed Central

    Mischiati, Carolina R.; Comerford, Mark; Gosford, Emma; Swart, Jacqueline; Ewings, Sean; Botha, Nadine; Stokes, Maria; Mottram, Sarah L.

    2015-01-01

    Pre-season screening is well established within the sporting arena and aims to enhance performance and reduce injury risk. With the increasing need to identify potential injury with greater accuracy, a new risk assessment process has been produced: The Performance Matrix (a battery of movement control tests). As with any new method of objective testing, it is fundamental to establish whether the same results can be reproduced between examiners and by the same examiner on consecutive occasions. This study aimed to determine the intra-rater test re-test and inter-rater reliability of tests from a component of The Performance Matrix, The Foundation Matrix. Twenty participants were screened by two experienced musculoskeletal therapists using nine tests to assess the ability to control movement during specific tasks. Movement evaluation criteria for each test were rated as pass or fail. The therapists observed participants in real time, and tests were recorded on video to enable repeated ratings four months later to examine intra-rater reliability (videos rated two weeks apart). Overall test percentage agreement was 87% for inter-rater reliability; 98% (Rater 1) and 94% (Rater 2) for test re-test reliability; and 75% for real-time versus video. Intraclass correlation coefficients (ICCs) were excellent between raters (0.81) and within raters (Rater 1, 0.96; Rater 2, 0.88) but poor for real-time versus video (0.23). Reliability for individual components of each test was more variable: inter-rater, 68-100%; intra-rater, 88-100% for Rater 1 and 75-100% for Rater 2; and real-time versus video, 31-100%. Cohen's kappa values for inter-rater reliability were 0.0-1.0; intra-rater, 0.6-1.0 for Rater 1 and -0.1-1.0 for Rater 2; and -0.1-1.0 for real-time versus video. It is concluded that both inter- and intra-rater reliability of tests in The Foundation Matrix are acceptable when rated by experienced therapists. Recommendations are made for modifying some of the criteria to improve reliability where excellence was not reached. Key points: the movement control tests of The Foundation Matrix had acceptable reliability between raters and within raters on different days; agreement between observations made on tests performed in real time and on video recordings was low, indicating poor validity of use of video recordings; some movement evaluation criteria related to specific tests that did not achieve excellent agreement could be modified to improve reliability. PMID:25983594
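For readers who want to reproduce the per-criterion agreement statistic, Cohen's kappa compares observed agreement with the agreement expected by chance. A minimal sketch with invented pass/fail ratings (not data from the study):

    # Cohen's kappa for two raters' categorical ratings.
    def cohens_kappa(r1, r2):
        n = len(r1)
        labels = set(r1) | set(r2)
        p_obs = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
        p_exp = sum((r1.count(c) / n) * (r2.count(c) / n)        # chance agreement
                    for c in labels)
        return (p_obs - p_exp) / (1 - p_exp)

    rater1 = ["pass", "pass", "fail", "pass", "fail", "pass"]    # invented ratings
    rater2 = ["pass", "fail", "fail", "pass", "fail", "pass"]
    print(round(cohens_kappa(rater1, rater2), 2))                # -> 0.67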
  292. Learning Process and Learning Outcomes of Video Podcasts Including the Instructor and PPT Slides: A Chinese Case

    ERIC Educational Resources Information Center

    Pi, Zhongling; Hong, Jianzhong

    2016-01-01

    Video podcasts have become one of the fastest developing trends in learning and teaching. The study explored the effect of the presenting mode of educational video podcasts on the learning process and learning outcomes. Prior to viewing a video podcast, the 94 Chinese undergraduates participating in the study completed a demographic questionnaire…

  293. Audio-visual aid in teaching "fatty liver"

    PubMed

    Dash, Sambit; Kamath, Ullas; Rao, Guruprasad; Prakash, Jay; Mishra, Snigdha

    2016-05-06

    Use of audio-visual tools to aid medical education is ever on the rise. Our study intends to find the efficacy of a video prepared on "fatty liver," a topic that is often a challenge for pre-clinical teachers, in enhancing cognitive processing and ultimately learning. We prepared a video presentation of 11:36 min, incorporating various concepts of the topic, while keeping in view Mayer's and Ellaway's guidelines for multimedia presentation. A pre-post test study on subject knowledge was conducted for 100 students with the video shown as the intervention. A retrospective pre-test was conducted as a survey which inquired about students' understanding of the key concepts of the topic, and feedback on our video was taken. Students performed significantly better in the post-test (mean score 8.52 vs. 5.45 in the pre-test), responded positively in the retrospective pre-test, and gave positive feedback on our video presentation. Well-designed multimedia tools can aid cognitive processing and enhance working-memory capacity, as shown in our study. In times when "smart" device penetration is high, information and communication tools in medical education, which can act as an essential aid and not as a replacement for traditional curricula, can be beneficial to students. © 2015 by The International Union of Biochemistry and Molecular Biology, 44:241-245, 2016.

  294. Getting the Bigger Picture With Digital Surveillance

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Through a Space Act Agreement, Diebold, Inc., acquired the exclusive rights to Glenn Research Center's patented video observation technology, originally designed to accelerate video image analysis for various ongoing and future space applications. Diebold implemented the technology into its AccuTrack digital, color video recorder, a state-of-the-art surveillance product that uses motion detection for around-the-clock monitoring. AccuTrack captures digitally signed images and transaction data in real time. This process replaces the onerous tasks involved in operating a VCR-based surveillance system, and subsequently eliminates the need for central viewing and tape-archiving locations altogether. AccuTrack can monitor an entire bank facility, including four automated teller machines, multiple teller lines, and new-account areas, all from one central location.

  295. As time passes by: Observed motion-speed and psychological time during video playback

    PubMed

    Nyman, Thomas Jonathan; Karlsson, Eric Per Anders; Antfolk, Jan

    2017-01-01

    Research shows that psychological time (i.e., the subjective experience and assessment of the passage of time) is malleable and that the central nervous system re-calibrates temporal information in accordance with situational factors, so that psychological time flows slower or faster. Observed motion-speed (e.g., the visual perception of a rolling ball) is an important situational factor which influences the production of time estimates. The present study examines previous findings showing that observed slow and fast motion-speed during video playback results in over- and underproductions of intervals of time, respectively. Here, we investigated through three separate experiments: (a) the main effect of observed motion-speed during video playback on a time production task and (b) the interactive effect of the frame rate (frames per second; fps) and motion-speed during video playback on a time production task. No main effect of video playback-speed, or interactive effect between video playback-speed and frame rate, was found on time production.
  297. Mobile-Cloud Assisted Video Summarization Framework for Efficient Management of Remote Sensing Data Generated by Wireless Capsule Sensors

    PubMed Central

    Mehmood, Irfan; Sajjad, Muhammad; Baik, Sung Wook

    2014-01-01

    Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially for remote-monitoring health services. However, during the WCE process, the large amount of captured video data demands a significant amount of computation to analyze and retrieve informative video frames. To facilitate efficient WCE data collection and browsing, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use the Jeffrey divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve these computational challenges, a mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data. PMID:25225874
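The redundancy-elimination step compares color histograms of neighboring frames with the Jeffrey divergence (a symmetrized relative of the Kullback-Leibler divergence). A minimal sketch; the bin count and decision threshold are invented, and the paper's per-channel treatment is collapsed to a single histogram:

    import numpy as np

    def jeffrey_divergence(h, k, eps=1e-12):
        # Normalize histograms to distributions, then symmetrize KL
        # against the midpoint distribution m = (h + k) / 2.
        h = h / h.sum()
        k = k / k.sum()
        m = (h + k) / 2.0
        return float(np.sum(h * np.log((h + eps) / (m + eps)) +
                            k * np.log((k + eps) / (m + eps))))

    def is_redundant(frame_a, frame_b, threshold=0.05):
        # 32-bin intensity histograms (simplification of per-channel use).
        ha, _ = np.histogram(frame_a, bins=32, range=(0, 256))
        hb, _ = np.histogram(frame_b, bins=32, range=(0, 256))
        return jeffrey_divergence(ha.astype(float), hb.astype(float)) < threshold

Frames flagged redundant by a rule of this shape would be dropped before the texture-based informativeness classification.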
  298. Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table

    PubMed

    Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo

    2013-05-06

    A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in the compression of conventional 2-D video data because of its ability to exploit the high temporal correlation between successive video frames. Here, this concept of motion compensation is applied to the N-LUT for the first time, based on its inherent property of shift-invariance. That is, motion vectors of 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the amount of 3-D object data for which hologram contributions must be calculated is massively reduced, which results in a dramatic increase in the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time per object point of the proposed method are reduced to 86.95% and 86.53%, and to 34.99% and 32.30%, respectively, compared to those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.
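The speed-up rests on the shift-invariance of the look-up-table fringe patterns: when an object point moves by a motion vector (dx, dy), its precomputed contribution can be translated instead of recomputed. A toy sketch of that idea (array sizes and data are invented; a real CGH computation differs in detail, e.g. in boundary handling):

    import numpy as np

    H, W = 256, 256
    fringe = np.random.rand(H, W)       # stand-in for a precomputed N-LUT entry

    def compensated_fringe(fringe, dx, dy):
        # np.roll gives a circular shift; an actual CGH would pad or crop
        # at the hologram boundary instead of wrapping around.
        return np.roll(np.roll(fringe, dy, axis=0), dx, axis=1)

    frame1_contrib = fringe
    dx, dy = 3, -2                      # extracted motion vector between frames
    frame2_contrib = compensated_fringe(fringe, dx, dy)

Only points whose motion cannot be compensated this way would need a fresh table lookup, which is where the reported reduction in calculated object points comes from.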
  300. Evaluating Cell Processes, Quality, and Biomarkers in Pluripotent Stem Cells Using Video Bioinformatics

    PubMed Central

    Zahedi, Atena; On, Vincent; Lin, Sabrina C.; Bays, Brett C.; Omaiye, Esther; Bhanu, Bir; Talbot, Prue

    2016-01-01

    There is a foundational need for quality-control tools in stem cell laboratories engaged in basic research, regenerative therapies, and toxicological studies. These tools require automated methods for evaluating cell processes and quality during in vitro passaging, expansion, maintenance, and differentiation. In this paper, an unbiased, automated high-content profiling toolkit, StemCellQC, is presented that non-invasively extracts information on cell quality and cellular processes from time-lapse phase-contrast videos. Twenty-four (24) morphological and dynamic features were analyzed in healthy, unhealthy, and dying human embryonic stem cell (hESC) colonies to identify those features that were affected in each group. Multiple features differed in the healthy versus unhealthy/dying groups, and these features were linked to growth, motility, and death. Biomarkers were discovered that predicted cell processes before they were detectable by manual observation. StemCellQC distinguished healthy and unhealthy/dying hESC colonies with 96% accuracy by non-invasively measuring and tracking dynamic and morphological features over 48 hours. Changes in cellular processes can be monitored by StemCellQC, and predictions can be made about the quality of pluripotent stem cell colonies. This toolkit reduced the time and resources required to track multiple pluripotent stem cell colonies and eliminated handling errors and false classifications due to human bias. StemCellQC provided both user-specified and classifier-determined analysis in cases where the affected features are not intuitive or anticipated. Video analysis algorithms allowed assessment of biological phenomena using automatic detection analysis, which can aid facilities where maintaining stem cell quality and/or monitoring changes in cellular processes are essential. In the future StemCellQC can be expanded to include other features, cell types, treatments, and differentiating cells. PMID:26848582
  302. Initial utilization of the CVIRB video production facility

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Busquets, Anthony M.; Hogge, Thomas W.

    1987-01-01

    Video disk technology is one of the central themes of a technology demonstrator workstation being assembled as a man/machine interface for the Space Station Data Management Test Bed at Johnson Space Center. Langley Research Center personnel involved in the conception and implementation of this workstation have assembled a video production facility to allow production of video disk material for this purpose. This paper documents the initial familiarization efforts in the field of video production for those personnel and that facility. Although the entire video disk production cycle was not operational for this initial effort, the production of a simulated disk on video tape did acquaint the personnel with the processes involved and with the operation of the hardware. Invaluable experience in storyboarding, script writing, audio and video recording, and audio and video editing was gained in the production process.

  303. 20 CFR 404.936 - Time and place for a hearing before an administrative law judge

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 2 2012-04-01 2012-04-01 false Time and place for a hearing before an...-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Determinations, Administrative Review Process, and... teleconferencing technology is available to conduct the appearance, use of video teleconferencing to conduct the...

  304. 20 CFR 404.936 - Time and place for a hearing before an administrative law judge

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Time and place for a hearing before an...-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Determinations, Administrative Review Process, and... teleconferencing technology is available to conduct the appearance, use of video teleconferencing to conduct the...
  305. 20 CFR 404.936 - Time and place for a hearing before an administrative law judge

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Time and place for a hearing before an...-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Determinations, Administrative Review Process, and... teleconferencing technology is available to conduct the appearance, use of video teleconferencing to conduct the...

  306. Unprocessed real-time imaging of vitreoretinal surgical maneuvers using a microscope-integrated spectral-domain optical coherence tomography system

    PubMed

    Hahn, Paul; Migacz, Justin; O'Connell, Rachelle; Izatt, Joseph A.; Toth, Cynthia A.

    2013-01-01

    We have recently developed a microscope-integrated spectral-domain optical coherence tomography (MIOCT) device for intrasurgical cross-sectional imaging of surgical maneuvers. In this report, we explore the capability of MIOCT to acquire real-time video imaging of vitreoretinal surgical maneuvers without post-processing modifications. Standard 3-port vitrectomy was performed in humans during scheduled surgery as well as in cadaveric porcine eyes. MIOCT imaging of human subjects was performed in healthy normal volunteers and intraoperatively at a normal pause immediately following surgical manipulations, under an Institutional Review Board-approved protocol, with informed consent from all subjects. Video MIOCT imaging of live surgical manipulations was performed in cadaveric porcine eyes by carefully aligning B-scans with instrument orientation and movement. Inverted imaging was performed by lengthening the reference arm to a position beyond the choroid. Unprocessed MIOCT imaging was successfully obtained in healthy human volunteers and in human patients undergoing surgery, with visualization of post-surgical changes in unprocessed single B-scans. Real-time, unprocessed MIOCT video imaging was successfully obtained in cadaveric porcine eyes during brushing of the retina with the Tano scraper, peeling of superficial retinal tissue with intraocular forceps, and separation of the posterior hyaloid face. Real-time inverted imaging enabled imaging without complex conjugate artifacts. MIOCT is capable of unprocessed imaging of the macula in human patients undergoing surgery and of unprocessed, real-time video imaging of surgical maneuvers in model eyes. These capabilities represent an important step towards the development of MIOCT for efficient, real-time imaging of manipulations during human surgery.
  307. Design considerations for computationally constrained two-way real-time video communication

    NASA Astrophysics Data System (ADS)

    Bivolarski, Lazar M.; Saunders, Steven E.; Ralston, John D.

    2009-08-01

    Today's video codecs have evolved primarily to meet the requirements of the motion picture and broadcast industries, where high-complexity studio encoding can be utilized to create highly compressed master copies that are then broadcast one-way for playback using less expensive, lower-complexity consumer devices for decoding and playback. Related standards activities have largely ignored the computational complexity and bandwidth constraints of wireless or Internet-based real-time video communications using devices such as cell phones or webcams. Telecommunications industry efforts to develop and standardize video codecs for applications such as video telephony and video conferencing have not yielded image size, quality, and frame-rate performance that match today's consumer expectations and market requirements for Internet and mobile video services. This paper reviews the constraints and the corresponding video codec requirements imposed by real-time, two-way mobile video applications. Several promising elements of a new mobile video codec architecture are identified, and more comprehensive computational complexity metrics and video quality metrics are proposed in order to support the design, testing, and standardization of these new mobile video codecs.

  308. Video Analytics for Indexing, Summarization and Searching of Video Archives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trease, Harold E.; Trease, Lynn L.

    This paper will be submitted to the proceedings of The Eleventh IASTED International Conference on Signal and Image Processing. Given a video or video archive, how does one effectively and quickly summarize, classify, and search the information contained within the data? This paper addresses these issues by describing a process for the automated generation of a table-of-contents and keyword, topic-based index tables that can be used to catalogue, summarize, and search large amounts of video data. Having the ability to index and search the information contained within the videos, beyond just metadata tags, provides a mechanism to extract and identify "useful" content from image and video data.
  309. Issues and advances in research methods on video games and cognitive abilities

    PubMed

    Sobczyk, Bart; Dobrowolski, Paweł; Skorko, Maciek; Michalak, Jakub; Brzezicka, Aneta

    2015-01-01

    The impact of video game playing on cognitive abilities has been the focus of numerous studies over the last 10 years. Some cross-sectional comparisons indicate the cognitive advantages of video game players (VGPs) over non-players (NVGPs) and the benefits of video game training, while others fail to replicate these findings. Though there is an ongoing discussion over methodological practices and their impact on observable effects, some elementary issues, such as the representativeness of recruited VGP groups and the lack of genre differentiation, have not yet been widely addressed. In this article we present objective and declarative gameplay time data gathered from large samples in order to illustrate how playtime is distributed over VGP populations. The implications of these data are then discussed in the context of previous studies in the field. We also argue in favor of differentiating video games based on their genre when recruiting study samples, as this form of classification reflects the core mechanics that they utilize and therefore provides a measure of insight into which cognitive functions are likely to be engaged most. Additionally, we present the Covert Video Game Experience Questionnaire as an example of how this sort of classification can be applied during the recruitment process.

  310. Wavelet based mobile video watermarking: spread spectrum vs. informed embedding

    NASA Astrophysics Data System (ADS)

    Mitrea, M.; Prêteux, F.; Duţă, S.; Petrescu, M.

    2005-11-01

    The cell phone expansion provides an additional direction for digital video content distribution: music clips, news, and sport events are more and more transmitted toward mobile users. Consequently, from the watermarking point of view, a new challenge should be taken up: very low bitrate contents (e.g., as low as 64 kbit/s) are now to be protected. Within this framework, the paper approaches for the first time the mathematical models for two random processes, namely the original video to be protected and a very harmful attack that any watermarking method should face, the StirMark attack. By applying an advanced statistical investigation (combining the Chi-square, Ro, Fisher, and Student tests) in the discrete wavelet domain, it is established that the popular Gaussian assumption can be used only very restrictively when describing the former process and has nothing to do with the latter. As these results can a priori determine the performances of several watermarking methods, both of spread spectrum and informed embedding types, they should be considered in the design stage.
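The kind of check described above can be reproduced in outline: take a wavelet subband of a frame and test whether its coefficients are plausibly Gaussian. The sketch below substitutes a single omnibus normality test for the paper's combination of Chi-square, Ro, Fisher, and Student tests, and runs on synthetic data (assumes SciPy and PyWavelets):

    import numpy as np
    from scipy import stats
    import pywt

    frame = np.random.rand(64, 64)               # stand-in for a luma frame
    _, (ch, cv, cd) = pywt.dwt2(frame, "haar")   # one-level 2-D DWT

    coeffs = cd.ravel()                          # diagonal-detail subband
    z = (coeffs - coeffs.mean()) / coeffs.std()  # standardize before testing
    stat, p = stats.normaltest(z)                # D'Agostino-Pearson omnibus test
    print("Gaussian plausible" if p > 0.05 else "Gaussian rejected")

Running such a test on real video subbands versus StirMark-attacked ones is the kind of evidence behind the paper's conclusion that the Gaussian assumption fits the two processes very differently.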
  311. Efficient biprediction decision scheme for fast high efficiency video coding encoding

    NASA Astrophysics Data System (ADS)

    Park, Sang-hyo; Lee, Seung-ho; Jang, Euee S.; Jun, Dongsan; Kang, Jung-Won

    2016-11-01

    An efficient biprediction decision scheme for high efficiency video coding (HEVC) is proposed for fast-encoding applications. For low-delay video applications, bidirectional prediction can be used to increase compression performance efficiently with previous reference frames. However, at the same time, the computational complexity of the HEVC encoder is significantly increased due to the additional biprediction search. Although some research has attempted to reduce this complexity, whether biprediction is strongly related to both motion complexity and the prediction modes of a coding unit has not yet been investigated. A method that avoids most compression-inefficient search points is proposed so that the computational complexity of the motion estimation process can be dramatically decreased. To determine whether biprediction is critical, the proposed method exploits the stochastic correlation of the context of prediction units (PUs): the direction of a PU and the accuracy of a motion vector. Through experimental results, the proposed method showed that the time complexity of biprediction can be reduced to 30% on average, outperforming existing methods in terms of encoding time, number of function calls, and memory access.
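In outline, a decision scheme of this kind gates the expensive bidirectional search on cheap features of the PU context. The sketch below only illustrates the gating pattern; the fields and the rule are invented for illustration and are not the paper's actual decision criteria:

    # Illustrative early-skip gate for the biprediction search.
    def try_biprediction(pu):
        # Hypothetical rule: skip bi-pred for strongly directional
        # partitions whose unidirectional motion vector is already
        # integer-accurate, since refinement rarely pays off there.
        if pu["partition"] in ("2NxN", "Nx2N") and pu["mv_is_integer"]:
            return False                 # reuse the uniprediction result
        return True                      # otherwise run the bi-pred search

    pu = {"partition": "2NxN", "mv_is_integer": True}
    print(try_biprediction(pu))          # -> False: bi-pred search skipped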
  312. Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes

    PubMed

    Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S

    2015-02-09

    A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics processing units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with the software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. This results in a great increase in the computational speed of the proposed method and enables real-time generation of the CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method for the practical application of electro-holographic 3-D displays.

  313. A longitudinal study of the association between violent video game play and aggression among adolescents

    PubMed

    Willoughby, Teena; Adachi, Paul J C; Good, Marie

    2012-07-01

    In the past two decades, correlational and experimental studies have found a positive association between violent video game play and aggression. There is less evidence, however, to support a long-term relation between these behaviors. This study examined sustained violent video game play and adolescent aggressive behavior across the high school years and directly assessed the socialization (violent video game play predicts aggression over time) versus selection hypotheses (aggression predicts violent video game play over time). Adolescents (N = 1,492, 50.8% female) were surveyed annually from Grade 9 to Grade 12 about their video game play and aggressive behaviors. Nonviolent video game play, frequency of overall video game play, and a comprehensive set of potential third variables were included as covariates in each analysis. Sustained violent video game play was significantly related to steeper increases in adolescents' trajectory of aggressive behavior over time. Moreover, greater violent video game play predicted higher levels of aggression over time, after controlling for previous levels of aggression, supporting the socialization hypothesis. In contrast, no support was found for the selection hypothesis. Nonviolent video game play also did not predict higher levels of aggressive behavior over time. Our findings, and the fact that many adolescents play video games for several hours every day, underscore the need for a greater understanding of the long-term relation between violent video games and aggression, as well as the specific game characteristics (e.g., violent content, competition, pace of action) that may be responsible for this association.

  314. Demolishing the competition: the longitudinal link between competitive video games, competitive gambling, and aggression

    PubMed

    Adachi, Paul J C; Willoughby, Teena

    2013-07-01

    The majority of research on the link between video games and aggression has focused on the violent content in games. In contrast, recent experimental research suggests that it is video game competition, not violence, that has the greatest effect on aggression in the short term. However, no researchers have examined the long-term relationship between video game competition and aggression. In addition, if competition in video games is a significant reason for the link between video game play and aggression, then other competitive activities, such as competitive gambling, also may predict aggression over time. In the current study, we directly assessed the socialization (competitive video game play and competitive gambling predict aggression over time) versus selection hypotheses (aggression predicts competitive video game play and competitive gambling over time). Adolescents (N = 1,492, 50.8% female) were surveyed annually from Grade 9 to Grade 12 about their video game play, gambling, and aggressive behaviors. Greater competitive video game play and competitive gambling predicted higher levels of aggression over time, after controlling for previous levels of aggression, supporting the socialization hypothesis. The selection hypothesis also was supported, as aggression predicted greater competitive video game play and competitive gambling over time, after controlling for previous competitive video game play and competitive gambling. Our findings, taken together with the fact that millions of adolescents play competitive video games every day and that competitive gambling may increase as adolescents transition into adulthood, highlight the need for a greater understanding of the relationship between competition and aggression.
  315. Student use of flipped classroom videos in a therapeutics course

    PubMed

    Patanwala, Asad E; Erstad, Brian L; Murphy, John E

    To evaluate the extent of student use of flipped classroom videos. This was a cross-sectional study conducted in a college of pharmacy therapeutics course in the United States. In one section of the course (four sessions) all content was provided in the form of lecture videos that students had to watch prior to class. Class time was spent discussing patient cases. For half of the sessions, there was an electronic quiz due prior to class. The outcome measure was video view time in minutes. Adequate video view time was defined as viewing ≥75% of the total video duration. Video view time was compared with and without quizzes using the Wilcoxon signed-rank test. There were 100 students in the class and all were included in the study. Overall, 74 students had adequate video view time prior to session 1, which decreased to 53 students for session 2, 53 students for session 3, and 36 students for session 4. Median video view time was greater when a quiz was required (80 minutes (IQR: 38-114) versus 69 minutes (IQR: 3-105), p < 0.001). The mean score on the exam was 84 ± 8 points (out of 100). There was a significant association between video view time (per 50% increment) and score on the exam (coefficient 2.52; 95% CI: 0.79-4.26; p = 0.005; model R² = 7.8%). Student preparation prior to the flipped classroom is low and decreases with time. Preparation is higher when there is a quiz required. Copyright © 2016 Elsevier Inc. All rights reserved.
  316. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)". The system is embodied mostly in compact, lightweight, portable units denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
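The quoted repeat period is consistent with a seconds counter 32 bits wide, since 2^32 seconds is slightly more than 136 years. A one-line check (the 32-bit field is an inference for illustration; the article does not give the code layout):

    seconds = 2 ** 32                        # span of an assumed 32-bit seconds field
    years = seconds / (365.25 * 24 * 3600)   # convert using Julian years
    print(f"{years:.1f} years")              # -> 136.1 years before the code repeats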
  317. Video Processes in Teacher Education Programs; Scope, Techniques, and Assessment. Multi-State Teacher Education Project, Monograph III

    ERIC Educational Resources Information Center

    Bosley, Howard E.; And Others

    "Video Processes Are Changing Teacher Education" by Howard Bosley (the first of five papers comprising this document) discusses the Multi-State Teacher Education Project (M-STEP) experimentation with media; it lists various uses of video processes, concentrating specifically on microteaching and the use of simulation and critical…

  318. Segment scheduling method for reducing 360° video streaming latency

    NASA Astrophysics Data System (ADS)

    Gudumasu, Srinivas; Asbun, Eduardo; He, Yong; Ye, Yan

    2017-09-01

    360° video is an emerging new format in the media industry enabled by the growing availability of virtual reality devices. It provides the viewer a new sense of presence and immersion. Compared to conventional rectilinear video (2D or 3D), 360° video poses a new and difficult set of engineering challenges on video processing and delivery. Enabling a comfortable and immersive user experience requires very high video quality and very low latency, while the large video file size poses a challenge to delivering 360° video in a quality manner at scale. Conventionally, 360° video represented in equirectangular or other projection formats can be encoded as a single standards-compliant bitstream using existing video codecs such as H.264/AVC or H.265/HEVC. Such a method usually needs very high bandwidth to provide an immersive user experience. At the client side, much of this high bandwidth, and the computational power used to decode the video, is wasted because the user only watches a small portion (i.e., the viewport) of the entire picture. Viewport-dependent 360° video processing and delivery approaches spend more bandwidth on the viewport than on non-viewports and are therefore able to reduce the overall transmission bandwidth. This paper proposes a dual-buffer segment scheduling algorithm for viewport-adaptive streaming methods to reduce latency when switching between high-quality viewports in 360° video streaming. The approach decouples the scheduling of viewport segments and non-viewport segments to ensure that the viewport segment requested matches the latest user head orientation. A base-layer buffer stores all lower-quality segments, and a viewport buffer stores high-quality viewport segments corresponding to the most recent viewer's head orientation. The scheduling scheme determines the viewport requesting time based on the buffer status and the head orientation. This paper also discusses how to deploy the proposed scheduling design for various viewport-adaptive video streaming methods. The proposed dual-buffer segment scheduling method is implemented in an end-to-end tile-based 360° viewport-adaptive video streaming platform, where the entire 360° video is divided into a number of tiles, and each tile is independently encoded into multiple quality-level representations. The client requests different quality-level representations of each tile based on the viewer's head orientation and the available bandwidth, and then composes all tiles together for rendering. The simulation results verify that the proposed dual-buffer segment scheduling algorithm reduces the viewport switch latency and utilizes the available bandwidth more efficiently. As a result, a more consistent immersive 360° video viewing experience can be presented to the user.
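The decoupling of the two buffers can be summarized in a few lines of scheduling logic: keep the base-layer buffer deep, and keep the viewport buffer shallow so that requests track the latest head orientation. A minimal sketch; the class, thresholds, and return convention are illustrative, not the paper's implementation:

    class DualBufferScheduler:
        def __init__(self, base_target=10.0, viewport_target=2.0):
            self.base_target = base_target          # seconds of base layer to keep buffered
            self.viewport_target = viewport_target  # shallow viewport buffer for freshness

        def next_request(self, base_level, viewport_level, head_yaw):
            # Base layer first: it protects against stalls in any direction.
            if base_level < self.base_target:
                return ("base", None)
            # Viewport tiles are requested late, against the newest yaw.
            if viewport_level < self.viewport_target:
                return ("viewport", head_yaw)
            return ("idle", None)

    sched = DualBufferScheduler()
    print(sched.next_request(base_level=12.0, viewport_level=0.5, head_yaw=30.0))
    # -> ('viewport', 30.0): a fresh high-quality viewport request at the latest yaw

Requesting viewport segments as late as the buffer allows is what reduces the mismatch between the requested tiles and the head orientation at playback time.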
  319. Cognitive integration of asynchronous natural or non-natural auditory and visual information in videos of real-world events: an event-related potential study

    PubMed

    Liu, B; Wang, Z; Wu, G; Meng, X

    2011-04-28

    In this paper, we aim to study the cognitive integration of asynchronous natural or non-natural auditory and visual information in videos of real-world events. Videos with asynchronous, semantically consistent or inconsistent natural sound or speech were used as stimuli, in order to compare the differences and similarities between multisensory integration of videos with asynchronous natural sound and with speech. The event-related potential (ERP) results showed that N1 and P250 components were elicited irrespective of whether natural sounds were consistent or inconsistent with critical actions in the videos. Videos with inconsistent natural sound elicited N400-P600 effects compared to videos with consistent natural sound, which was similar to the results from unisensory visual studies. Videos with semantically consistent or inconsistent speech could both elicit N1 components. Meanwhile, videos with inconsistent speech elicited N400-LPN effects in comparison with videos with consistent speech, which showed that this semantic processing was probably related to recognition memory. Moreover, the N400 effect elicited by videos with semantically inconsistent speech was larger and later than that elicited by videos with semantically inconsistent natural sound. Overall, multisensory integration of videos with natural sound or speech can be roughly divided into two stages. For videos with natural sound, the first stage might reflect the connection between the received information and the information stored in memory, and the second might stand for the evaluation process of inconsistent semantic information. For videos with speech, the first stage was similar to the first stage for videos with natural sound, while the second might be related to a recognition-memory process. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.

  320. Implementation of MPEG-2 encoder to multiprocessor system using multiple MVPs (TMS320C80)

    NASA Astrophysics Data System (ADS)

    Kim, HyungSun; Boo, Kenny; Chung, SeokWoo; Choi, Geon Y.; Lee, YongJin; Jeon, JaeHo; Park, Hyun Wook

    1997-05-01

    This paper presents an efficient algorithm mapping for real-time MPEG-2 encoding on the KAIST image computing system (KICS), which has a parallel architecture using five multimedia video processors (MVPs). The MVP is a general-purpose digital signal processor (DSP) from Texas Instruments. It combines one floating-point processor and four fixed-point DSPs on a single chip. The KICS uses the MVP as its primary processing element (PE). Two PEs form a cluster, and there are two processing clusters in the KICS. The real-time MPEG-2 encoder is implemented through spatial and functional partitioning strategies. The encoding process for a spatially partitioned half of the video input frame is assigned to one processing cluster. Two PEs perform the functionally partitioned MPEG-2 encoding tasks in a pipelined operation mode. One PE of a cluster carries out the transform-coding part, and the other performs the predictive-coding part of the MPEG-2 encoding algorithm. One MVP among the five is used for system control and for the interface with the host computer.
321. Real-time video analysis for retail stores

    NASA Astrophysics Data System (ADS)

    Hassan, Ehtesham; Maurya, Avinash K.

    2015-03-01

    With the advancement of video processing technologies, we can capture subtle human responses in a retail store environment, and these play a decisive role in store management. In this paper, we present a novel surveillance-video-based analytic system for retail stores targeting localized and global traffic estimates. Developing an intelligent system for human traffic estimation in real life poses a challenging problem because of the variation and noise involved. In this direction, we begin with a novel human tracking system based on an intelligent combination of motion-based and image-level object detection, and we demonstrate an initial evaluation of this approach on an available standard dataset, with promising results. Exact traffic estimation in a retail store requires correct separation of customers from service providers, and we present a role-based human classification framework using a Gaussian mixture model for this task, with a novel feature descriptor, the graded colour histogram, defined for object representation. Using our role-based human classification and tracking system, we define a computationally efficient framework for generating two types of analytics: region-specific people counts and dwell-time estimates. This system has been extensively evaluated and tested on four hours of real-life video captured from a retail store.
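    The role-classification step lends itself to a compact sketch: fit one Gaussian mixture per role on colour-histogram features and assign each tracked person to the role with the higher likelihood. The paper's graded colour histogram is not specified in the abstract, so a plain joint RGB histogram stands in for it below, and scikit-learn is an assumed dependency; all parameter values are illustrative.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def colour_histogram(pixels, bins=4):
            # Stand-in for the graded colour histogram: a joint RGB histogram
            # over an (N, 3) pixel array, normalized and flattened.
            hist, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=[(0, 256)] * 3)
            return (hist / max(hist.sum(), 1.0)).ravel()

        class RoleClassifier:
            """One GMM per role; classify a person by maximum log-likelihood."""
            def __init__(self, n_components=4):
                self.models = {
                    role: GaussianMixture(n_components=n_components,
                                          covariance_type="diag")
                    for role in ("customer", "staff")
                }

            def fit(self, features_by_role):
                for role, feats in features_by_role.items():
                    self.models[role].fit(np.asarray(feats))

            def predict(self, feature):
                x = np.asarray(feature).reshape(1, -1)
                return max(self.models,
                           key=lambda r: self.models[r].score_samples(x)[0])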
322. Enhanced Video-Oculography System

    NASA Technical Reports Server (NTRS)

    Moore, Steven T.; MacDougall, Hamish G.

    2009-01-01

    A previously developed video-oculography system has been enhanced for use in measuring the vestibulo-ocular reflexes of a human subject in a centrifuge, motor vehicle, or other setting. The system as previously developed included a lightweight digital video camera mounted on goggles. The left eye was illuminated by an infrared light-emitting diode via a dichroic mirror, and the camera captured images of the left eye in infrared light. To extract eye-movement data, the digitized video images were processed by software running on a laptop computer. Eye movements were calibrated by having the subject view a target pattern, fixed with respect to the subject's head, generated by a goggle-mounted laser with a diffraction grating. The enhanced system includes a second camera for imaging the scene from the subject's perspective, and two inertial measurement units (IMUs), one mounted on the goggles and the other on the centrifuge or vehicle frame, for measuring the linear accelerations and rotation rates from which head movements are computed. All eye-movement and head-motion data are time-stamped, and the subject's point of regard is superimposed on each scene image to enable analysis of gaze patterns in real time.

323. "I'll be your cigarette--light me up and get on with it": examining smoking imagery on YouTube.

    PubMed

    Forsyth, Susan R; Malone, Ruth E

    2010-08-01

    Smoking imagery on the online video-sharing site YouTube is prolific and easily accessed, yet no studies have examined how this content changes over time. We studied the primary message and genre of YouTube videos about smoking across two time periods. In May and July 2009, we used "cigarettes" and "smoking cigarettes" to retrieve the top 20 videos on YouTube by relevance and by view count. After eliminating duplicates, 124 videos were coded for time period, overall message, genre, and brand mentions, and the data were analyzed using descriptive statistics. Videos portraying smoking positively far outnumbered smoking-negative videos in both samples, increasing as a percentage of total views across the time period, and 58% of the videos in the second sample were new. Among smoking-positive videos, music and magic tricks were most numerous, increasing from 66% to nearly 80% in July, with music accounting for most of the increase. Marlboro was the most frequently mentioned brand. Videos portraying smoking positively predominate on YouTube, and this pattern persists across time. Tobacco control advocates could use YouTube more effectively to counterbalance prosmoking messages.
325. Use of video to facilitate sideline concussion diagnosis and management decision-making.

    PubMed

    Davis, Gavin; Makdissi, Michael

    2016-11-01

    Video analysis can provide critical information to improve diagnostic accuracy and the speed of clinical decision-making in potential cases of concussion. The objective of this prospective cohort study was to validate a hierarchical flowchart for the assessment of video signs of concussion and to determine whether its implementation could improve the process of game-day video assessment. All impacts and collisions potentially resulting in a concussion were identified during the 2012 and 2013 Australian Football League (AFL) seasons. Consensus definitions were developed for the clinical signs associated with concussion, and a hierarchical flowchart was developed based on the reliability and validity of the video signs. Ninety videos were assessed: 45 incidents of clinically confirmed concussion and 45 cases in which no concussion was sustained. Each video was examined using the hierarchical flowchart, and a single response was given for each video based on the highest-ranking element in the flowchart. No protective action, impact seizure, motor incoordination, or blank/vacant look was the highest-ranked video sign in almost half of the clinically confirmed concussions, but in only 8.8% of non-concussed individuals, whereas facial injury, clutching at the head, or slow to get up was the highest-ranked sign in 77.7% of non-concussed individuals. The study suggests that implementing a flowchart model could improve the timeliness of concussion assessment, and it identifies the video signs that should trigger automatic removal from play. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
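    The flowchart reduces to a simple rule: report only the highest-ranked sign observed in an incident. A toy Python rendering follows; the ordering of the signs here is a plausible reading of the abstract, not the validated AFL instrument, so treat the ranking as illustrative.

        SIGN_RANKING = [          # highest-ranked signs first (illustrative order)
            "impact seizure",
            "no protective action",
            "motor incoordination",
            "blank/vacant look",
            "facial injury",
            "clutching at the head",
            "slow to get up",
        ]

        def classify_incident(observed_signs):
            """Return the single highest-ranked sign seen in a video, or None."""
            for sign in SIGN_RANKING:
                if sign in observed_signs:
                    return sign
            return None

        # The single response for a video follows from its highest-ranked element.
        print(classify_incident({"slow to get up", "motor incoordination"}))
        # -> "motor incoordination"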
326. A Formative Evaluation of CU-SeeMe

    DTIC Science & Technology

    1995-02-01

    CU-SeeMe is a video conferencing software package that was designed and programmed at Cornell University. The program works with the TCP/IP network… protocol and allows two or more parties to conduct a real-time video conference with full audio support. In this paper we evaluate CU-SeeMe through… caused the problem and why. This helps in the process of formulating solutions for observed usability problems. All the testing results are combined in the Appendix in an illustrated partial redesign of the CU-SeeMe interface.

327. Video semaphore decoding for free-space optical communication

    NASA Astrophysics Data System (ADS)

    Last, Matthew; Fisher, Brian; Ezekwe, Chinwuba; Hubert, Sean M.; Patel, Sheetal; Hollar, Seth; Leibowitz, Brian S.; Pister, Kristofer S. J.

    2001-04-01

    Using real-time image processing, we have demonstrated a low-bit-rate free-space optical communication system at a range of more than 20 km with an average optical transmission power of less than 2 mW. The transmitter is an autonomous one-cubic-inch microprocessor-controlled sensor node with a laser-diode output. The receiver is a standard CCD camera with a 1-inch-aperture lens, together with both hardware and software implementations of the video semaphore decoding algorithm. With this system, sensor data can be reliably transmitted 21 km from San Francisco to Berkeley.

328. Video Guidance Sensor System With Integrated Rangefinding

    NASA Technical Reports Server (NTRS)

    Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor); Howard, Richard T. (Inventor); Roe, Fred Davis, Jr. (Inventor); Bell, Joseph L. (Inventor)

    2006-01-01

    A video guidance sensor system for use, e.g., in automated docking of a chase vehicle with a target vehicle. The system includes an integrated rangefinder subsystem that uses time-of-flight measurements to measure range. The rangefinder subsystem includes a pair of matched photodetectors for detecting the output laser beam and the return laser beam, respectively; a buffer memory for storing the photodetector outputs; and a digitizer connected to the buffer memory and comprising dual amplifiers and analog-to-digital converters. A digital signal processor processes the digitized output to produce a range measurement.
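    Time-of-flight ranging of this kind comes down to estimating the delay between the outgoing and return photodetector traces and converting it to distance. A numpy sketch of that computation follows; the trace names and the cross-correlation delay estimator are illustrative assumptions, since the patent abstract does not specify the processing.

        import numpy as np

        C = 299_792_458.0  # speed of light in m/s

        def range_from_time_of_flight(outgoing, returning, sample_rate_hz):
            # Estimate the round-trip delay as the lag of the correlation peak
            # between the two digitized photodetector traces.
            corr = np.correlate(returning, outgoing, mode="full")
            lag_samples = corr.argmax() - (len(outgoing) - 1)
            round_trip_s = lag_samples / sample_rate_hz
            # Divide by two: the light travels out to the target and back.
            return C * round_trip_s / 2.0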
329. Baby FaceTime: can toddlers learn from online video chat?

    PubMed

    Myers, Lauren J; LeWitt, Rachel B; Gallo, Renee E; Maselli, Nicole M

    2017-07-01

    There is abundant evidence for the 'video deficit': children under 2 years old learn better in person than from video. We evaluated whether these findings apply to video chat by testing whether children aged 12-25 months could form relationships with, and learn from, on-screen partners. We manipulated social contingency: children experienced either real-time FaceTime conversations or pre-recorded videos as a partner taught novel words, actions, and patterns. Children were attentive and responsive in both conditions, but only children in the FaceTime group responded to the partner in a temporally synced manner. After one week, children in the FaceTime condition (but not the Video condition) preferred and recognized their partner and learned more novel patterns, and the oldest children learned more novel words. The results extend previous studies by demonstrating that children under 2 years show social and cognitive learning from video chat because it retains social contingency. A video abstract of this article can be viewed at: https://youtu.be/rTXaAYd5adA. © 2016 John Wiley & Sons Ltd.

330. Desktop Video Productions. ICEM Guidelines Publications No. 6

    ERIC Educational Resources Information Center

    Taufour, P. A.

    Desktop video consists of integrating the processing of the video signal into a microcomputer. This definition implies that desktop video can take multiple forms, such as virtual editing or digital video. Desktop video, which does not imply any particular technology, has been approached in different ways in different technical fields. It remains a…
331. Optical cell tracking analysis using a straight-forward approach to minimize processing time for high frame rate data

    NASA Astrophysics Data System (ADS)

    Seeto, Wen Jun; Lipke, Elizabeth Ann

    2016-03-01

    Tracking of rolling cells in in vitro experiments is now commonly performed with customized computer programs. In most cases, two critical challenges continue to limit analysis of cell rolling data: long computation times due to the complexity of tracking algorithms, and difficulty in accurately correlating a given cell with itself from one frame to the next, typically because of errors caused by cells that come close to or into contact with one another. In this paper, we have developed a sophisticated yet simple and highly effective rolling-cell tracking system that addresses these two problems. The optical cell tracking analysis (OCTA) system first employs ImageJ for cell identification in each frame of a cell rolling video. Custom MATLAB code then uses the geometric and positional information of all cells as the primary parameters for matching each individual cell with itself between consecutive frames, avoiding errors when tracking cells in close proximity. Once the cells are matched, rolling velocity can be obtained for further analysis. The use of ImageJ for cell identification eliminates the need for high-level MATLAB image-processing expertise; only fundamental MATLAB syntax is necessary for cell matching. OCTA has been implemented in the tracking of endothelial colony forming cell (ECFC) rolling under shear. The processing time needed to obtain tracked cell data from a 2-min ECFC rolling video recorded at 70 frames per second, totaling over 8000 frames, is less than 6 min on a computer with an Intel® Core™ i7 CPU at 2.80 GHz (8 CPUs). This cell tracking system benefits cell rolling analysis by substantially reducing the time required for post-acquisition processing of high-frame-rate recordings and by preventing tracking errors when individual cells come close to one another.
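    Frame-to-frame matching on geometry and position can be sketched very compactly: gate candidate matches on centroid distance and size similarity, then match greedily. The sketch below is in Python rather than the paper's MATLAB, and the gating thresholds are illustrative, not the published values.

        import numpy as np

        def match_cells(prev, curr, max_dist=15.0, max_area_ratio=1.3):
            """Greedily match cells between consecutive frames.

            `prev` and `curr` are lists of dicts with "centroid" (x, y) and
            "area", e.g. as exported from an ImageJ particle analysis.
            """
            matches, used = [], set()
            for i, a in enumerate(prev):
                best_j, best_d = None, max_dist
                for j, b in enumerate(curr):
                    if j in used:
                        continue
                    d = np.hypot(*np.subtract(a["centroid"], b["centroid"]))
                    ratio = max(a["area"], b["area"]) / max(min(a["area"], b["area"]), 1e-9)
                    # Gate on both proximity and size similarity to avoid
                    # swapping identities when cells come close together.
                    if d < best_d and ratio <= max_area_ratio:
                        best_j, best_d = j, d
                if best_j is not None:
                    used.add(best_j)
                    matches.append((i, best_j))
            return matches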
332. 47 CFR 79.3 - Video description of video programming

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    … description per calendar quarter, either during prime time or on children's programming; (2) Television… technical capability necessary to pass through the video description, unless using the technology for… video description per calendar quarter during prime time or on children's programming, on each channel…

334. Real-time video quality monitoring

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Narvekar, Niranjan; Wang, Beibei; Ding, Ran; Zou, Dekun; Cash, Glenn; Bhagavathy, Sitaram; Bloom, Jeffrey

    2011-12-01

    ITU-T Recommendation G.1070 is a standardized opinion model for video telephony applications that uses video bitrate, frame rate, and packet-loss rate to estimate video quality. However, the model was originally designed as an offline quality-planning tool: it cannot be used directly for quality monitoring, since the three input parameters are not readily available within a network or at the decoder, and there is considerable room for improving the metric's accuracy. In this article, we present a real-time video quality monitoring solution based on this Recommendation. We first propose a scheme to efficiently estimate the three parameters from video bitstreams, so that the model can serve as a real-time monitoring tool, and we then propose an enhanced algorithm based on the G.1070 model that provides more accurate quality prediction. Finally, we present an emerging application of real-time quality measurement to the management of transmitted videos, especially those delivered to mobile devices.
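    The overall shape of a G.1070-style estimate is easy to illustrate: a coding-quality term that rises with bitrate and is penalized for off-optimal frame rates, scaled down exponentially by packet loss. The sketch below follows only that structure; all coefficient values are illustrative placeholders, not the Recommendation's fitted constants, which are derived per codec and display size.

        import math

        def g1070_style_quality(bitrate_kbps, frame_rate, packet_loss_pct,
                                opt_fps=30.0, v1=3.8, v2=600.0,
                                d_fr=1.0, d_ppl=2.5):
            """Shape of a G.1070-style video quality score on a 1-5 MOS scale."""
            # Basic coding quality: rises with bitrate and saturates toward 1 + v1.
            i_ofr = v1 * (1.0 - math.exp(-bitrate_kbps / v2))
            # Log-domain Gaussian penalty for deviating from the optimal frame rate.
            i_coding = i_ofr * math.exp(
                -(math.log(frame_rate / opt_fps) ** 2) / (2 * d_fr ** 2))
            # Packet loss degrades quality exponentially, as in the G.1070 model.
            return 1.0 + i_coding * math.exp(-packet_loss_pct / d_ppl)

        print(round(g1070_style_quality(1000, 25, 1.0), 2))
        # -> about 3.0 on the 1-5 scale for 1 Mbps, 25 fps, 1% loss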
335. Overview of image processing tools to extract physical information from JET videos

    NASA Astrophysics Data System (ADS)

    Craciunescu, T.; Murari, A.; Gelfusa, M.; Tiseanu, I.; Zoita, V.; EFDA Contributors, JET

    2014-11-01

    In magnetic confinement nuclear fusion devices such as JET, the last few years have witnessed a significant increase in the use of digital imagery, not only for the surveying and control of experiments but also for the physical interpretation of results. More than 25 cameras are routinely used for imaging on JET in the infrared (IR) and visible spectral regions. These cameras can produce up to tens of gigabytes per shot, and their information content can vary widely depending on the experimental conditions. However, the relevant information about the underlying physical processes is generally of much reduced dimensionality compared to the recorded data, and extracting it, so that these diagnostics can be fully exploited, is a challenging task. The image analysis consists, in most cases, of inverse problems that are typically mathematically ill-posed. The typology of objects to be analysed is very wide, and the images are usually affected by noise, low contrast, low grey-level depth resolution, reshaping of moving objects, and so on. Moreover, plasma events have time constants of milliseconds or tens of milliseconds, which imposes tough conditions on real-time applications. In the last few years, new tools and methods have been developed on JET for physical information retrieval. The methodology of optical flow has allowed, under certain assumptions, the derivation of information about the dynamics of video objects associated with different physical phenomena, such as instabilities, pellets, and filaments. The approach has been extended to approximate the optical flow within the MPEG compressed domain, allowing manipulation of the large JET video databases and, in specific cases, even real-time data processing. The fast visible camera may provide new information that is potentially useful for disruption prediction, and a set of methods based on extracting structural information from the visual scene has been developed for the automatic detection of MARFE (multifaceted asymmetric radiation from the edge) occurrences, which precede disruptions in density-limit discharges. An original spot-detection method has been developed for large surveys of JET videos and for assessing long-term trends in their evolution. Analysis of JET IR videos recorded during operation with the ITER-like wall allows retrieval of data and hence correlation of the evolution of spot properties with macroscopic events, in particular series of intentional disruptions.

336. Bridging the Gap: Understanding Eye Movements and Attentional Mechanisms is Key to Improving Amblyopia Treatment

    NASA Astrophysics Data System (ADS)

    Gambacorta, Christina Grace

    Amblyopia is a developmental visual disorder resulting in sensory, motor, and attentional deficits, including delays in both saccadic and manual reaction times. It is unclear whether this delay is due to differences in sensory processing of the stimulus or to the processes required to disengage, shift, and re-engage attention when moving the eye from fixation to a saccadic target. In the first experiment, we compare asymptotic saccadic and manual reaction times between the two eyes, using equivalent stimulus strength to account for differences in sensory processing.
In a follow-up study, we modulate reaction time by removing the fixation dot, which is thought to release spatial attention at the fovea and reduces reaction time in normal observers. Finally, we discuss the implications of these findings for future amblyopia treatment, specifically dichoptic video game playing. Playing video games may help engage the attentional network, leading to greater improvements than the traditional treatment of patching the non-amblyopic eye. Further, when treatment involves both eyes, fixation stability may be improved during the therapeutic intervention, yielding a better outcome than playing a video game with a patch over the non-amblyopic eye.

337. Bitstream decoding processor for fast entropy decoding of variable length coding-based multiformat videos

    NASA Astrophysics Data System (ADS)

    Jo, Hyunho; Sim, Donggyu

    2014-06-01

    We present a bitstream decoding processor for entropy decoding of variable-length-coding-based multiformat videos. Since most of the computational complexity of entropy decoders comes from bitstream accesses and the table look-up process, the developed bitstream processing unit (BsPU) has several designated instructions for accessing bitstreams and for minimizing branch operations in the table look-up process. In addition, the bitstream-access instruction can remove the emulation prevention bytes (EPBs) of H.264/AVC without initial delay, repeated memory accesses, or an additional buffer. Experimental results show that the proposed EPB-removal method achieves a speed-up of 1.23 times over the conventional method. In addition, the BsPU achieves speed-ups of 5.6 and 3.5 times in entropy decoding of H.264/AVC and MPEG-4 Visual bitstreams, respectively, compared to an existing processor without designated instructions and a new table-mapping algorithm. The BsPU is implemented on a Xilinx Virtex-5 LX330 field-programmable gate array and processes MPEG-4 Visual (ASP, Level 5) and H.264/AVC (Main Profile, Level 4) bitstreams in real time at a core clock speed of under 250 MHz.
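    For readers unfamiliar with EPBs: H.264/AVC Annex B streams insert a 0x03 byte after any 0x00 0x00 pair inside a NAL unit so that start-code prefixes cannot be emulated, and the decoder must strip these bytes back out. A reference-style Python sketch of the removal, the operation the BsPU accelerates in hardware, looks like this:

        def remove_emulation_prevention_bytes(nal_payload: bytes) -> bytes:
            """Strip H.264/AVC emulation prevention bytes from a NAL payload."""
            out = bytearray()
            zeros = 0  # count of consecutive 0x00 bytes seen in the input
            for b in nal_payload:
                if zeros >= 2 and b == 0x03:
                    zeros = 0      # drop the emulation prevention byte itself
                    continue
                out.append(b)
                zeros = zeros + 1 if b == 0x00 else 0
            return bytes(out)

        # 00 00 03 01 decodes to 00 00 01: the 0x03 exists only to keep a
        # start-code prefix from appearing inside the NAL unit.
        assert remove_emulation_prevention_bytes(b"\x00\x00\x03\x01") == b"\x00\x00\x01"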
338. Optical observations of electrical activity in cloud discharges

    NASA Astrophysics Data System (ADS)

    Vayanganie, S. P. A.; Fernando, M.; Sonnadara, U.; Cooray, V.; Perera, C.

    2018-07-01

    The temporal variation of the luminosity of seven natural cloud-to-cloud lightning channels was studied and the results are presented. The channels were recorded with a high-speed video camera at 5000 fps (frames per second) and a pixel resolution of 512 × 512 at three locations in Sri Lanka, in the tropics. The luminosity variation of each channel with time was obtained by analyzing the image sequences, and the recorded video frames were studied together with the luminosity variation to understand the cloud discharge process. Image analysis techniques were also used to characterize the channels. Cloud flashes show more luminosity variability than ground flashes, and most of the time they start with a leader that has no stepping process. The channel width and the standard deviation of the intensity variation across the channel were obtained for each cloud flash; the brightness variation across the channel follows a Gaussian distribution. The average duration of cloud flashes that start with a non-stepped leader was 180.83 ms. The identified characteristics were matched against existing models to understand the cloud-flash process. The study further confirms that cloud discharges are not confined to a single process: the observations show that a cloud flash is a basic lightning discharge that transfers charge between two charge centers without relying on one specific mechanism.

339. Evaluation of EPE Videos in Different Phases of a Learning Process

    ERIC Educational Resources Information Center

    Kolas, Line; Munkvold, Robin; Nordseth, Hugo

    2012-01-01

    The goal of the paper is to present possible uses of EPE videos in different phases of a learning and teaching process. The paper is based on an evaluation of EPE (easy production educational) videos. The evaluation framework used in this study divides the teaching and learning process into four main phases: 1) the preparation phase, 2) the…

340. A Cloud-Based Architecture for Smart Video Surveillance

    NASA Astrophysics Data System (ADS)

    Valentín, L.; Serrano, S. A.; Oves García, R.; Andrade, A.; Palacios-Alonso, M. A.; Sucar, L. Enrique

    2017-09-01

    Turning a city into a smart city has attracted considerable attention. A smart city can be seen as one that uses digital technology not only to improve the quality of people's lives but also to have a positive impact on the environment, while at the same time offering efficient and easy-to-use services. A fundamental aspect of a smart city is people's safety and welfare; a good security system therefore becomes a necessity, because it allows us to detect and identify potential risk situations and then take appropriate decisions to help people or even prevent criminal acts.
In this paper we present an architecture for automated video surveillance based on the cloud computing schema, capable of acquiring video streams from a set of networked cameras, processing that information, automatically detecting, labelling, and highlighting security-relevant events, storing the information, and providing situational awareness so as to minimize the response time needed to take appropriate action.

341. Enhancing surgical safety using digital multimedia technology.

    PubMed

    Dixon, Jennifer L; Mukhopadhyay, Dhriti; Hunt, Justin; Jupiter, Daniel; Smythe, William R; Papaconstantinou, Harry T

    2016-06-01

    The purpose of this study was to examine whether incorporating digital and video multimedia components improved performance of the surgical time-out during a surgical safety checklist. A prospective pilot study was designed to implement a multimedia time-out, including a patient video, and participating staff were surveyed before and after the intervention (Likert scale: 1, strongly disagree, to 5, strongly agree). Employee satisfaction was high for both time-out procedures; however, employees reported improved clarity of patient identification (P < .05) and operative laterality (P < .05) with the digital method. About 87% of respondents preferred the digital version to the standard time-out (75% of anesthesia staff, 89% of surgeons, 93% of nursing staff). Although the duration of the time-out increased (49 versus 79 seconds for the standard and digital time-outs, respectively; P < .001), there was significant improvement in the performance of key safety elements. The multimedia time-out allows improved participation by the surgical team and is preferred to the standard time-out process. Copyright © 2015 Elsevier Inc. All rights reserved.
342. A model for a PC-based, universal-format, multimedia digitization system: moving beyond the scanner.

    PubMed

    McEachen, James C; Cusack, Thomas J; McEachen, John C

    2003-08-01

    Digitizing images for use in case presentations from hardcopy films, slides, photographs, negatives, books, and videos can be a challenging task. Scanners and digital cameras have become standard tools of the trade; unfortunately, using these devices to digitize multiple images in many different media formats can be time-consuming and in some cases unachievable. The authors' goal was to create a PC-based solution for digitizing multiple media formats in a timely fashion while maintaining adequate image presentation quality. The solution makes use of off-the-shelf hardware, including a digital document camera (DDC), a VHS video player, and a video-editing kit. With the assistance of five staff radiologists, the authors examined the quality of multiple image types digitized with this equipment and quantified the speed of digitization of various media using the DDC and the video-editing kit. The five staff radiologists rated the digitized angiography, CT, and MR images as adequate to excellent for use in teaching files and case presentations, while digitized plain films received an average rating of adequate. With regard to performance, the authors realized a 68% improvement in the time required to digitize hardcopy films using the DDC instead of a professional-quality scanner. The PC-based solution provides a means of digitizing multiple images from many different types of media in a timely fashion while maintaining adequate image presentation quality.

343. Is video gaming, or video game addiction, associated with depression, academic achievement, heavy episodic drinking, or conduct problems?

    PubMed

    Brunborg, Geir Scott; Mentzoni, Rune Aune; Frøyland, Lars Roar

    2014-03-01

    While the relationships between video game use and negative consequences are debated, the relationships between video game addiction and negative consequences are fairly well established. However, previous studies suffer from methodological weaknesses that may have caused biased results, and there is a need for further investigation using methods that avoid omitted-variable bias. Two-wave panel data were used from two surveys of 1,928 Norwegian adolescents aged 13 to 17 years. The surveys included measures of video game use, video game addiction, depression, heavy episodic drinking, academic achievement, and conduct problems, and the data were analyzed using first-differencing, a regression method that is unbiased by time-invariant individual factors.
Video game addiction was related to depression, lower academic achievement, and conduct problems, but time spent on video games was not related to any of the studied negative outcomes. These findings are in line with a growing number of studies that have failed to find relationships between time spent on video games and negative outcomes. The study is also consistent with previous work in that video game addiction was related to other negative outcomes, but it makes the added contribution that these relationships are unbiased by time-invariant individual effects. Future research should nevertheless aim at establishing the temporal order of the supposed causal effects. Spending time playing video games does not entail negative consequences, but adolescents who experience problems related to video games are likely to experience problems in other facets of life as well.
345. [How to produce a video to promote HIV testing in men who have sex with men?]

    PubMed

    Menacho, Luis A; Blas, Magaly M

    2015-01-01

    The aim of this study was to describe the process of designing and producing a video to promote HIV testing among Peruvian men who have sex with men (MSM). The process involved the following steps: identifying suitable theories of behavior change; identifying key messages and video features; developing a script that would captivate the target audience; working with an experienced production company; and piloting the video. A video depicting everyday situations involving risk of HIV infection was the format preferred by participants. The key messages and theoretical constructs identified were used to create the video's scenes. Participants identified with the main, nine-minute video, which they considered clear and dynamic. It is necessary to work with the target population to design a video that matches their preferences.

346. Video Observation as a Tool to Analyze and Modify an Electronics Laboratory

    NASA Astrophysics Data System (ADS)

    Coppens, Pieter; Van den Bossche, Johan; De Cock, Mieke

    2016-12-01

    Laboratories are an important part of science and engineering education, especially in the field of electronics, yet very little research exists into the benefits of such labs for student learning. In particular, it is not well known what students do and, even more importantly, think during electronics laboratories. We therefore conducted a study based on video observation of second-year students at three university campuses in Belgium during a traditional lab on first-order RC filters. In this laboratory, students spent the majority of their time performing measurements and very little time processing or discussing the results, which in turn meant that hardly any time was spent talking about content knowledge. Based on those observations, a new laboratory was designed that includes preparation with a virtual oscilloscope, a black-box approach during the lab session itself, and a form of quick reporting at the end.
The adjusted laboratory was evaluated using the same methodology and was more successful in the sense that students spent less time gathering measurements and more time processing and analyzing them, resulting in more content-based discussion.

347. Microgravity

    NASA Image and Video Library

    1994-07-10

    TEMPUS, an electromagnetic levitation facility that allows containerless processing of metallic samples in microgravity, first flew on the IML-2 Spacelab mission. The principle of electromagnetic levitation is commonly used in ground-based experiments to melt metallic samples and then cool the melts below their freezing points without solidification occurring. TEMPUS is controlled by its own microprocessor system, although commands may be sent remotely from the ground and real-time adjustments may be made by the crew. Two video cameras, a two-color pyrometer for measuring sample temperatures, and a fast infrared detector for monitoring solidification spikes are mounted on the process chamber to facilitate observation and analysis. In addition, a dedicated high-resolution video camera can be attached to TEMPUS to measure the sample volume precisely.

348. Real-time people counting system using a single video camera

    NASA Astrophysics Data System (ADS)

    Lefloch, Damien; Cheikh, Faouzi A.; Hardeberg, Jon Y.; Gouton, Pierre; Picot-Clemente, Romain

    2008-02-01

    There is growing interest in video-based solutions for people monitoring and counting in business and security applications. Compared to classic sensor-based solutions, video-based ones allow more versatile functionality and improved performance at lower cost. In this paper, we propose a real-time people counting system based on a single low-end, non-calibrated video camera. The two main challenges addressed are robust estimation of the scene background and estimation of the number of real persons in merge-split scenarios. The latter is likely to occur whenever multiple persons move close together, e.g. in shopping centers: automatic segmentation algorithms may treat several persons as a single one due to occlusions or shadows, leading to under-counting. Therefore, to account for noise and for illumination and static-object changes, background subtraction is performed using an adaptive background model (updated over time based on motion information) and automatic thresholding, and the segmentation results are post-processed in the HSV color space to remove shadows. Moving objects are tracked using an adaptive Kalman filter, allowing robust estimation of their future positions even under heavy occlusion. The system is implemented in Matlab and gives encouraging results even at high frame rates. Experimental results based on the PETS2006 datasets are presented at the end of the paper.
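    The background-maintenance idea in this record, a model updated over time with motion-gated learning and an automatic threshold, can be captured in a few lines of numpy. This is a deliberately minimal grayscale sketch; the learning rate and the mean-plus-two-sigma threshold are illustrative choices, and the paper's HSV shadow removal is omitted.

        import numpy as np

        class AdaptiveBackground:
            def __init__(self, first_frame, alpha=0.02):
                self.bg = first_frame.astype(np.float64)
                self.alpha = alpha  # learning rate of the running average

            def segment(self, frame):
                """Return a boolean foreground mask and update the model."""
                f = frame.astype(np.float64)
                diff = np.abs(f - self.bg)
                # Automatic threshold from the difference-image statistics.
                mask = diff > diff.mean() + 2.0 * diff.std()
                # Update the background only where no motion was detected, so
                # foreground objects do not bleed into the model.
                self.bg[~mask] += self.alpha * (f[~mask] - self.bg[~mask])
                return mask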
349. Manual Therapy Practices of Sobadores in North Carolina

    PubMed Central

    Graham, Alan; Sandberg, Joanne C.; Quandt, Sara A.; Mora, Dana C.

    2016-01-01

    Objectives: This analysis describes the manual-therapy elements of sobadores practicing in North Carolina, using video recordings of patient treatment sessions. Design: Three sobadores allowed the video recording of eight patient treatment sessions (one each for two sobadores and six for the third). Each recording was reviewed by an experienced chiropractor, who recorded the frequencies of seven defined manual-therapy elements: (1) treatment time; (2) patient position on the treatment surface; (3) patient body part contacted by the sobador; (4) sobador examination methods; (5) primary treatment processes; (6) sobador body part referencing the patient; and (7) adjunctive treatment processes. Results: The range of treatment times, 9-30 min, was similar to the treatment spectra of conventional massage and manipulative practitioners who combine techniques. The patient positions on the treatment surface were not extraordinary given the wide variety of treatment processes used, and indicated that the sobadores treat patients in multiple positions. The body parts contacted indicated that the sobadores were treating each of the major parts of the musculoskeletal system. Basic palpation dominated the sobadores' examination methods. The primary treatment processes showed significant variety, but rubbing was the dominant practice; the hands were the sobador body area that most often made contact with the patient, and all of the sobadores used lubricants. Conclusions: Sobadores' methods are similar to those of other manual-therapy practitioners. Additional study of video-recorded sobador practices is needed, and video-recorded practice of other traditional and conventional manual therapies would, for comparative analysis, help delineate the specific similarities and differences among the manual therapies. PMID:27400120

350. The production of audiovisual teaching tools in minimally invasive surgery.

    PubMed

    Tolerton, Sarah K; Hugh, Thomas J; Cosman, Peter H

    2012-01-01

    Audiovisual learning resources have become valuable adjuncts to formal teaching in surgical training. This report discusses the process and challenges of preparing an audiovisual teaching tool for laparoscopic cholecystectomy, and addresses the relative value of such resources in surgical education and training, for both the creator and the viewer. The teaching resource was prepared as part of the Master of Surgery program at the University of Sydney, Australia.
The different methods of video production used to create operative teaching tools are discussed. Collating and editing material for an audiovisual teaching resource can be a time-consuming and technically challenging process, but quality learning resources can now be produced even with limited prior video-editing experience. With minimal cost and suitable guidance to ensure clinically relevant content, most surgeons should be able to produce short, high-quality educational videos of both open and minimally invasive surgery. Despite the challenges faced during production, these resources are now relatively easy to create using readily available software, and they are particularly attractive to surgical trainees when real-time operative footage is used. They serve as valuable adjuncts to formal teaching, particularly in the setting of minimally invasive surgery. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

351. Massively parallel neural circuits for stereoscopic color vision: encoding, decoding and identification.

    PubMed

    Lazar, Aurel A; Slutskiy, Yevgeniy B; Zhou, Yiyin

    2015-03-01

    Past work demonstrated how monochromatic visual stimuli could be faithfully encoded and decoded under Nyquist-type rate conditions. Color visual stimuli have traditionally been encoded and decoded in multiple separate monochromatic channels. The brain, however, appears to mix information across color channels at the earliest stages of the visual system, including the retina itself. If information about color is mixed and encoded by a common pool of neurons, how can colors be demixed and perceived? We present Color Video Time Encoding Machines (Color Video TEMs) for encoding color visual stimuli that take a variety of color representations into account within a single neural circuit. We then derive a Color Video Time Decoding Machine (Color Video TDM) algorithm for color demixing and reconstruction of color visual scenes from the spikes produced by a population of visual neurons. In addition, we formulate Color Video Channel Identification Machines (Color Video CIMs) for functionally identifying the color visual processing performed by a spiking neural circuit, and we derive a duality between TDMs and CIMs that unifies the two and leads to a general theory of neural information representation for stereoscopic color vision. We provide examples demonstrating that a massively parallel color visual neural circuit can first be identified with arbitrary precision, and that its spike trains can subsequently be used to reconstruct the encoded stimuli. We argue that evaluation of the functional identification methodology can be performed effectively and intuitively in the stimulus space, where a signal reconstructed from spike trains generated by the identified circuit can be compared to the original stimulus. Copyright © 2014 Elsevier Ltd. All rights reserved.
352. Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-02-01

    Experimental or operational modal analysis traditionally requires physically attached wired or wireless sensors for vibration measurement of structures. This instrumentation can mass-load lightweight structures and is costly and time-consuming to install and maintain on large civil structures, especially for long-term applications (e.g., structural health monitoring) that require significant maintenance of cabling (wired sensors) or periodic replacement of the energy supply (wireless sensors). Moreover, these sensors are typically placed at a limited number of discrete locations, providing a spatial sensing resolution that is hardly sufficient for modal-based damage localization or for model correlation and updating of larger-scale structures. Non-contact methods such as scanning laser vibrometers provide high-resolution sensing without the mass-loading effect, but they make sequential measurements that require considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost and agile and provide simultaneous measurements of high spatial resolution. Combined with vision-based algorithms (e.g., image correlation, optical flow), camera measurements have been used successfully for vibration measurement and subsequent modal analysis, based on techniques such as digital image correlation (DIC) and point tracking; however, these typically require a speckle pattern or high-contrast markers on the surface of the structure, which poses challenges when the measurement area is large or inaccessible. This work explores advanced computer vision and video processing algorithms to develop a novel video-measurement, vision-based, operational (output-only) modal analysis method that alleviates the need for surface preparation and can be implemented in a relatively efficient and autonomous manner with little user supervision and calibration. First, a multi-scale image processing method is applied to the frames of a video of the vibrating structure to extract the local pixel phases that encode local structural vibration, establishing a full-field spatiotemporal motion matrix. Then a high-spatial-dimensional yet low-modal-dimensional, over-complete model is used to represent the extracted motion matrix by modal superposition, and this model is handled by a family of unsupervised learning models and techniques. The proposed method is thus able to blindly extract modal frequencies, damping ratios, and full-field mode shapes (as many points as there are pixels in the video frame) from line-of-sight video measurements of the structure. The method is validated by laboratory experiments on a bench-scale building structure and a cantilever beam. Its ability to identify and visualize weakly excited modes from output (video) measurements only is demonstrated, and several implementation issues are discussed.
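    The decomposition step is the part most easily illustrated in code. The sketch below uses a plain truncated SVD plus an FFT peak-pick as a simplified stand-in for the paper's unsupervised-learning (blind source separation) machinery; it recovers well-separated, lightly damped modes only approximately, and motion_matrix is assumed to hold the per-pixel phase signals.

        import numpy as np

        def blind_modal_id(motion_matrix, fs, n_modes=3):
            """Approximate mode shapes and frequencies from a (pixels x frames) matrix."""
            centered = motion_matrix - motion_matrix.mean(axis=1, keepdims=True)
            # Columns of U approximate full-field mode shapes; rows of Vt
            # approximate the corresponding modal coordinate time histories.
            U, s, Vt = np.linalg.svd(centered, full_matrices=False)
            freqs = []
            for q in Vt[:n_modes]:
                spectrum = np.abs(np.fft.rfft(q))
                freqs.append(np.fft.rfftfreq(q.size, d=1.0 / fs)[spectrum.argmax()])
            return U[:, :n_modes], freqs  # shapes, dominant frequencies in Hz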
353. Bi-telescopic, deep, simultaneous meteor observations

    NASA Technical Reports Server (NTRS)

    Taff, L. G.

    1986-01-01

    A statistical summary is presented of 10 hours of observing sporadic meteors and two meteor showers using the Experimental Test System of the Lincoln Laboratory. The observatory is briefly described along with the real-time and post-processing hardware, the analysis, and the data reduction. The principal observational results are given for the sporadic meteor zenithal hourly rates. The unique properties of the observatory include twin telescopes to allow the discrimination of meteors by parallax, deep limiting magnitude, good time resolution, and sophisticated real-time and post-observing video processing.

354. Visual analysis of trash bin processing on garbage trucks in low resolution video

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Loibner, Gernot

    2015-03-01

    We present a system for trash can detection and counting from a camera which is mounted on a garbage collection truck. A working prototype has been successfully implemented and tested with several hours of real-world video. The detection pipeline consists of HOG detectors for two trash can sizes, plus meanshift tracking and low-level image processing for the analysis of the garbage disposal process. Considering the harsh environment and unfavorable imaging conditions, the process already works well enough that very useful measurements can be extracted from the video data. The false positive/false negative rate of the full processing pipeline is about 5-6% at fully automatic operation. Video data of a full day (about 8 hrs) can be processed in about 30 minutes on a standard PC.
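A hedged sketch of the detection stage using OpenCV's stock HOG+SVM pedestrian detector as a stand-in (the paper trains dedicated HOG detectors for two trash-can sizes and adds meanshift tracking; the video filename and score threshold are illustrative).

```python
import cv2

# Stand-in detector: OpenCV's default people HOG+SVM. A production system
# would load SVM weights trained on trash-can image patches instead.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("truck_camera.mp4")   # hypothetical input file
detections = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h), score in zip(boxes, weights):
        if score > 0.5:                      # confidence gate, tuned per camera
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            detections += 1                  # naive count; tracking would dedupe
    # a tracker (e.g. meanshift) would link boxes across frames here
print("raw detections:", detections)
cap.release()
```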
355. Testing the Effects of the Addition of Videos to a Website Promoting Environmental Breast Cancer Risk Reduction Practices: Are Videos Worth It?

    PubMed Central

    Perrault, Evan K.; Silk, Kami J.

    2013-01-01

    Searching for ways to reach wider audiences in more comprehensible ways, health promotion agencies might add videos to their current web offerings. While potentially costly and time-consuming to create, the effect of these videos on websites has not received much attention. This study translated research about the potential breast cancer risk for young girls associated with the household chemical PFOA into two websites, which mothers with young daughters were randomly assigned to view (website with videos vs. website without videos). Results revealed that participants in the video condition found the advocated risk-protective behaviors easier to perform and stated they were more likely to perform them than those in the non-video condition. Approximately 15 days after exposure, those in the video condition performed on average one more protective behavior than those in the non-video condition. Results also suggest that agencies' efforts should focus on creating one quality video to place on a homepage, as video views declined the deeper people navigated into the site. Behaviors advocated should also be ones that can have lasting impacts with one-time actions, as effects wore away over time. Additional strategies are discussed for health promoters seeking to create videos to add to their current websites. PMID:25143661

356. Teaching social-communication skills to preschoolers with autism: efficacy of video versus in vivo modeling in the classroom.

    PubMed

    Wilson, Kaitlyn P

    2013-08-01

    Video modeling is a time- and cost-efficient intervention that has been proven effective for children with autism spectrum disorder (ASD); however, the comparative efficacy of this intervention has not been examined in the classroom setting. The present study examines the relative efficacy of video modeling compared to the more widely used strategy of in vivo modeling, using an alternating treatments design with baseline and replication across four preschool-aged students with ASD. Results offer insight into the heterogeneous treatment response of students with ASD. Additional data reflecting visual attention and social validity were captured to further describe participants' learning preferences and processes, as well as educators' perceptions of the acceptability of each intervention's procedures in the classroom setting.
357. HEVC optimizations for medical environments

    NASA Astrophysics Data System (ADS)

    Fernández, D. G.; Del Barrio, A. A.; Botella, Guillermo; García, Carlos; Meyer-Baese, Uwe; Meyer-Baese, Anke

    2016-05-01

    HEVC/H.265 is the most interesting and cutting-edge topic in digital video compression, halving the required bandwidth compared with the previous H.264 standard. Telemedicine services, and medical video applications in general, can benefit from these advances in video encoding. However, HEVC is computationally expensive to implement. In this paper, a method for reducing HEVC complexity in the medical environment is proposed. The sequences that are typically processed in this context contain several homogeneous regions. Leveraging these regions, it is possible to simplify the HEVC flow while maintaining high quality. In comparison with the HM16.2 reference software, the encoding time is reduced by up to 75%, with negligible quality loss. Moreover, the algorithm is straightforward to implement on any hardware platform.
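The speed-up described above hinges on not re-partitioning homogeneous regions. Below is a toy sketch of that idea, using luma variance as the homogeneity test on a quadtree of coding units; thresholds are illustrative, and a real encoder bases the split decision on rate-distortion cost rather than variance alone.

```python
import numpy as np

def should_split(cu, var_thresh=60.0, min_size=8):
    """Test sub-partitions only for non-homogeneous blocks: low luma
    variance means the block can keep a large partition, skipping most
    of the recursive search."""
    return cu.shape[0] > min_size and np.var(cu) > var_thresh

def partition(cu, x=0, y=0, out=None):
    """Recursively quadtree-split a square luma block; returns leaf
    blocks as (x, y, size) tuples."""
    if out is None:
        out = []
    n = cu.shape[0]
    if should_split(cu):
        h = n // 2
        for dy in (0, h):
            for dx in (0, h):
                partition(cu[dy:dy + h, dx:dx + h], x + dx, y + dy, out)
    else:
        out.append((x, y, n))
    return out

# 64x64 CTU: flat background with one textured quadrant.
ctu = np.zeros((64, 64))
ctu[:32, :32] = np.random.randn(32, 32) * 20
print(partition(ctu))   # the textured quadrant splits; the rest stays coarse
```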
358. Flight State Information Inference with Application to Helicopter Cockpit Video Data Analysis Using Data Mining Techniques

    NASA Astrophysics Data System (ADS)

    Shin, Sanghyun

    The National Transportation Safety Board (NTSB) has recently emphasized the importance of analyzing flight data as one of the most effective methods to improve the efficiency and safety of helicopter operations. By analyzing flight data with Flight Data Monitoring (FDM) programs, the safety and performance of helicopter operations can be evaluated and improved. In spite of the NTSB's effort, the safety of helicopter operations has not improved at the same rate as the safety of worldwide airlines, and the accident rate of helicopters continues to be much higher than that of fixed-wing aircraft. One of the main reasons is that the participation rates of the rotorcraft industry in FDM programs are low, due to the high cost of the Flight Data Recorder (FDR), the need for a special readout device to decode the FDR, fear of punitive action, etc. Since a video camera is easily installed, accessible, and inexpensively maintained, cockpit video data could complement the FDR where one is present, or possibly replace its role where one is absent. Cockpit video data is composed of image and audio data: image data contain outside views through the cockpit windows and activities on the flight instrument panels, whereas audio data contain the sounds of alarms within the cockpit. The goal of this research is to develop, test, and demonstrate a cockpit video data analysis algorithm, based on data mining and signal processing techniques, that can help better understand situations in the cockpit and the state of a helicopter by efficiently and accurately inferring useful flight information from cockpit video data. Image processing algorithms based on data mining techniques are proposed to estimate a helicopter's attitude (such as the bank and pitch angles), identify indicators on the flight instrument panel, and read the gauges and the numbers in the analogue gauge indicators and digital displays from cockpit image data. In addition, an audio processing algorithm based on signal processing and abrupt-change detection techniques is proposed to identify types of warning alarms and to detect the occurrence times of individual alarms from cockpit audio data. The proposed algorithms are then successfully applied to simulated and real helicopter cockpit video data to demonstrate and validate their performance.
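One building block described above is reading analogue gauges from panel images. Here is a sketch of a needle-angle reader, assuming the dial centre and two angle/value calibration pairs are already known from instrument-panel detection; all names and thresholds are illustrative, not the thesis's actual method.

```python
import cv2
import numpy as np

def read_gauge(gray, center, r_min, val_at, ang_at):
    """Estimate an analogue gauge reading from the needle angle.
    gray: grayscale dial crop; center: (cx, cy) of the dial;
    val_at = (v0, v1) and ang_at = (a0, a1): two calibration pairs
    mapping needle angle (degrees) to indicated value."""
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=r_min, maxLineGap=5)
    if lines is None:
        return None
    cx, cy = center
    # The needle is the detected line whose endpoint sits on the pivot.
    best, best_d = None, np.inf
    for x1, y1, x2, y2 in lines[:, 0]:
        d = min(np.hypot(x1 - cx, y1 - cy), np.hypot(x2 - cx, y2 - cy))
        if d < best_d:
            best, best_d = (x1, y1, x2, y2), d
    x1, y1, x2, y2 = best
    tip = (x1, y1) if np.hypot(x1 - cx, y1 - cy) > np.hypot(x2 - cx, y2 - cy) else (x2, y2)
    ang = np.degrees(np.arctan2(tip[1] - cy, tip[0] - cx))
    (v0, v1), (a0, a1) = val_at, ang_at
    return v0 + (v1 - v0) * (ang - a0) / (a1 - a0)   # linear angle-to-value map
```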
359. Complex Event Processing for Content-Based Text, Image, and Video Retrieval

    DTIC Science & Technology

    2016-06-01

    US Army Research Laboratory technical report ARL-TR-7705, June 2016.

360. Cultural Heritage Reconstruction from Historical Photographs and Videos

    NASA Astrophysics Data System (ADS)

    Condorelli, F.; Rinaudo, F.

    2018-05-01

    Historical archives save invaluable treasures and play a critical role in the conservation of Cultural Heritage. Old photographs and videos, which have survived over time and are stored in these archives, preserve traces of architecture and urban transformation and, in many cases, are the only evidence of buildings that no longer exist. They are a precious source of enormous informative potential for Cultural Heritage documentation. Thanks to photogrammetric techniques, it is possible to extract metric information from these sources that is useful for 3D virtual reconstructions of monuments and historic buildings. This paper explores ways to search for, classify and group historical data by considering their possible use in metric documentation, and aims to provide an overview of the criticality and open issues of the methodologies that could be used to process these data. A practical example is described and presented as a case study. The video "Torino 1928", an old movie dating from the 1930s, was processed to reconstruct the temporary pavilions of the "Exposition" held in Turin in 1928. Despite the initial concerns relating to processing this kind of data, the experimental methodology used in this research made it possible to reach results of acceptable quality.

361. Platform for intraoperative analysis of video streams

    NASA Astrophysics Data System (ADS)

    Clements, Logan; Galloway, Robert L., Jr.

    2004-05-01

    Interactive, image-guided surgery (IIGS) has proven to increase the specificity of a variety of surgical procedures. However, current IIGS systems do not compensate for changes that occur intraoperatively and are not reflected in preoperative tomograms. Endoscopes and intraoperative ultrasound, used in minimally invasive surgery, provide real-time (RT) information in a surgical setting. Combining the information from RT imaging modalities with traditional IIGS techniques will further increase surgical specificity by providing enhanced anatomical information. In order to merge these techniques and obtain quantitative data from RT imaging modalities, a platform was developed to allow both the display and processing of video streams in RT. Using a Bandit-II CV frame grabber board (Coreco Imaging, St. Laurent, Quebec) and the associated library API, a dynamic link library was created in Microsoft Visual C++ 6.0 such that the platform could be incorporated into the IIGS system developed at Vanderbilt University. Performance characterization, using two relatively inexpensive host computers, has shown the platform capable of performing simple image processing operations on frames captured from a CCD camera and displaying the processed video data at near-RT rates, both independently of and while running the IIGS system.
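A minimal stand-in for such a display-and-process platform: a capture loop that applies a simple per-frame operation and reports the sustained rate. The device index and the processing stage are placeholders; the original is a C++ dynamic link library wrapping a frame-grabber API, not Python.

```python
import time
import cv2

cap = cv2.VideoCapture(0)            # assumed camera device index
n, t0 = 0, time.perf_counter()
while n < 300:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)     # example image processing stage
    cv2.imshow("processed", edges)       # display alongside processing
    if cv2.waitKey(1) & 0xFF == 27:      # Esc quits early
        break
    n += 1
fps = n / (time.perf_counter() - t0)
print(f"sustained {fps:.1f} frames/s")   # near-RT means this tracks camera rate
cap.release()
cv2.destroyAllWindows()
```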
362. Detection, location, and quantification of structural damage by neural-net-processed moiré profilometry

    NASA Astrophysics Data System (ADS)

    Grossman, Barry G.; Gonzalez, Frank S.; Blatt, Joel H.; Hooker, Jeffery A.

    1992-03-01

    The development of efficient high-speed techniques to recognize, locate, and quantify damage is vitally important for successful automated inspection systems, such as those used for the inspection of undersea pipelines. Two critical problems must be solved to achieve these goals: reducing the non-useful information present in the video image, and automatically recognizing and quantifying the extent and location of damage. Artificial-neural-network-processed moiré profilometry appears to be a promising technique to accomplish this. Real-time video moiré techniques have been developed which clearly distinguish damaged and undamaged areas on structures, thus reducing the amount of extraneous information input into an inspection system. Artificial neural networks have demonstrated advantages for image processing, since they can learn the desired response to a given input and are inherently fast when implemented in hardware due to their parallel computing architecture. Video moiré images of pipes with dents of different depths were used to train a neural network, with the desired output being the location and severity of the damage. The system was then successfully tested with a second series of moiré images. The techniques employed and the results obtained are discussed.

363. Variability in the skin exposure of machine operators exposed to cutting fluids.

    PubMed

    Wassenius, O; Järvholm, B; Engström, T; Lillienberg, L; Meding, B

    1998-04-01

    This study describes a new technique for measuring skin exposure to cutting fluids and evaluates the variability of skin exposure among machine operators performing cyclic (repetitive) work. The technique is based on video recording and subsequent analysis of the video tape by means of computer-synchronized video equipment. The time intervals during which the machine operator's hand was exposed to fluid were registered, and the total wet time of the skin was calculated by assuming different evaporation times for the fluid. The exposure of 12 operators with different work methods was analyzed in 6 different workshops, which included a range of machine types, from highly automated metal cutting machines (i.e., actual cutting and chip removal machines) requiring operator supervision to conventional metal cutting machines, where the operator was required to maneuver the machine and manually exchange products. The relative wet time varied between 0% and 100%. A significant association between short cycle time and high relative wet time was noted. However, there was no relationship between the degree of automatization of the metal cutting machines and wet time. The study shows that skin exposure to cutting fluids can vary considerably between machine operators involved in manufacturing processes using different types of metal cutting machines. The machine type was not associated with dermal wetness. The technique appears to give objective information about dermal wetness.
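The wet-time computation described above reduces to merging overlapping wet intervals, one per registered fluid contact, each extended by an assumed evaporation time. A small sketch follows; the contact times, evaporation constant, and shift length are illustrative, not values from the study.

```python
def relative_wet_time(contacts, evaporation_s, shift_s):
    """contacts: times (s) at which the hand is wetted by fluid. Each
    contact keeps the skin wet for evaporation_s seconds; overlapping
    wet intervals must be merged, not double-counted."""
    intervals = sorted((t, min(t + evaporation_s, shift_s)) for t in contacts)
    wet, cur_start, cur_end = 0.0, None, None
    for start, end in intervals:
        if cur_end is None or start > cur_end:       # disjoint interval
            if cur_end is not None:
                wet += cur_end - cur_start
            cur_start, cur_end = start, end
        else:                                        # overlap: extend
            cur_end = max(cur_end, end)
    if cur_end is not None:
        wet += cur_end - cur_start
    return wet / shift_s

# Short-cycle work: one contact every 30 s over an hour, 60 s evaporation,
# keeps the skin wet the whole shift.
print(relative_wet_time(range(0, 3600, 30), 60.0, 3600.0))   # -> 1.0
```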
364. An unsupervised video foreground co-localization and segmentation process by incorporating motion cues and frame features

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Zhang, Qian; Zheng, Chi; Qiu, Guoping

    2018-04-01

    Video foreground segmentation is one of the key problems in video processing. In this paper, we propose a novel and fully unsupervised approach for foreground object co-localization and segmentation of unconstrained videos. We first compute both the actual edges and motion boundaries of the video frames, and then align them by their HOG feature maps. Then, by filling the occlusions generated by the aligned edges, we obtain more precise masks of the foreground object. These motion-based masks serve as a motion-based likelihood, and a color-based likelihood is also adopted for the segmentation process. Experimental results show that our approach outperforms most state-of-the-art algorithms.
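A hedged sketch of the motion-boundary/edge combination this record describes, using dense Farneback optical flow as the motion source; the paper's HOG-based alignment and occlusion filling are omitted, and the blending weight is illustrative.

```python
import cv2
import numpy as np

def motion_boundary_mask(prev, cur, edge_weight=0.5):
    """Combine motion boundaries (spatial gradient of dense optical flow)
    with image edges into a rough foreground-boundary likelihood map."""
    g0 = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(cur, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Motion boundary strength: gradient magnitude of each flow component.
    du = np.gradient(flow[..., 0])
    dv = np.gradient(flow[..., 1])
    mb = np.hypot(du[0], du[1]) + np.hypot(dv[0], dv[1])
    mb = mb / (mb.max() + 1e-9)                       # normalize to [0, 1]
    edges = cv2.Canny(g1, 80, 160).astype(np.float32) / 255.0
    return (1 - edge_weight) * mb + edge_weight * edges
```

High values of the returned map mark pixels that are both image edges and motion discontinuities, which is where foreground object boundaries tend to sit.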
365. Development and Pilot Testing of a Video-Assisted Informed Consent Process

    PubMed Central

    Sonne, Susan C.; Andrews, Jeannette O.; Gentilin, Stephanie M.; Oppenheimer, Stephanie; Obeid, Jihad; Brady, Kathleen; Wolf, Sharon; Davis, Randal; Magruder, Kathryn

    2013-01-01

    The informed consent process for research has come under scrutiny, as consent documents are increasingly long and difficult to understand. Innovations are needed to improve comprehension in order to make the consent process truly informed. We report on the development and pilot testing of video clips that could be used during the consent process to better explain research procedures to potential participants. Based on input from researchers and community partners, 15 videos of common research procedures/concepts were produced. The utility of the videos was then tested by embedding them in mock informed consent documents that were presented via an online electronic consent system designed for delivery via iPad. Three mock consents were developed, each containing five videos. All participants (n=61) read both a paper version and the video-assisted iPad version of the same mock consent and were randomized to which format they reviewed first. Participants were given a competency quiz that posed specific questions about the information in the consent after reviewing the first consent document to which they were exposed. Most participants (78.7%) preferred the video-assisted format compared to paper (12.9%). Nearly all (96.7%) reported that the videos improved their understanding of the procedures described in the consent document; however, comprehension of material did not significantly differ by consent format. Results suggest videos may be helpful in providing participants with information about study procedures in a way that is easy to understand. Additional testing of video consents for complex protocols and with subjects of lower literacy is warranted. PMID:23747986
367. Automated Production of Movies on a Cluster of Computers

    NASA Technical Reports Server (NTRS)

    Nail, Jasper; Le, Duong; Nail, William L.; Nail, William

    2008-01-01

    A method of accelerating and facilitating production of video and film motion-picture products, and software and generic designs of computer hardware to implement the method, are undergoing development. The method provides for automation of most of the tedious and repetitive tasks involved in editing and otherwise processing raw digitized imagery into final motion-picture products. The method was conceived to satisfy requirements, in industrial and scientific testing, for rapid processing of multiple streams of simultaneously captured raw video imagery into documentation in the form of edited video imagery and video-derived data products for technical review and analysis. In the production of such video technical documentation, unlike in production of motion-picture products for entertainment, (1) it is often necessary to produce multiple video-derived data products, (2) there are usually no second chances to repeat acquisition of raw imagery, (3) it is often desired to produce final products within minutes rather than hours, days, or months, and (4) consistency and quality, rather than aesthetics, are the primary criteria for judging the products. In the present method, the workflow has both serial and parallel aspects: processing can begin before all the raw imagery has been acquired, each video stream can be subjected to different stages of processing simultaneously on different computers that may be grouped into one or more cluster(s), and the final product may consist of multiple video streams. Results of processing on different computers are shared, so that workers can collaborate effectively.

368. Monitoring system for phreatic eruptions and thermal behavior on Poás volcano hyperacidic lake, with permanent IR and HD cameras

    NASA Astrophysics Data System (ADS)

    Ramirez, C. J.; Mora-Amador, R. A., Sr.; Alpizar Segura, Y.; González, G.

    2015-12-01

    Volcano monitoring has expanded considerably over the past decades. One of the rising techniques involving new technology is digital video surveillance, together with the automated software that comes with it: given the budget and suitable facilities on site, it is now possible to set up a real-time network of high-definition video cameras, some with special capabilities such as infrared, thermal, or ultraviolet imaging. These cameras can ease, or complicate, the analysis of volcanic phenomena such as lava eruptions, phreatic eruptions, plume speed, lava flows, and the opening and closing of vents, to mention only some of their many applications. We present the methodology used to install a real-time system for processing and storing HD and thermal images and video at Poás volcano, including the acquisition and installation of the HD and IR cameras, towers, solar panels, and data-transmission radios on a volcano located in the tropics, as well as which volcanic areas are our targets and why. We also describe the hardware and software we consider necessary to carry out the project. Finally, we show early examples of the data: upwelling areas on the Poás hyperacidic lake and their relation to lake phreatic eruptions, rising temperatures on an old dome wall preceding sudden wall explosions, and the use of IR video to measure plume speed and contour for combination with DOAS or FTIR measurements.
369. Development of a web-based video management and application processing system

    NASA Astrophysics Data System (ADS)

    Chan, Shermann S.; Wu, Yi; Li, Qing; Zhuang, Yueting

    2001-07-01

    How to facilitate efficient video manipulation and access in a web-based environment is becoming a popular trend for video applications. In this paper, we present a web-oriented video management and application processing system, based on our previous work on multimedia databases and content-based retrieval. In particular, we extend the VideoMAP architecture with specific web-oriented mechanisms, which include: (1) concurrency control facilities for the editing of video data among different types of users, such as Video Administrator, Video Producer, Video Editor, and Video Query Client; different users are assigned various priority levels for different operations on the database. (2) A versatile video retrieval mechanism which employs a hybrid approach by integrating a query-based (database) mechanism with content-based retrieval (CBR) functions; its specific language (CAROL/ST with CBR) supports spatio-temporal semantics of video objects, and also offers an improved mechanism to describe the visual content of videos by content-based analysis. (3) A query profiling database which records the 'histories' of various clients' query activities; such profiles can be used to provide the default query template when a similar query is encountered by the same kind of user. An experimental prototype system is being developed based on the existing VideoMAP prototype system, using Java and VC++ on the PC platform.

370. Video surveillance captures student hand hygiene behavior, reactivity to observation, and peer influence in Kenyan primary schools.

    PubMed

    Pickering, Amy J; Blum, Annalise G; Breiman, Robert F; Ram, Pavani K; Davis, Jennifer

    2014-01-01

    In-person structured observation is considered the best approach for measuring hand hygiene behavior, yet it is expensive, time-consuming, and may alter behavior. Video surveillance could be a useful tool for objectively monitoring hand hygiene behavior if validated against current methods. Student hand cleaning behavior was monitored with video surveillance and in-person structured observation, both simultaneously and separately, at four primary schools in urban Kenya over a study period of 8 weeks. Video surveillance and in-person observation captured similar rates of hand cleaning (absolute difference <5%, p = 0.74). Video surveillance documented higher hand cleaning rates (71%) when at least one other person was present at the hand cleaning station, compared to when a student was alone (48%; rate ratio = 1.14 [95% CI 1.01-1.28]). Students increased hand cleaning rates during simultaneous video and in-person monitoring as compared to single-method monitoring, suggesting reactivity to each method of monitoring. This trend was documented at schools receiving a handwashing-with-soap intervention, but not at schools receiving a sanitizer intervention. Video surveillance of hand hygiene behavior yields results comparable to in-person observation among schools in a resource-constrained setting. Video surveillance also has certain advantages over in-person observation, including rapid data processing and the capability to capture new behavioral insights. Peer influence can significantly improve student hand cleaning behavior and, when possible, should be exploited in the design and implementation of school hand hygiene programs.
371. Design and develop a video conferencing framework for real-time telemedicine applications using secure group-based communication architecture.

    PubMed

    Mat Kiah, M L; Al-Bakri, S H; Zaidan, A A; Zaidan, B B; Hussain, Muzammil

    2014-10-01

    One of the applications of modern technology in telemedicine is video conferencing. An alternative to traveling to attend a conference or meeting, video conferencing is becoming increasingly popular among hospitals. By using this technology, doctors can help patients who are unable to physically visit hospitals. Video conferencing particularly benefits patients from rural areas, where good doctors are not always available. Telemedicine has proven to be a blessing to patients who have no access to the best treatment. A telemedicine system consists of customized hardware and software at two locations, namely, at the patient's and the doctor's end. In such cases, the video streams of the conferencing parties may contain highly sensitive information. Thus, real-time data security is one of the most important requirements when designing video conferencing systems. This study proposes a secure framework for video conferencing systems and a complete management solution for secure video conferencing groups. Java Media Framework Application Programming Interface classes are used to design and test the proposed secure framework. Real-time Transport Protocol over User Datagram Protocol is used to transmit the encrypted audio and video streams, and RSA and AES algorithms are used to provide the required security services. Results show that the encryption algorithm insignificantly increases the video conferencing computation time.
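A minimal sketch of the hybrid scheme this record describes: RSA-OAEP wraps an AES session key once, and AES-GCM protects each media packet. It uses the Python `cryptography` package rather than the paper's Java Media Framework, and `rtp_payload` is a stand-in for a real RTP packet.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Receiver's RSA key pair (exchanged out of band in a real deployment).
priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pub = priv.public_key()

# Sender: wrap a fresh AES session key with RSA-OAEP...
session_key = AESGCM.generate_key(bit_length=128)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = pub.encrypt(session_key, oaep)

# ...then encrypt each media packet with AES-GCM (fast, fresh nonce per packet).
aes = AESGCM(session_key)
rtp_payload = b"\x80\x60" + os.urandom(160)   # stand-in for one RTP packet
nonce = os.urandom(12)
ciphertext = aes.encrypt(nonce, rtp_payload, None)

# Receiver: unwrap the session key once, then decrypt packets as they arrive.
key = priv.decrypt(wrapped, oaep)
assert AESGCM(key).decrypt(nonce, ciphertext, None) == rtp_payload
```

The asymmetric operation happens only at session setup, which is why the per-packet overhead stays small enough for real-time streams.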
372. Video segmentation and camera motion characterization using compressed data

    NASA Astrophysics Data System (ADS)

    Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain

    1997-10-01

    We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detecting scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without need for video decompression. Experimental results are reported for a database of news video clips.
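For the second task, a least-squares fit of a pan/tilt/zoom model to the block motion vectors might look as follows. This is a simplified three-parameter model; the paper's exact parameterization may differ.

```python
import numpy as np

def fit_pan_tilt_zoom(x, y, u, v):
    """Least-squares fit of a global motion model to block motion vectors:
        u = pan  + zoom * x
        v = tilt + zoom * y
    with (x, y) macroblock centres relative to the image centre and
    (u, v) the MPEG motion vectors. Returns (pan, tilt, zoom)."""
    n = len(x)
    A = np.zeros((2 * n, 3))
    A[:n, 0] = 1; A[:n, 2] = x        # rows for the u-equations
    A[n:, 1] = 1; A[n:, 2] = y        # rows for the v-equations
    b = np.concatenate([u, v])
    (pan, tilt, zoom), *_ = np.linalg.lstsq(A, b, rcond=None)
    return pan, tilt, zoom

# Synthetic check: a slight rightward pan combined with a zoom-in.
xs, ys = np.meshgrid(np.arange(-160, 161, 16), np.arange(-120, 121, 16))
x, y = xs.ravel(), ys.ravel()
u, v = 2.0 + 0.01 * x, 0.01 * y
print(fit_pan_tilt_zoom(x, y, u, v))   # ~ (2.0, 0.0, 0.01)
```

Large residuals against this model at a given frame are also a useful cue for the first task, since scene cuts break the coherence of the motion-vector field.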
373. Live lecture versus video-recorded lecture: are students voting with their feet?

    PubMed

    Cardall, Scott; Krupat, Edward; Ulrich, Michael

    2008-12-01

    In light of educators' concerns that lecture attendance in medical school has declined, the authors sought to assess students' perceptions, evaluations, and motivations concerning live lectures compared with accelerated, video-recorded lectures viewed online. The authors performed a cross-sectional survey study of all first- and second-year students at Harvard Medical School. Respondents answered questions regarding their lecture attendance; use of class and personal time; use of accelerated, video-recorded lectures; and reasons for viewing video-recorded and live lectures. Other questions asked students to compare how well live and video-recorded lectures satisfied learning goals. Of the 353 students who received questionnaires, 204 (58%) returned responses. Collectively, students indicated watching 57.2% of lectures live, 29.4% recorded, and 3.8% using both methods. All students have watched recorded lectures, and most (88.5%) have used video-accelerating technologies. When using accelerated, video-recorded lectures as opposed to attending lectures, students felt they were more likely to increase their speed of knowledge acquisition (79.3% of students), look up additional information (67.7%), stay focused (64.8%), and learn more (63.7%). Live attendance remains the predominant method for viewing lectures. However, students find accelerated, video-recorded lectures equally or more valuable. Although educators may be uncomfortable with the fundamental change in the learning process represented by video-recorded lecture use, students' responses indicate that their decisions to attend lectures or view recorded lectures are motivated primarily by a desire to satisfy their professional goals. A challenge remains for educators to incorporate technologies students find useful while creating an interactive learning culture.

374. Automated UAV-based mapping for airborne reconnaissance and video exploitation

    NASA Astrophysics Data System (ADS)

    Se, Stephen; Firoozfam, Pezhman; Goldstein, Norman; Wu, Linda; Dutkiewicz, Melanie; Pace, Paul; Naud, J. L. Pierre

    2009-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for force protection, situational awareness, mission planning, damage assessment and others. UAVs gather huge amounts of video data, but it is extremely labour-intensive for operators to analyse hours and hours of received data. At MDA, we have developed a suite of tools towards automated video exploitation, including calibration, visualization, change detection and 3D reconstruction. The ongoing work is to improve the robustness of these tools and automate the process as much as possible. Our calibration tool extracts and matches tie-points in the video frames incrementally to recover the camera calibration and poses, which are then refined by bundle adjustment. Our visualization tool stabilizes the video, expands its field-of-view and creates a geo-referenced mosaic from the video frames. It is important to identify anomalies in a scene, which may include detecting any improvised explosive devices (IEDs); however, it is tedious and difficult to compare video clips to look for differences manually. Our change detection tool allows the user to load two video clips taken from two passes at different times and flags any changes between them. 3D models are useful for situational awareness, as it is easier to understand the scene by visualizing it in 3D. Our 3D reconstruction tool creates calibrated photo-realistic 3D models from video clips taken from different viewpoints, using both semi-automated and automated approaches. The resulting 3D models also allow distance measurements and line-of-sight analysis.
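A sketch of the tie-point ingredient underlying the calibration and mosaicking tools described above: ORB features matched between frames and a RANSAC homography. This is a stand-in for the paper's incremental tie-point tracker and bundle adjustment, and the parameter values are illustrative.

```python
import cv2
import numpy as np

def pairwise_homography(img_a, img_b, min_matches=20):
    """Estimate the homography mapping img_b into img_a's frame from
    matched ORB tie-points; returns None if too few matches survive."""
    orb = cv2.ORB_create(2000)
    ka, da = orb.detectAndCompute(img_a, None)
    kb, db = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(db, da), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None
    src = np.float32([kb[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([ka[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # reject outliers
    return H

# Chaining these homographies over consecutive frames registers every frame
# to the first one; cv2.warpPerspective then paints each frame into a
# common canvas to build the expanded field-of-view mosaic.
```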
375. Dynamic Textures Modeling via Joint Video Dictionary Learning.

    PubMed

    Wei, Xian; Li, Yuanxiang; Shen, Hao; Chen, Fang; Kleinsteuber, Martin; Wang, Zhongfeng

    2017-04-06

    Video representation is an important and challenging task in the computer vision community. In this paper, we consider the problem of modeling and classifying video sequences of dynamic scenes which can be modeled in a dynamic textures (DT) framework. First, we assume that image frames of a moving scene can be modeled as a Markov random process. We propose a sparse coding framework, named joint video dictionary learning (JVDL), to model a video adaptively. By treating the sparse coefficients of image frames over a learned dictionary as the underlying "states", we learn an efficient and robust linear transition matrix between two adjacent frames of sparse events in time series. Hence, a dynamic scene sequence is represented by an appropriate transition matrix associated with a dictionary. In order to ensure the stability of JVDL, we impose several constraints on such transition matrix and dictionary. The developed framework is able to capture the dynamics of a moving scene by exploring both sparse properties and the temporal correlations of consecutive video frames. Moreover, such learned JVDL parameters can be used for various DT applications, such as DT synthesis and recognition. Experimental results demonstrate the strong competitiveness of the proposed JVDL approach in comparison with state-of-the-art video representation methods. Especially, it performs significantly better in dealing with DT synthesis and recognition on heavily corrupted data.
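A simplified reading of the JVDL recipe in code: learn a dictionary, sparse-code the frames, then fit a linear transition matrix between adjacent sparse states. The paper learns the two jointly and imposes stability constraints on the transition matrix, both of which are omitted here; all parameters are illustrative.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def fit_dynamic_texture(frames, n_atoms=32):
    """frames: (T, d) flattened video frames. Returns a dictionary D
    (n_atoms, d) and a transition matrix A between sparse states."""
    dl = DictionaryLearning(n_components=n_atoms,
                            transform_algorithm="lasso_lars",
                            transform_alpha=0.1, max_iter=20)
    S = dl.fit_transform(frames)            # (T, n_atoms) sparse coefficients
    S0, S1 = S[:-1].T, S[1:].T              # states at t and t+1
    A = S1 @ np.linalg.pinv(S0)             # least-squares transition matrix
    return dl.components_, A

def synthesize(D, A, s0, steps):
    """Roll the learned dynamics forward to synthesize new frames."""
    s, out = s0, []
    for _ in range(steps):
        s = A @ s                           # advance the hidden state
        out.append(s @ D)                   # decode through the dictionary
    return np.array(out)
```

Without the paper's stability constraints, the spectral radius of A can exceed 1, so long synthesized sequences may diverge; that is precisely what the JVDL constraints guard against.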
376. Reduction of capsule endoscopy reading times by unsupervised image mining.

    PubMed

    Iakovidis, D K; Tsevas, S; Polydorou, A

    2010-09-01

    The screening of the small intestine has become painless and easy with wireless capsule endoscopy (WCE), a revolutionary, relatively non-invasive imaging technique performed by a wireless swallowable endoscopic capsule transmitting thousands of video frames per examination. The average time required for the visual inspection of a full 8-h WCE video ranges from 45 to 120 min, depending on the experience of the examiner. In this paper, we propose a novel approach to WCE reading-time reduction by unsupervised mining of video frames. The proposed methodology is based on a data reduction algorithm which is applied according to a novel scheme for the extraction of representative video frames from a full-length WCE video. It can be used either as a video summarization or as a video bookmarking tool, providing the comparative advantage of being general, unbounded by the finiteness of a training set. The number of frames extracted is controlled by a parameter that can be tuned automatically. Comprehensive experiments on real WCE videos indicate that a significant reduction in reading times is feasible. In the case of the WCE videos used, this reduction reached 85% without any loss of abnormalities.

377. Digital Signal Processing For Low Bit Rate TV Image Codecs

    NASA Astrophysics Data System (ADS)

    Rao, K. R.

    1987-06-01

    In view of the 56 KBPS digital switched network services and the ISDN, low-bit-rate codecs providing real-time full-motion color video are at various stages of development. Some companies have already brought such codecs to market; they are being used by industry and some Federal agencies for video teleconferencing. In general, these codecs have various features such as multiplexing of audio and data, high-resolution graphics, encryption, error detection and correction, self-diagnostics, freeze-frame, split video, text overlay, etc. To transmit the original color video on a 56 KBPS network requires a bit-rate reduction of the order of 1400:1. Such large-scale bandwidth compression can be realized only by implementing a number of sophisticated digital signal processing techniques. This paper provides an overview of such techniques and outlines the newer concepts that are being investigated. Before resorting to the data compression techniques, various preprocessing operations such as noise filtering, composite-component transformation, and horizontal and vertical blanking interval removal are to be implemented. Invariably, spatio-temporal subsampling is achieved by appropriate filtering. Transform and/or prediction coupled with motion estimation and strengthened by adaptive features are some of the tools in the arsenal of the data reduction methods. Other essential blocks in the system are the quantizer, bit allocation, buffer, multiplexer, channel coding, etc.

378. Analysis of Soot Propensity in Combustion Processes Using Optical Sensors and Video Magnification

    PubMed Central

    Fuentes, Andrés; Reszka, Pedro; Carvajal, Gonzalo

    2018-01-01

    Industrial combustion processes are an important source of particulate matter, causing significant pollution problems that affect human health, and are a major contributor to global warming. The most common method for analyzing the soot emission propensity in flames is the Smoke Point Height (SPH) analysis, which relates the fuel flow rate to a critical flame height at which soot particles begin to leave the reactive zone through the tip of the flame; the SPH is marked by morphological changes at the flame tip. SPH analysis is normally done through flame observations with the naked eye, leading to high bias. Other techniques, such as Line Of Sight Attenuation (LOSA), which obtains soot volume fractions within the flame from the attenuation of a laser beam, are more accurate but not practical to implement in industrial settings. We propose the use of Video Magnification techniques to detect the flame morphological changes and thus determine the SPH while minimizing observation bias. We have applied Eulerian Video Magnification (EVM) and Phase-based Video Magnification (PVM) to an ethylene laminar diffusion flame for the first time. The results were compared with LOSA measurements, and indicate that EVM is the most accurate method for SPH determination. PMID:29751625
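The core of the EVM step used above is a temporal bandpass filter applied independently at every pixel, with the filtered signal amplified and added back. A sketch without the spatial pyramid the full method applies first; the band edges and gain are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def eulerian_magnify(video, fs, f_lo, f_hi, alpha=20.0):
    """video: (T, H, W) intensity array, fs: frame rate in Hz.
    Temporally bandpass every pixel around the motion band of interest
    and amplify the result, making subtle flicker visible."""
    b, a = butter(2, [f_lo / (fs / 2), f_hi / (fs / 2)], btype="band")
    bandpassed = filtfilt(b, a, video, axis=0)   # zero-phase, along time
    return video + alpha * bandpassed

# Flame-tip flicker sits at a few Hz, so magnifying e.g. the 2-15 Hz band
# of a 60 fps recording would emphasize the morphological changes that
# mark the smoke point:
# out = eulerian_magnify(frames, fs=60, f_lo=2.0, f_hi=15.0)
```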
379. Bring Your Next Film or Videotape in on Time--And within Budget.

    ERIC Educational Resources Information Center

    Hampe, Barry

    1980-01-01

    Seventeen steps are presented for the successful production of training films and video tapes. The steps include concept, script preparation, budget, filming and recording, laboratory processing, editing, titles and narration, sound mix, corrections, manufacture of prints, and distribution. (CT)

380. The cerebellum predicts the temporal consequences of observed motor acts.

    PubMed

    Avanzino, Laura; Bove, Marco; Pelosin, Elisa; Ogliastro, Carla; Lagravinese, Giovanna; Martino, Davide

    2015-01-01

    It is increasingly clear that we extract patterns of temporal regularity between events to optimize information processing. The ability to extract temporal patterns and regularity of events is referred to as temporal expectation. Temporal expectation activates the same cerebral network usually engaged in action selection, comprising the cerebellum. However, it is unclear whether the cerebellum is directly involved in temporal expectation when timing information is processed to make predictions on the outcome of a motor act. Healthy volunteers received one session of either active (inhibitory, 1 Hz) or sham repetitive transcranial magnetic stimulation covering the right lateral cerebellum prior to the execution of a temporal expectation task. Subjects were asked to predict the end of a visually perceived human body motion (right-hand handwriting) and of an inanimate object motion (a moving circle reaching a target). Videos representing the movements were shown in full; the actual tasks consisted of watching the same videos, but interrupted after a variable interval from onset by a dark interval of variable duration. During the dark interval, subjects were asked to indicate when the movement represented in the video reached its end by clicking on the spacebar of the keyboard. Performance on the timing task was analyzed by measuring the absolute value of the timing error, the coefficient of variability, and the percentage of anticipation responses. The active group exhibited greater absolute timing error compared with the sham group only in the human body motion task. Our findings suggest that the cerebellum is engaged in cognitive and perceptual domains that are strictly connected to motor control.
381. The relationship between violent video games, acculturation, and aggression among Latino adolescents.

    PubMed

    Escobar-Chaves, S Liliana; Kelder, Steve; Orpinas, Pamela

    2002-12-01

    Multiple factors are involved in the occurrence of aggressive behavior. The purpose of this study was to evaluate the hypotheses that Latino middle school children exposed to higher levels of video game playing will exhibit a higher level of aggression and fighting compared to children exposed to lower levels, and that more acculturated middle school Latino children will play more video games and will prefer more violent video games compared to less acculturated middle school Latino children. This study involved 5,831 students attending eight public schools in Texas. A linear relationship was observed between the time spent playing video games and aggression scores. Higher aggression scores were significantly associated with heavier video playing for boys and girls (p < 0.0001). The more students played video games, the more they fought at school (p < 0.0001). As Latino middle school students were more acculturated, their preference for violent video game playing increased, as did the amount of time they played video games. Students who reported speaking more Spanish at home and with their friends were less likely to spend large amounts of time playing video games and less likely to prefer violent video games (p < 0.05).
382. Student-Directed Video Validation of Psychomotor Skills Performance: A Strategy to Facilitate Deliberate Practice, Peer Review, and Team Skill Sets.

    PubMed

    DeBourgh, Gregory A; Prion, Susan K

    2017-03-22

    Background: Essential nursing skills for safe practice are not limited to technical skills, but include abilities for determining salience among clinical data within dynamic practice environments, demonstrating clinical judgment and reasoning, problem-solving abilities, and teamwork competence. Effective instructional methods are needed to prepare new nurses for entry to practice in contemporary healthcare settings. Method: This mixed-methods descriptive study explored self-reported perceptions of a process to self-record videos for psychomotor skill performance evaluation in a convenience sample of 102 pre-licensure students. Results: Students reported gains in confidence and skill acquisition using team skills to record individual videos of skill performance, and described the importance of teamwork, peer support, and deliberate practice. Conclusion: Although time-consuming, the production of student-directed video validations of psychomotor skill performance is an authentic task with meaningful accountabilities that is well received by students as an effective, satisfying learner experience to increase confidence and competence in performing psychomotor skills.

383. Film grain noise modeling in advanced video coding

    NASA Astrophysics Data System (ADS)

    Oh, Byung Tae; Kuo, C.-C. Jay; Sun, Shijun; Lei, Shawmin

    2007-01-01

    A new technique for film grain noise extraction, modeling and synthesis is proposed and applied to the coding of high definition video in this work. Film grain noise is viewed as part of the artistic presentation by people in the movie industry. On one hand, since film grain noise can boost the natural appearance of pictures in high definition video, it should be preserved in high-fidelity video processing systems. On the other hand, video coding with film grain noise is expensive. It is desirable to extract film grain noise from the input video as a pre-processing step at the encoder, and to re-synthesize the film grain noise and add it back to the decoded video as a post-processing step at the decoder. Under this framework, the coding gain of the denoised video is higher while the quality of the final reconstructed video can still be well preserved. Following this idea, we present a method to remove film grain noise from image/video without distorting its original content. In addition, we describe a parametric model containing a small set of parameters to represent the extracted film grain noise. The proposed model generates film grain noise that is close to the real one in terms of power spectral density and cross-channel spectral correlation. Experimental results are shown to demonstrate the efficiency of the proposed scheme.
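A hedged sketch of the extract-and-resynthesize idea in this record: estimate grain as a denoising residual, then shape white noise with the grain's power spectral density. A Gaussian blur stands in for the paper's edge-aware denoiser, and only one channel is modeled (the paper also models cross-channel correlation).

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from numpy.fft import fft2, ifft2

def extract_grain(frame, sigma=1.5):
    """Crude grain estimate: the difference between a frame and a
    smoothed version of itself. Edge-aware denoising would keep image
    structure from leaking into this residual."""
    return frame - gaussian_filter(frame, sigma)

def synthesize_grain(grain, shape, rng=None):
    """Shape white noise with the extracted grain's average power
    spectral density so the synthetic grain matches its spatial
    correlation, then rescale to the original grain energy."""
    rng = rng or np.random.default_rng(0)
    psd = np.abs(fft2(grain, s=shape)) ** 2
    white = fft2(rng.standard_normal(shape))
    shaped = np.real(ifft2(white * np.sqrt(psd)))
    return shaped * (grain.std() / (shaped.std() + 1e-9))
```

At the decoder, adding `synthesize_grain(...)` back onto each denoised frame restores the film look without ever transmitting the noise itself.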
Besides, we describe a parametric model containing a small set of parameters to represent the extracted film grain noise. The proposed model generates the film grain noise that is close to the real one in terms of power spectral density and cross-channel spectral correlation. Experimental results are shown to demonstrate the efficiency of the proposed scheme.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=addiction+AND+video+AND+games&pg=3&id=ED234765','ERIC'); return false;" href="https://eric.ed.gov/?q=addiction+AND+video+AND+games&pg=3&id=ED234765"><span>Video Games: Competing with Machines.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Hanson, Jarice</p> <p></p> <p>This study was designed to compare the attitudinal and lifestyle patterns of video game players with the amount of time they play, the number of games they play, and the types of video games they play, to determine whether their personal use of time and attitude toward leisure is different when playing video games. Subjects were 200 individuals…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.afdc.energy.gov/case/1725','SCIGOVWS'); return false;" href="https://www.afdc.energy.gov/case/1725"><span>Alternative Fuels Data Center: Schwan's Home Service Delivers With</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.science.gov/aboutsearch.html">Science.gov Websites</a></p> <p></p> <p></p> <p>distribute products across the United States. For information about this project, contact Twin <em>Cities</em> Clean <em>Cities</em> Coalition. Download QuickTime Video QuickTime (.mov) Download Windows Media Video Windows Media (.wmv) Video Download Help Text version See more videos provided by Clean <em>Cities</em> TV and FuelEconomy.gov</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/279264-texas-thermal-interface-real-time-computer-interface-inframetrics-infrared-camera','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/279264-texas-thermal-interface-real-time-computer-interface-inframetrics-infrared-camera"><span>The Texas Thermal Interface: A real-time computer interface for an Inframetrics infrared camera</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Storek, D.J.; Gentle, K.W.</p> <p>1996-03-01</p> <p>The Texas Thermal Interface (TTI) offers an advantageous alternative to the conventional video path for computer analysis of infrared images from Inframetrics cameras. The TTI provides real-time computer data acquisition of 48 consecutive fields (version described here) with 8-bit pixels. The alternative requires time-consuming individual frame grabs from video tape with frequent loss of resolution in the D/A/D conversion. Within seconds after the event, the TTI temperature files may be viewed and processed to infer heat fluxes or other quantities as needed. The system cost is far less than commercial units which offer less capability. The system was developed formore » and is being used to measure heat fluxes to the plasma-facing components in a tokamak. 
  387. Interactive brain shift compensation using GPU based programming

    NASA Astrophysics Data System (ADS)

    van der Steen, Sander; Noordmans, Herke Jan; Verdaasdonk, Rudolf

    2009-02-01

    Processing large image files or real-time video streams requires intense computational power. Driven by the gaming industry, the processing power of graphics processing units (GPUs) has increased significantly. With pixel shader model 4.0, the GPU can be used for image processing about 10x faster than the CPU. Dedicated software was developed to deform 3D MR and CT image sets for real-time brain shift correction during navigated neurosurgery, using landmarks or cortical surface traces defined by the navigation pointer. Feedback was given using orthogonal slices and an interactively raytraced 3D brain image. GPU based programming enables real-time processing of high-definition image datasets, and various applications can be developed in medicine, optics and image sciences.

  388. Assessment of the use and feasibility of video to supplement the genetic counseling process: a cancer genetic counseling perspective.

    PubMed

    Axilbund, J E; Hamby, L A; Thompson, D B; Olsen, S J; Griffin, C A

    2005-06-01

    Cancer genetic counselors use a variety of teaching modalities for patient education. This survey of cancer genetic counselors assessed their use of educational videos and their recommendations for the content of future videos. Thirty percent of respondents use videos for patient education. Cited benefits included reinforcement of information for clients and increased counselor efficiency. Of the 70% who do not use videos, predominant barriers included the perceived lack of an appropriate video, lack of space and/or equipment, and concern that videos are impersonal. Most respondents desired a video representative of the genetic counseling session, but emphasized the importance of using broad information. Content considered critical included the pros and cons of genetic testing, the associated psychosocial implications, and genetic discrimination. The results of this exploratory study provide data relevant to the development of a cancer genetics video for patient education, and suggestions are made based on aspects of information-processing and communication theories.
  389. Rapid prototyping of SoC-based real-time vision system: application to image preprocessing and face detection

    NASA Astrophysics Data System (ADS)

    Jridi, Maher; Alfalou, Ayman

    2017-05-01

    The major goal of this paper is to investigate the multi-CPU/FPGA SoC (System on Chip) design flow and to transfer know-how and skills for rapidly designing embedded real-time vision systems. Our aim is to show how the use of these devices can benefit system-level integration, since they make simultaneous hardware and software development possible. We take face detection and image pre-processing as the case study, since they have great potential to be used in several applications such as video surveillance, building access control and criminal identification. The designed system uses the Xilinx Zedboard platform, which is the central element of the developed vision system. Video acquisition is performed using either a standard webcam connected to the Zedboard via USB or several IP camera devices. Visualization of the video content and intermediate results is possible through an HDMI interface connected to an HD display. The treatments embedded in the system are as follows: (i) pre-processing, such as edge detection, implemented both in the ARM core and in the reconfigurable logic; (ii) software implementation of motion detection and face detection using either Viola-Jones or LBP (Local Binary Pattern); and (iii) an application layer to select the processing application and to display results in a web page. One uniquely interesting feature of the proposed system is that two functions have been developed to transmit data from and to the VDMA port. With the proposed optimization, the hardware implementation of the Sobel filter takes 27 ms and 76 ms for 640x480 and 720p resolutions, respectively. Hence, with the FPGA implementation, an acceleration of 5 times is obtained, which allows the processing of 37 fps and 13 fps for 640x480 and 720p resolutions, respectively.
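
As a point of reference for the edge-detection stage that the paper accelerates in reconfigurable logic, here is a minimal software Sobel filter in Python. This is our own illustration of the standard operator; the paper's HDL and ARM implementations are not shown in the abstract.

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 Sobel kernels for horizontal and vertical gradients
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
KY = KX.T

def sobel_magnitude(gray):
    """Gradient magnitude of a grayscale image; this per-pixel convolution
    is the work the paper moves into the Zedboard's reconfigurable logic."""
    gx = convolve(gray.astype(np.float64), KX)
    gy = convolve(gray.astype(np.float64), KY)
    return np.hypot(gx, gy)

edges = sobel_magnitude(np.random.rand(480, 640))  # toy 640x480 frame
```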
  390. Issues and advances in research methods on video games and cognitive abilities

    PubMed Central

    Sobczyk, Bart; Dobrowolski, Paweł; Skorko, Maciek; Michalak, Jakub; Brzezicka, Aneta

    2015-01-01

    The impact of video game playing on cognitive abilities has been the focus of numerous studies over the last 10 years. Some cross-sectional comparisons indicate cognitive advantages of video game players (VGPs) over non-players (NVGPs) and benefits of video game training, while others fail to replicate these findings. Though there is an ongoing discussion over methodological practices and their impact on observable effects, some elementary issues, such as the representativeness of recruited VGP groups and the lack of genre differentiation, have not yet been widely addressed. In this article we present objective and declarative gameplay time data gathered from large samples in order to illustrate how playtime is distributed over VGP populations. The implications of these data are then discussed in the context of previous studies in the field. We also argue in favor of differentiating video games by genre when recruiting study samples, as this form of classification reflects the core mechanics they utilize and therefore provides some insight into which cognitive functions are likely to be engaged most. Additionally, we present the Covert Video Game Experience Questionnaire as an example of how this sort of classification can be applied during the recruitment process. PMID:26483717

  391. Bring It to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis.

    PubMed

    Stein, Manuel; Janetzko, Halldor; Lamprecht, Andreas; Breitkreutz, Thorsten; Zimmermann, Philipp; Goldlucke, Bastian; Schreck, Tobias; Andrienko, Gennady; Grossniklaus, Michael; Keim, Daniel A

    2018-01-01

    Analysts in professional team sport regularly perform analyses to gain strategic and tactical insights into player and team behavior. Goals of team sport analysis regularly include identifying weaknesses of opposing teams or assessing the performance and improvement potential of a coached team. Current analysis workflows are typically based on the analysis of team videos. Analysts can also rely on techniques from information visualization to depict, e.g., player or ball trajectories. However, video analysis is typically a time-consuming process in which the analyst needs to memorize and annotate scenes. In contrast, visualization typically relies on an abstract data model, often using abstract visual mappings, and is no longer directly linked to the observed movement context. We propose a visual analytics system that tightly integrates team sport video recordings with abstract visualizations of the underlying trajectory data. We apply appropriate computer vision techniques to extract trajectory data from video input. Furthermore, we apply advanced trajectory and movement analysis techniques to derive relevant team sport analytic measures for region, event and player analysis, in the case of soccer analysis. Our system seamlessly integrates the video and visualization modalities, enabling analysts to draw on the advantages of both analysis forms. Several expert studies conducted with team sport analysts indicate the effectiveness of our integrated approach.
  392. Digital Video (DV): A Primer for Developing an Enterprise Video Strategy

    NASA Astrophysics Data System (ADS)

    Talovich, Thomas L.

    2002-09-01

    The purpose of this thesis is to provide an overview of digital video production and delivery. The thesis presents independent research demonstrating the educational value of incorporating video and multimedia content in training and education programs. It explains the fundamental concepts associated with the process of planning, preparing, and publishing video content, and assists in the development of follow-on strategies for incorporating video content into distance training and education programs. The thesis provides an overview of the following technologies: digital video, digital video editors, video compression, streaming video, and optical storage media.

  393. Portable Airborne Laser System Measures Forest-Canopy Height

    NASA Technical Reports Server (NTRS)

    Nelson, Ross

    2005-01-01

    The Portable Airborne Laser System (PALS) is a combination of laser ranging, video imaging, positioning, and data-processing subsystems designed for measuring the heights of forest canopies along linear transects from tens to thousands of kilometers long. Unlike prior laser ranging systems designed to serve the same purpose, the PALS is not restricted to use aboard a single aircraft of a specific type: the PALS fits into two large suitcases that can be carried to any convenient location, and it can be installed in almost any local aircraft for hire, thereby making it possible to sample remote forests at relatively low cost. The initial cost and the cost of repairing the PALS are also lower because the PALS hardware consists mostly of commercial off-the-shelf (COTS) units that can easily be replaced in the field. The COTS units include a laser ranging transceiver, a charge-coupled-device (CCD) camera that images the laser-illuminated targets, a differential Global Positioning System (dGPS) receiver capable of operation within the Wide Area Augmentation System, a video titler, a video cassette recorder (VCR), and a laptop computer equipped with two serial ports. The VCR and computer are powered by batteries; the other units are powered at 12 VDC from the 28-VDC aircraft power system via a low-pass filter and a voltage converter. The dGPS receiver feeds location and time data, at an update rate of 0.5 Hz, to the video titler and the computer. The laser ranging transceiver, operating at a sampling rate of 2 kHz, feeds its serial range and amplitude data stream to the computer. The analog video signal from the CCD camera is fed into the video titler, wherein the signal is annotated with position and time information. The titler then forwards the annotated signal to the VCR for recording on 8-mm tapes. The dGPS and laser range and amplitude serial data streams are processed by software that displays the laser trace and the dGPS information as they are fed into the computer, subsamples the laser range and amplitude data, interleaves the subsampled data with the dGPS information, and records the resulting interleaved data stream.
  394. Video-game based exercises for older people with chronic low back pain: a protocol for a feasibility randomised controlled trial (the GAMEBACK trial).

    PubMed

    Zadro, Joshua Robert; Shirley, Debra; Simic, Milena; Mousavi, Seyed Javad; Ceprnja, Dragana; Maka, Katherine; Ferreira, Paulo

    2017-06-01

    To investigate the feasibility of implementing a video-game exercise programme for older people with chronic low back pain (LBP). Design: single-centre, single-blinded randomised controlled trial (RCT). Setting: physiotherapy outpatient department in a public hospital in Western Sydney, Australia. We will recruit 60 participants over 55 years old with chronic LBP from the waiting list. Participants will be randomised to receive video-game exercise (n=30) or to remain on the waiting list (n=30) for 8 weeks, with follow-up at 3 and 6 months. Participants engaging in video-game exercise will be unsupervised and will complete video-game exercise for 60 minutes, 3 times per week. Participants allocated to remain on the waiting list will be encouraged to maintain their usual levels of physical activity. The primary outcomes for this feasibility study will be study processes (recruitment and response rates, adherence to and experience with the intervention, and incidence of adverse events) relevant to the future design of a large RCT. Estimates of treatment efficacy (point estimates and 95% confidence intervals) on pain self-efficacy, care seeking, physical activity, fear of movement/re-injury, pain, physical function, disability, falls-efficacy, strength, and walking speed will be our secondary outcome measures. Recruitment for this trial began in November 2015. This study describes the rationale and processes of a feasibility study investigating a video-game exercise programme for older people with chronic LBP. Results from the feasibility study will inform the design and sample size required for a large multicentre RCT. Australian New Zealand Clinical Trials Registry: ACTRN12615000703505.
  395. VideoANT: Extending Online Video Annotation beyond Content Delivery

    ERIC Educational Resources Information Center

    Hosack, Bradford

    2010-01-01

    This paper expands the boundaries of video annotation in education by outlining the need for extended interaction in online video use, identifying the challenges faced by existing video annotation tools, and introducing VideoANT, a tool designed to create text-based annotations integrated within the timeline of a video hosted online. Several…

  396. Video Recording and the Research Process

    ERIC Educational Resources Information Center

    Leung, Constant; Hawkins, Margaret R.

    2011-01-01

    This is a two-part discussion. Part 1 is entitled "English Language Learning in Subject Lessons", and Part 2 is titled "Video as a Research Tool/Counterpoint". Working with different research concerns, the authors attempt to draw attention to a set of methodological and theoretical issues that have emerged in the research process using video data.…

  397. Action video games and improved attentional control: Disentangling selection- and response-based processes.

    PubMed

    Chisholm, Joseph D; Kingstone, Alan

    2015-10-01

    Research has demonstrated that experience with action video games is associated with improvements in a host of cognitive tasks. Evidence from paradigms that assess aspects of attention has suggested that action video game players (AVGPs) possess greater control over the allocation of attentional resources than do non-video-game players (NVGPs). Using a compound search task that teased apart selection- and response-based processes (Duncan, 1985), we required participants to perform an oculomotor capture task in which they made saccades to a uniquely colored target (a selection-based process) and then produced a manual directional response based on information within the target (a response-based process). We replicated the finding that AVGPs are less susceptible to attentional distraction and, critically, revealed that AVGPs outperform NVGPs on both selection-based and response-based processes.
    These results are not only consistent with the improved-attentional-control account of AVGP benefits, but also suggest that the benefit of action video game playing extends across the full breadth of attention-mediated stimulus-response processes that impact human performance.

  398. The integration processing of the visual and auditory information in videos of real-world events: an ERP study.

    PubMed

    Liu, Baolin; Wang, Zhongning; Jin, Zhixing

    2009-09-11

    In real life, the human brain usually receives information through visual and auditory channels and processes the multisensory information, but studies on the integrated processing of dynamic visual and auditory information are relatively few. In this paper, we designed an experiment in which common-scenario, real-world videos with matched and mismatched actions (images) and sounds were presented as stimuli, aiming to study, through event-related potential (ERP) methods, the integrated processing of synchronized visual and auditory information from videos of real-world events in the human brain. Experimental results showed that videos with mismatched actions (images) and sounds elicited a larger P400 than videos with matched actions (images) and sounds. We believe that the P400 waveform might be related to the cognitive integration of mismatched multisensory information in the human brain. The results also indicate that synchronized multisensory information can interfere with each other, influencing the outcome of the cognitive integration processing.

  399. Identification of Mobile Phone and Analysis of Original Version of Videos through a Delay Time Analysis of Sound Signals from Mobile Phone Videos.

    PubMed

    Hwang, Min Gu; Har, Dong Hwan

    2017-11-01

    This study designs a method of identifying the camera model used to take videos that are distributed through mobile phones, and of determining whether a mobile phone video is the original version, for use as legal evidence. For this analysis, an experiment was conducted to find the unique characteristics of each mobile phone. The videos recorded by mobile phones were analyzed to establish the delay time of sound signals, and the differences between the delay times of sound signals for different mobile phones were traced by classifying their characteristics. Furthermore, the sound input signals for mobile phone videos used as legal evidence were analyzed to ascertain whether they have the unique characteristics of the original version. The objective of this study was to find a method for validating the use of mobile phone videos as legal evidence through differences in the delay times of sound input signals.

  400. Characterization of CNRS Fizeau wedge laser tuner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    A fringe detection and measurement system was constructed for use with the CNRS Fizeau wedge laser tuner, consisting of three circuit boards. The first board is a standard Reticon RC-100 B motherboard, which provides the timing, video processing, and housekeeping functions required by the Reticon RL-512 G photodiode array used in the system. The sampled-and-held video signal from the motherboard is processed by a second, custom-fabricated circuit board which contains a high-speed fringe detection and locating circuit. This board includes a dc-level-discriminator-type fringe detector, a counter circuit to determine fringe center, a pulsed laser triggering circuit, and a control circuit to operate the shutter for the He-Ne reference laser beam. The fringe center information is supplied to the third board, a commercial single-board computer, which governs the data-collection process and interprets the results.
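
The measurement at the heart of this method is a delay time between sound signals. A standard way to estimate such a delay is to locate the peak of the cross-correlation of the two audio tracks; the sketch below is a generic illustration of that technique, not the authors' forensic procedure, and assumes uniformly sampled mono audio.

```python
import numpy as np

def estimate_delay(ref, sig, sample_rate):
    """Estimate the lag (in seconds) of `sig` relative to `ref` by locating
    the peak of their full cross-correlation."""
    corr = np.correlate(sig, ref, mode="full")
    lag = np.argmax(corr) - (len(ref) - 1)   # offset of the correlation peak
    return lag / sample_rate

# toy check: 1 s of noise as a stand-in recording, delayed by 100 samples
fs = 48000
ref = np.random.randn(fs)
sig = np.roll(ref, 100)
print(estimate_delay(ref, sig, fs))          # ~0.00208 s (100 samples)
```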
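
The fringe-locating chain described above (a level discriminator followed by a counter that finds the fringe center) has a direct software analogue. The following sketch is hypothetical, written for a 512-sample photodiode scan with an invented signal and threshold, and is not a model of the actual board.

```python
import numpy as np

def fringe_centers(scan, threshold):
    """Software analogue of the board's fringe locator: a level discriminator
    marks samples above threshold, and each above-threshold run is reduced
    to its center index (the counter circuit's job in hardware)."""
    above = scan > threshold
    edges = np.diff(above.astype(int))          # run boundaries
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    return [(s + e - 1) / 2 for s, e in zip(starts, ends)]

# toy 512-pixel scan with two Gaussian fringes
x = np.arange(512)
scan = np.exp(-(x - 130.0)**2 / 18) + np.exp(-(x - 350.0)**2 / 18)
print(fringe_centers(scan, threshold=0.5))      # ~[130.0, 350.0]
```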
  401. PCI-based WILDFIRE reconfigurable computing engines

    NASA Astrophysics Data System (ADS)

    Fross, Bradley K.; Donaldson, Robert L.; Palmer, Douglas J.

    1996-10-01

    WILDFORCE is the first PCI-based custom reconfigurable computer based on the Splash 2 technology transferred from the National Security Agency and the Institute for Defense Analyses, Supercomputing Research Center (SRC). The WILDFORCE architecture has many of the features of the WILDFIRE computer, such as field-programmable gate array (FPGA) based processing elements, linear array and crossbar interconnection, and high-performance memory and I/O subsystems. New features introduced in the PCI-based WILDFIRE systems include memory/processor options that can be added to any processing element. These options include static and dynamic memory, digital signal processors (DSPs), FPGAs, and microprocessors. In addition to memory/processor options, many different application-specific connectors can be used to extend the I/O capabilities of the system, including systolic I/O, camera input and video display output. This paper also discusses how this new PCI-based reconfigurable computing engine is used for rapid prototyping, real-time video processing and other DSP applications.

  402. Lifting Scheme DWT Implementation in a Wireless Vision Sensor Network

    NASA Astrophysics Data System (ADS)

    Ong, Jia Jan; Ang, L.-M.; Seng, K. P.

    This paper presents a practical implementation of a Wireless Vision Sensor Network (WVSN) with DWT processing on the visual nodes. A WVSN consists of visual nodes that capture video and transmit it to the base station without processing. Limited network bandwidth restrains the implementation of real-time video streaming from remote visual nodes over wireless links. Three layers of DWT filters are implemented to process the captured image from the camera. With all the wavelet coefficients produced, it is possible to transmit just the low-frequency band coefficients and obtain an approximate image at the base station, which reduces the amount of power required for transmission. When necessary, transmitting all the wavelet coefficients reproduces the full detail of the image, similar to the image captured at the visual node. The visual node combines a CMOS camera, a Xilinx Spartan-3L FPGA and a wireless ZigBee® network that uses the Ember EM250 chip.
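
One level of a lifting-scheme DWT splits a signal into smooth and detail halves with a predict step and an update step. The sketch below uses Haar lifting, the simplest case, and keeps only the LL band, mirroring the node's low-power transmission mode; the paper's actual three-layer filter bank and fixed-point details may differ.

```python
import numpy as np

def haar_lift_1d(x):
    """One level of the Haar wavelet via lifting: predict, then update."""
    s, d = x[::2].copy(), x[1::2].copy()
    d -= s          # predict: detail = odd - even
    s += d / 2      # update: smooth = (odd + even) / 2
    return s, d

def dwt2_lowband(img):
    """One 2-D level: lift rows, then columns; return only the LL (approximation)
    band, i.e. what a visual node would transmit in the low-power mode."""
    rows_s = np.array([haar_lift_1d(r)[0] for r in img.astype(np.float64)])
    cols_s = np.array([haar_lift_1d(c)[0] for c in rows_s.T]).T
    return cols_s

frame = np.random.rand(64, 64)
ll = dwt2_lowband(frame)   # 32x32 approximation: a quarter of the coefficients
```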
  403. Characterization of CNRS Fizeau wedge laser tuner

    NASA Technical Reports Server (NTRS)

    1984-01-01

    A fringe detection and measurement system was constructed for use with the CNRS Fizeau wedge laser tuner, consisting of three circuit boards. The first board is a standard Reticon RC-100 B motherboard, which provides the timing, video processing, and housekeeping functions required by the Reticon RL-512 G photodiode array used in the system. The sampled-and-held video signal from the motherboard is processed by a second, custom-fabricated circuit board which contains a high-speed fringe detection and locating circuit. This board includes a dc-level-discriminator-type fringe detector, a counter circuit to determine fringe center, a pulsed laser triggering circuit, and a control circuit to operate the shutter for the He-Ne reference laser beam. The fringe center information is supplied to the third board, a commercial single-board computer, which governs the data collection process and interprets the results.

  404. Immersive Photography Renders 360 degree Views

    NASA Technical Reports Server (NTRS)

    2008-01-01

    An SBIR contract through Langley Research Center helped Interactive Pictures Corporation, of Knoxville, Tennessee, create an innovative imaging technology. This technology is a video imaging process that allows real-time control of live video data and can provide users with interactive, panoramic 360 degree views. The camera system can see in multiple directions, provide up to four simultaneous views, each with its own tilt, rotation, and magnification, yet it has no moving parts, is noiseless, and can respond faster than the human eye. In addition, it eliminates the distortion caused by a fisheye lens, and provides a clear, flat view of each perspective.

  405. [A computer method for the evaluation of Paramecium motor activity using video records of their movement].

    PubMed

    Bingi, V N; Zarutskiĭ, A A; Kapranov, S V; Kovalev, Iu M; Miliaev, V A; Tereshchenko, N A

    2004-01-01

    A method for the evaluation of Paramecium caudatum motility was proposed as a tool for the investigation of magnetobiological as well as other physical and chemical effects. The microscopically observed movement of paramecia is recorded and processed using special software. Protozoan motility is determined as a function of the organisms' mean velocity over a definite time. The main advantages of the method are that it is easily modified for determining various characteristics of the motor activity of paramecia, and that the video data obtained can be reused.
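
Reducing a recorded track to a motility index, as this method does, amounts to averaging frame-to-frame displacements. A minimal sketch with hypothetical tracking input (the centroid extraction itself is not shown, and the calibration factor is an assumption):

```python
import numpy as np

def mean_speed(positions, fps, um_per_px=1.0):
    """Mean swimming speed from per-frame centroid positions (N x 2 pixel
    coordinates): average frame-to-frame displacement times the frame rate."""
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)  # px per frame
    return steps.mean() * fps * um_per_px                        # µm/s

track = np.cumsum(np.random.randn(300, 2), axis=0)  # toy 10 s track at 30 fps
print(mean_speed(track, fps=30))
```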
  406. Real-time optimizations for integrated smart network camera

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Lienard, Bruno; Meessen, Jerome; Delaigle, Jean-Francois

    2005-02-01

    We present an integrated real-time smart network camera. This system is composed of an image sensor, an embedded PC-based electronic card for image processing, and network capabilities. The application detects events of interest in visual scenes, highlights alarms and computes statistics. The system also produces meta-data information that can be shared among other cameras in a network. We describe the requirements of such a system and then show how its design is optimized to process and compress video in real time. Indeed, typical video-surveillance algorithms such as background differencing, tracking and event detection must be highly optimized and simplified to be used in this hardware. To achieve a good match between hardware and software in this light embedded system, the software management is written on top of the Java-based middleware specification established by the OSGi alliance. We can easily integrate software and hardware in complex environments thanks to the Java Real-Time specification for the virtual machine and several network- and service-oriented Java specifications (such as RMI and Jini). Finally, we report some outcomes and typical case studies of such a camera, such as counter-flow detection.
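
Background differencing, the first of the surveillance algorithms listed, can be reduced to a frame-differencing core for illustration. The thresholds below are invented, and a production system would maintain an adaptive background model rather than simply the previous frame; this is a sketch of the general technique, not the camera's firmware.

```python
import numpy as np

def detect_motion(prev, curr, diff_thresh=25, area_thresh=500):
    """Simplified background-differencing step: pixels that changed by more
    than diff_thresh are foreground, and an event is flagged when enough
    pixels changed (area_thresh)."""
    fg = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > diff_thresh
    return fg, bool(fg.sum() > area_thresh)

prev = np.zeros((240, 320), dtype=np.uint8)        # toy frames
curr = prev.copy(); curr[100:140, 100:160] = 200   # a bright moving blob
mask, alarm = detect_motion(prev, curr)
print(alarm)  # True: 2400 changed pixels exceed the area threshold
```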
  407. Alternative Fuels Data Center: Maine's Only Biodiesel Manufacturer Powers

    Science.gov Websites

    For information about this project, contact Maine Clean Communities.

  408. SwarmSight: Real-time Tracking of Insect Antenna Movements and Proboscis Extension Reflex Using a Common Preparation and Conventional Hardware

    PubMed Central

    Birgiolas, Justas; Jernigan, Christopher M.; Gerkin, Richard C.; Smith, Brian H.; Crook, Sharon M.

    2017-01-01

    Many scientifically and agriculturally important insects use antennae to detect the presence of volatile chemical compounds and extend their proboscis during feeding. The ability to rapidly obtain high-resolution measurements of natural antenna and proboscis movements, and to assess how they change in response to chemical, developmental, and genetic manipulations, can aid the understanding of insect behavior. By extending our previous work on assessing aggregate insect swarm or animal group movements from natural and laboratory videos using the video analysis software SwarmSight, we developed a novel, free, and open-source software module, SwarmSight Appendage Tracking (SwarmSight.org), for frame-by-frame tracking of insect antenna and proboscis positions from conventional web camera videos using conventional computers. The software processes frames about 120 times faster than humans, performs better than human accuracy, and, using 30 frames per second (fps) videos, can capture antennal dynamics up to 15 Hz. The software was used to track the antennal response of honey bees to two odors, and found significant mean antennal retractions away from the odor source about 1 s after odor presentation. We observed antenna position density heat map cluster formation, and cluster and mean angle dependence on odor concentration. PMID:29364251

  409. Distributed Coding/Decoding Complexity in Video Sensor Networks

    PubMed Central

    Cordeiro, Paulo J.; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large-scale environments, which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture in which large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution, transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. A method to reduce decoding complexity, suitable for system-on-chip implementation, is then proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance, and its inclusion in the VSN infrastructure provides an additional level of complexity-control functionality. PMID:22736972

  410. Distributed coding/decoding complexity in video sensor networks.

    PubMed

    Cordeiro, Paulo J; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large-scale environments, which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints.
    In particular, the paper proposes a novel VSN architecture in which large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution, transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. A method to reduce decoding complexity, suitable for system-on-chip implementation, is then proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance, and its inclusion in the VSN infrastructure provides an additional level of complexity-control functionality.

  411. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing a physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmark tools are utilized in order to analyse optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, along with other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, SPICE and virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  412. Detecting abandoned objects using interacting multiple models

    NASA Astrophysics Data System (ADS)

    Becker, Stefan; Münch, David; Kieritz, Hilke; Hübner, Wolfgang; Arens, Michael

    2015-10-01

    In recent years, the wide use of video surveillance systems has caused an enormous increase in the amount of data that has to be stored, monitored, and processed. As a consequence, it is crucial to support human operators with automated surveillance applications. Towards this end, an intelligent video analysis module for real-time alerting in case of abandoned objects in public spaces is proposed. The overall processing pipeline consists of two major parts. First, person motion is modeled using an Interacting Multiple Model (IMM) filter, which estimates the state of a person according to a finite-state, discrete-time Markov chain. Second, the location of persons that stay at a fixed position defines a region of interest, in which a nonparametric background model with dynamic per-pixel state variables identifies abandoned objects. In case of a detected abandoned object, an alarm event is triggered. The effectiveness of the proposed system is evaluated on the PETS 2006 dataset and the i-Lids dataset, both reflecting prototypical surveillance scenarios.
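
An IMM filter runs one Kalman filter per motion model and mixes their estimates according to a Markov chain over modes. The sketch below is a generic textbook IMM cycle for a 1-D track with a "stationary" and a "constant-velocity" model; the noise values and transition matrix are assumptions for illustration, not the paper's parameterization.

```python
import numpy as np

# Two motion models for a 1-D track with state [position, velocity]:
# M0 "stationary" (velocity zeroed) and M1 "constant velocity".
F = [np.array([[1., 0.], [0., 0.]]), np.array([[1., 1.], [0., 1.]])]
Q = [np.eye(2) * 0.01, np.eye(2) * 0.1]           # process noise (assumed)
H = np.array([[1., 0.]])                           # position-only measurement
R = np.array([[0.5]])                              # measurement noise (assumed)
P_trans = np.array([[0.95, 0.05], [0.05, 0.95]])   # Markov mode transitions

def imm_step(xs, Ps, mu, z):
    """One IMM cycle: mix, run each Kalman filter, update mode probabilities."""
    c = P_trans.T @ mu                             # predicted mode probabilities
    x_mix, P_mix = [], []
    for j in range(2):                             # mixed initial conditions
        w = P_trans[:, j] * mu / c[j]
        xm = sum(w[i] * xs[i] for i in range(2))
        Pm = sum(w[i] * (Ps[i] + np.outer(xs[i] - xm, xs[i] - xm)) for i in range(2))
        x_mix.append(xm); P_mix.append(Pm)
    like = np.zeros(2)
    for j in range(2):                             # model-matched KF predict/update
        x, P = F[j] @ x_mix[j], F[j] @ P_mix[j] @ F[j].T + Q[j]
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        r = z - H @ x
        xs[j], Ps[j] = x + K @ r, (np.eye(2) - K @ H) @ P
        like[j] = np.exp(-0.5 * r @ np.linalg.inv(S) @ r) / np.sqrt(2 * np.pi * np.linalg.det(S))
    mu = like * c; mu /= mu.sum()                  # mode probability update
    x_est = sum(mu[j] * xs[j] for j in range(2))   # combined estimate
    return xs, Ps, mu, x_est

# a person walking then stopping: the 'stationary' mode probability should rise
xs = [np.zeros(2), np.zeros(2)]; Ps = [np.eye(2), np.eye(2)]; mu = np.array([0.5, 0.5])
for z in [1., 2., 3., 4., 4., 4., 4.]:
    xs, Ps, mu, x_est = imm_step(xs, Ps, mu, np.array([z]))
print(mu)  # mode probabilities after the track stops
```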
  413. Current and future trends in marine image annotation software

    NASA Astrophysics Data System (ADS)

    Gomes-Pereira, Jose Nuno; Auger, Vincent; Beisiegel, Kolja; Benjamin, Robert; Bergmann, Melanie; Bowden, David; Buhl-Mortensen, Pal; De Leo, Fabio C.; Dionísio, Gisela; Durden, Jennifer M.; Edwards, Luke; Friedman, Ariell; Greinert, Jens; Jacobsen-Stout, Nancy; Lerner, Steve; Leslie, Murray; Nattkemper, Tim W.; Sameoto, Jessica A.; Schoening, Timm; Schouten, Ronald; Seager, James; Singh, Hanumant; Soubigou, Olivier; Tojeira, Inês; van den Beld, Inge; Dias, Frederico; Tempera, Fernando; Santos, Ricardo S.

    2016-12-01

    Given the need to describe, analyze and index large quantities of marine imagery data for exploration and monitoring activities, a range of specialized image annotation tools have been developed worldwide. Image annotation, the process of transposing objects or events represented in a video or still image to the semantic level, may involve human interactions and computer-assisted solutions. Marine image annotation software (MIAS) has enabled over 500 publications to date. We review the functioning, application trends and developments by comparing general and advanced features of 23 different tools utilized in underwater image analysis. MIAS requiring human input are basically a graphical user interface with a video player or image browser that recognizes a specific time code or image code, allowing events to be logged in a time-stamped (and/or geo-referenced) manner. MIAS differ from similar software in their capability to integrate data associated with the video collection, the simplest being the position coordinates of the video recording platform. MIAS have three main modes of operation: annotating events in real time, annotating after acquisition, and interacting with a database. The tools range from simple annotation interfaces to full onboard data management systems with a variety of toolboxes. Advanced packages allow data from multiple sensors or multiple annotators to be input and displayed via intranet or internet. Posterior human-mediated annotation often includes tools for data display and image analysis, e.g., length, area, image segmentation and point counts, and in a few cases the possibility of browsing and editing previous dive logs or analyzing the annotations. The interaction with a database allows the automatic integration of annotations from different surveys, repeated annotation, collaborative annotation of shared datasets, and browsing and querying of data. Progress in the field of automated annotation is mostly in post-processing, for stable platforms or still images. Integration into available MIAS is currently limited to semi-automated processes of pixel recognition through computer-vision modules that compile expert-based knowledge. Important topics aiding the choice of a specific software package are outlined, the ideal software is discussed, and future trends are presented.
  414. Age Differences in Online Processing of Video: An Eye Movement Study

    PubMed Central

    Kirkorian, Heather L.; Anderson, Daniel R.; Keen, Rachel

    2011-01-01

    Eye movements were recorded while 62 one-year-olds, four-year-olds, and adults watched television. Of interest was the extent to which viewers looked at the same place at the same time as their peers, because high similarity across viewers suggests systematic viewing driven by comprehension processes. Similarity of gaze location increased with age. This was particularly true immediately following a cut to a new scene, partly because older viewers (but not infants) tended to fixate the center of the screen following a cut. Conversely, infants appear to require several seconds to orient to a new scene. Results are interpreted in the context of developing attention skills. The findings have implications for the extent to which infants comprehend and learn from commercial video. PMID:22288510

  415. Modeling the time-varying subjective quality of HTTP video streams with rate adaptations.

    PubMed

    Chen, Chao; Choi, Lark Kwon; de Veciana, Gustavo; Caramanis, Constantine; Heath, Robert W; Bovik, Alan C

    2014-05-01

    Newly developed hypertext transfer protocol (HTTP)-based video streaming technologies enable flexible rate adaptation under varying channel conditions. Accurately predicting users' quality of experience (QoE) for rate-adaptive HTTP video streams is thus critical to achieving efficiency. An important aspect of understanding and modeling QoE is predicting the up-to-the-moment subjective quality of a video as it is played, which is difficult due to hysteresis effects and nonlinearities in human behavioral responses. This paper presents a Hammerstein-Wiener model for predicting the time-varying subjective quality (TVSQ) of rate-adaptive videos. To collect data for model parameterization and validation, a database of longer-duration videos with time-varying distortions was built, and the TVSQs of the videos were measured in a large-scale subjective study. The proposed method is able to reliably predict the TVSQ of rate-adaptive videos. Since the Hammerstein-Wiener model has a very simple structure, the proposed method is suitable for online TVSQ prediction in HTTP-based streaming.
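
A Hammerstein-Wiener model wraps a linear dynamic block between two static nonlinearities, which is what lets it capture hysteresis-like smoothing together with nonlinear rating behavior. The following sketch shows the structure only; the nonlinearities and coefficients are illustrative shapes, not the fitted model from the paper.

```python
import numpy as np

def hammerstein_wiener(u, f_in, b, a, g_out):
    """Generic Hammerstein-Wiener response: static input nonlinearity f_in,
    linear IIR filter with coefficients (b, a), then output nonlinearity g_out."""
    x = f_in(u)                          # input nonlinearity
    y = np.zeros_like(x)
    for n in range(len(x)):              # direct-form IIR filtering
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc / a[0]
    return g_out(y)

# toy use: a per-second bitrate trace mapped to a predicted quality trace
bitrate = np.concatenate([np.full(20, 3.0), np.full(20, 1.0)])  # Mbps, with a drop
tvsq = hammerstein_wiener(
    bitrate,
    f_in=np.log1p,                       # diminishing returns of bitrate
    b=[0.2], a=[1.0, -0.8],              # sluggish (hysteresis-like) response
    g_out=lambda v: 100 / (1 + np.exp(-3 * (v - 0.2))),  # map to a 0-100 score
)
```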
  416. Dashboard Videos

    NASA Astrophysics Data System (ADS)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-11-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his Lab Out Loud blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing this website and video, I decided to create my own dashboard videos to show to my high school physics students. I have produced and synchronized 12 separate dashboard videos, each about 10 minutes in length, driving around the city of Lawrence, KS, and Douglas County, and posted them to a website. Each video reflects a different type of driving: positive and negative accelerations and constant speeds. As shown in Fig. 1, I was able to capture speed, distance, and miles per gallon from my dashboard instrumentation. By linking this with a stopwatch, each of these quantities can be graphed with respect to time. I anticipate and hope that teachers will find these useful in their own classrooms, i.e., having physics students watch the videos and create their own motion maps (distance-time, speed-time) for study.

  417. Noise-Riding Video Signal Threshold Generation Scheme for a Plurality of Video Signal Channels

    DTIC Science & Technology

    2007-02-12

    on the selected one signal channel to generate a new video signal threshold. The processing resource has an output to provide the new video signal threshold to the comparator circuit corresponding to the selected signal channel.

  418. High-performance software-only H.261 video compression on PC

    NASA Astrophysics Data System (ADS)

    Kasperovich, Leonid

    1996-03-01

    This paper describes an implementation of a software H.261 codec for the PC that takes advantage of the fast computational algorithms for DCT-based video compression presented by the author at the February 1995 SPIE/IS&T meeting. The motivation for developing the H.261 prototype system is to demonstrate the feasibility of a real-time, software-only videoconferencing solution operating across a wide range of network bandwidths, frame rates, and resolutions of the input video. As the bandwidth of network technology increases, higher frame rates and resolutions of the transmitted video are allowed, which in turn requires a software codec able to compress pictures of CIF (352 x 288) resolution at up to 30 frames/sec. Running on a Pentium 133 MHz PC, the codec presented is capable of compressing video in CIF format at 21-23 frames/sec. This result is comparable to known hardware-based H.261 solutions, but it does not require any specific hardware. The methods used to achieve high performance, the program optimization techniques for the Pentium microprocessor, and a performance profile showing the actual contribution of the different encoding/decoding stages to the overall computational process are presented.
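
The computational core of an H.261-style codec is the 8x8 DCT followed by quantization. Below is a minimal, unoptimized Python version of that block transform, for contrast with the hand-tuned Pentium code the paper describes; the quantizer step and test block are invented for illustration.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix; C @ block @ C.T is the 2-D DCT."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.cos((2 * i + 1) * k * np.pi / (2 * n)) * np.sqrt(2 / n)
    c[0, :] = np.sqrt(1 / n)
    return c

C = dct_matrix()

def code_block(block, q=16):
    """Forward DCT + uniform quantization, then dequantize + inverse DCT:
    the per-8x8-block core of a DCT-based codec such as H.261."""
    coeffs = C @ (block - 128.0) @ C.T        # level shift + 2-D DCT
    quant = np.round(coeffs / q)              # most AC coefficients become zero
    return C.T @ (quant * q) @ C + 128.0      # reconstruction

block = np.tile(np.linspace(16, 235, 8), (8, 1))  # toy smooth 8x8 luma block
print(np.abs(code_block(block) - block).max())    # small reconstruction error
```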
  419. Robust feedback zoom tracking for digital video surveillance.

    PubMed

    Zou, Tengyue; Tang, Xiaoqi; Song, Bao; Wang, Jin; Chen, Jihong

    2012-01-01

    Zoom tracking is an important function in video surveillance, particularly in traffic management and security monitoring. It involves keeping an object of interest in focus during the zoom operation. Zoom tracking is typically achieved by moving the zoom and focus motors in the lens following the so-called "trace curve", which shows the in-focus focus-motor positions versus the zoom-motor positions for a specific object distance. The main task of a zoom tracking approach is to accurately estimate the trace curve for the specified object. Because a proportional-integral-derivative (PID) controller has historically been considered the best controller in the absence of knowledge of the underlying process, and because of its high-quality performance in motor control, we propose in this paper a novel feedback zoom tracking (FZT) approach based on geometric trace-curve estimation and a PID feedback controller. The performance of this approach is compared with that of existing zoom tracking methods in digital video surveillance. The real-time implementation results obtained on an actual digital video platform indicate that the developed FZT approach not only solves the traditional one-to-many mapping problem without pre-training but also improves robustness for tracking moving or switching objects, which is the key challenge in video surveillance.
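
The FZT idea pairs a trace-curve estimate with PID feedback on the focus motor. The sketch below is a generic illustration of that loop; the trace curve, gains and units are hypothetical, not the paper's calibration.

```python
class PID:
    """Textbook PID controller; the gains here are illustrative."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err = 0.0, None

    def step(self, err, dt):
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def trace_curve(zoom_pos):
    """Hypothetical trace curve: in-focus focus-motor position vs. zoom position."""
    return 500 + 0.4 * zoom_pos + 1e-4 * zoom_pos**2

# drive the focus motor toward the trace curve while the zoom motor sweeps
pid, focus = PID(kp=0.6, ki=0.1, kd=0.05), 500.0
for zoom in range(0, 1000, 10):
    target = trace_curve(zoom)
    focus += pid.step(target - focus, dt=0.01)   # correction applied per step
```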
Observations of breakup processes of liquid jets using real-time X-ray radiography

NASA Technical Reports Server (NTRS)

Char, J. M.; Kuo, K. K.; Hsieh, K. C.

1988-01-01

To unravel the liquid-jet breakup process in the nondilute region, a newly developed system combining real-time X-ray radiography, an advanced digital image processor, and a high-speed video camera was used. Based upon the recorded X-ray images, the inner structure of a liquid jet during breakup was observed. The jet divergence angle, jet breakup length, and fraction distributions along the axial and transverse directions of the liquid jets were determined in the near-injector region. Both wall- and free-jet tests were conducted to study the effect of wall friction on the jet breakup process.
Relation of Adolescent Video Game Play to Time Spent in Other Activities

PubMed Central

Cummings, Hope M.; Vandewater, Elizabeth A.

2017-01-01

Objective: To examine the notion that playing video games is negatively related to the time adolescents spend in more developmentally appropriate activities. Design: Nonexperimental study. Setting: Survey data collected during the 2002-2003 school year. Participants: A nationally representative sample of 1491 children aged 10 to 19 years. Main Outcome Measure: Twenty-four-hour time-use diaries were collected on 1 weekday and 1 weekend day, both randomly chosen. Time-use diaries were used to determine adolescents' time spent playing video games, with parents and friends, reading and doing homework, and in sports and active leisure. Results: Differences in time spent between game players and nonplayers, as well as the magnitude of the relationships among game time and activity time among adolescent game players, were assessed. Thirty-six percent of adolescents (80% of boys and 20% of girls) played video games. On average, gamers played for an hour on the weekdays and an hour and a half on the weekends. Compared with nongamers, adolescent gamers spent 30% less time reading and 34% less time doing homework. Among gamers (both genders), time spent playing video games without parents or friends was negatively related to time spent with parents and friends in other activities. Conclusions: Although gamers and nongamers did not differ in the amount of time they spent interacting with family and friends, concerns regarding gamers' neglect of school responsibilities (reading and homework) are warranted. Although only a small percentage of girls played video games, our findings suggest that playing video games may have different social implications for girls than for boys. PMID:17606832

Cell Phone Video Recording Feature as a Language Learning Tool: A Case Study

ERIC Educational Resources Information Center

Gromik, Nicolas A.

2012-01-01

This paper reports on a case study conducted at a Japanese national university. Nine participants used the video recording feature on their cell phones to produce weekly video productions. The task required that participants produce one 30-second video on a teacher-selected topic. Observations revealed the process of video creation with a cell…

Modeling Perceptual Decision Processes

DTIC Science & Technology

2014-09-17

... Ratcliff, & Wagenmakers, in press). Previous research suggests that playing action video games improves performance on sensory, perceptual, and ... estimate the contribution of several underlying psychological processes. Their analysis indicated that playing action video games leads to faster ... third condition in which no video games were played at all. Behavioral data and diffusion model parameters showed similar practice effects for the ...

Analysis of the IJCNN 2011 UTL Challenge

DTIC Science & Technology

2012-01-13

... large datasets from various application domains: handwriting recognition, image recognition, video processing, text processing, and ecology. The goal ... (http://clopinet.com/ul). We made available large datasets from various application domains: handwriting recognition, image recognition, video ... evaluation sets consist of 4096 examples each.
Dataset    Domain       Features  Sparsity  Devel.  Transf.
AVICENNA   Handwriting  120       0%        150205  50000
HARRY      Video        5000      98.1      ...

Pre-processing SAR image stream to facilitate compression for transport on bandwidth-limited-link

DOEpatents

Rush, Bobby G.; Riley, Robert

2015-09-29

Pre-processing is applied to a raw VideoSAR (or similar near-video rate) product to transform the image frame sequence into a product that resembles more closely the type of product for which conventional video codecs are designed, while sufficiently maintaining utility and visual quality of the product delivered by the codec.

(abstract) Synthesis of Speaker Facial Movements to Match Selected Speech Sequences

NASA Technical Reports Server (NTRS)

Scott, Kenneth C.

1994-01-01

We are developing a system for synthesizing image sequences that simulate the facial motion of a speaker. To perform this synthesis, we are pursuing two major areas of effort. We are developing the necessary computer graphics technology to synthesize a realistic image sequence of a person speaking selected speech sequences. Next, we are developing a model that expresses the relation between spoken phonemes and face/mouth shape. A subject is videotaped speaking an arbitrary text that contains expression of the full list of desired database phonemes. The subject is videotaped from the front speaking normally, recording both audio and video detail simultaneously. Using the audio track, we identify the specific video frames on the tape relating to each spoken phoneme. From this range we digitize the video frame which represents the extreme of mouth motion/shape. Thus, we construct a database of images of face/mouth shape related to spoken phonemes. A selected audio speech sequence is recorded which is the basis for synthesizing a matching video sequence; the speaker need not be the same as used for constructing the database. The audio sequence is analyzed to determine the spoken phoneme sequence and the relative timing of the enunciation of those phonemes. Synthesizing an image sequence corresponding to the spoken phoneme sequence is accomplished using a graphics technique known as morphing. Image sequence keyframes necessary for this processing are based on the spoken phoneme sequence and timing. We have been successful in synthesizing the facial motion of a native English speaker for a small set of arbitrary speech segments. Our future work will focus on advancement of the face shape/phoneme model and independent control of facial features.
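The synthesis step in the record above interpolates between phoneme keyframes by morphing. As a toy stand-in (a real morph also warps mouth geometry before blending), the sketch below cross-dissolves between two keyframe images; shapes and frame counts are illustrative.

    import numpy as np

    def cross_dissolve(key_a, key_b, t):
        # blend two keyframes for t in [0, 1]; a full morph would first warp
        # corresponding mouth features, then blend
        return (1.0 - t) * key_a + t * key_b

    # hypothetical grayscale keyframes for two phonemes
    key_a = np.random.rand(240, 320).astype(np.float32)
    key_b = np.random.rand(240, 320).astype(np.float32)

    frames = [cross_dissolve(key_a, key_b, t) for t in np.linspace(0.0, 1.0, 10)]
    print(len(frames), frames[0].shape)            # 10 in-between frames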
Crowdsourcing HIV Test Promotion Videos: A Noninferiority Randomized Controlled Trial in China.

PubMed

Tang, Weiming; Han, Larry; Best, John; Zhang, Ye; Mollan, Katie; Kim, Julie; Liu, Fengying; Hudgens, Michael; Bayus, Barry; Terris-Prestholt, Fern; Galler, Sam; Yang, Ligang; Peeling, Rosanna; Volberding, Paul; Ma, Baoli; Xu, Huifang; Yang, Bin; Huang, Shujie; Fenton, Kevin; Wei, Chongyi; Tucker, Joseph D

2016-06-01

Crowdsourcing, the process of shifting individual tasks to a large group, may enhance human immunodeficiency virus (HIV) testing interventions. We conducted a noninferiority, randomized controlled trial to compare first-time HIV testing rates among men who have sex with men (MSM) and transgender individuals who received a crowdsourced or a health marketing HIV test promotion video. Seven hundred twenty-one MSM and transgender participants (≥16 years old, never before tested for HIV) were recruited through 3 Chinese MSM Web portals and randomly assigned to 1 of 2 videos. The crowdsourced video was developed using an open contest and formal transparent judging, while the evidence-based health marketing video was designed by experts. Study objectives were to measure HIV test uptake within 3 weeks of watching either HIV test promotion video, and cost per new HIV test and diagnosis. Overall, 624 of 721 (87%) participants from 31 provinces in 217 Chinese cities completed the study. HIV test uptake was similar between the crowdsourced arm (37% [114/307]) and the health marketing arm (35% [111/317]). The estimated difference between the interventions was 2.1% (95% confidence interval, -5.4% to 9.7%). Among those tested, 31% (69/225) reported a new HIV diagnosis. The crowdsourced intervention cost substantially less than the health marketing intervention per first-time HIV test (US$131 vs US$238 per person) and per new HIV diagnosis (US$415 vs US$799 per person). Our nationwide study demonstrates that crowdsourcing may be an effective tool for improving HIV testing messaging campaigns and could increase community engagement in health campaigns. Clinical Trials Registration: NCT02248558.
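The reported difference of 2.1% (95% CI, -5.4% to 9.7%) can be reproduced from the counts in the abstract with a standard Wald interval for a difference of two proportions, assuming that is the interval the authors used:

    import math

    # counts from the abstract: 114/307 tested (crowdsourced), 111/317 (health marketing)
    p1, n1 = 114 / 307, 307
    p2, n2 = 111 / 317, 317

    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    lo, hi = diff - 1.96 * se, diff + 1.96 * se
    print(f"{diff:+.1%} (95% CI {lo:+.1%} to {hi:+.1%})")
    # -> +2.1% (95% CI -5.4% to +9.7%), matching the reported interval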
Video integrated measurement system [diagnostic display devices]

DOE Office of Scientific and Technical Information (OSTI.GOV)

Spector, B.; Eilbert, L.; Finando, S.

A Video Integrated Measurement (VIM) System is described which incorporates the use of various noninvasive diagnostic procedures (moire contourography, electromyography, posturometry, infrared thermography, etc.), used individually or in combination, for the evaluation of neuromusculoskeletal and other disorders and their management with biofeedback and other therapeutic procedures. The system provides for measuring individual diagnostic and therapeutic modes, or multiple modes by split screen superimposition, of real time (actual) images of the patient and idealized (ideal-normal) models on a video monitor, along with analog and digital data, graphics, color, and other transduced symbolic information. It is concluded that this system provides an innovative and efficient method by which the therapist and patient can interact in biofeedback training/learning processes and holds considerable promise for more effective measurement and treatment of a wide variety of physical and behavioral disorders.

Statistical modelling of subdiffusive dynamics in the cytoplasm of living cells: A FARIMA approach

NASA Astrophysics Data System (ADS)

Burnecki, K.; Muszkieta, M.; Sikora, G.; Weron, A.

2012-04-01

Golding and Cox (Phys. Rev. Lett., 96 (2006) 098102) tracked the motion of individual fluorescently labelled mRNA molecules inside live E. coli cells. They found that in the set of 23 trajectories from 3 different experiments, the automatically recognized motion is subdiffusive, and published an intriguing microscopy video. Here, we extract the corresponding time series from this video by an image segmentation method and present its detailed statistical analysis. We find that this trajectory was not included in the data set already studied and has different statistical properties. It is best fitted by a fractional autoregressive integrated moving average (FARIMA) process with normal-inverse Gaussian (NIG) noise and negative memory. In contrast to earlier studies, this shows that fractional Brownian motion is not the best model for the dynamics documented in this video.
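The FARIMA fit in the record above rests on fractional differencing, the filter (1 - B)^d applied to the series. A minimal, generic sketch of that filter follows; the memory parameter below is illustrative and this is not the authors' estimation code.

    import numpy as np

    def frac_diff_weights(d, n):
        # coefficients of (1 - B)^d: w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k
        w = np.empty(n)
        w[0] = 1.0
        for k in range(1, n):
            w[k] = w[k - 1] * (k - 1 - d) / k
        return w

    def frac_diff(x, d):
        # apply the (truncated) fractional-difference filter to a series
        w = frac_diff_weights(d, len(x))
        return np.array([w[: k + 1][::-1] @ x[: k + 1] for k in range(len(x))])

    x = np.cumsum(np.random.randn(500))        # toy trajectory coordinate
    print(frac_diff(x, d=0.3)[:5])             # illustrative memory parameter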
CVD2014-A Database for Evaluating No-Reference Video Quality Assessment Algorithms.

PubMed

Nuutinen, Mikko; Virtanen, Toni; Vaahteranoksa, Mikko; Vuori, Tero; Oittinen, Pirkko; Hakkinen, Jukka

2016-07-01

In this paper, we present a new video database: CVD2014 (Camera Video Database). In contrast to previous video databases, this database uses real cameras rather than introducing distortions via post-processing, which results in a complex distortion space in regard to the video acquisition process. CVD2014 contains a total of 234 videos that are recorded using 78 different cameras. Moreover, this database contains observer-specific quality evaluation scores rather than only providing mean opinion scores. We have also collected open-ended quality descriptions that are provided by the observers. These descriptions were used to define the quality dimensions for the videos in CVD2014. The dimensions included sharpness, graininess, color balance, darkness, and jerkiness. At the end of this paper, a performance study of image and video quality algorithms for predicting the subjective video quality is reported. For this performance study, we proposed a new performance measure that accounts for observer variance. The performance study revealed that there is room for improvement regarding the video quality assessment algorithms. The CVD2014 video database has been made publicly available for the research community. All video sequences and corresponding subjective ratings can be obtained from the CVD2014 project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

Video attention deviation estimation using inter-frame visual saliency map analysis

NASA Astrophysics Data System (ADS)

Feng, Yunlong; Cheung, Gene; Le Callet, Patrick; Ji, Yusheng

2012-01-01

A viewer's visual attention during video playback is the matching of his eye gaze movement to the changing video content over time. If the gaze movement matches the video content (e.g., follows a rolling soccer ball), then the viewer keeps his visual attention. If the gaze location moves from one video object to another, then the viewer shifts his visual attention. A video that causes a viewer to shift his attention often is a "busy" video. Determining which video content is busy is an important practical problem; a busy video is difficult for an encoder to deploy region-of-interest (ROI)-based bit allocation to, and hard for a content provider to insert additional overlays like advertisements into, making the video even busier. One way to determine the busyness of video content is to conduct eye gaze experiments with a sizable group of test subjects, but this is time-consuming and cost-ineffective. In this paper, we propose an alternative method to determine the busyness of video, formally called video attention deviation (VAD): analyze the spatial visual saliency maps of the video frames across time. We first derive transition probabilities of a Markov model for eye gaze using saliency maps of a number of consecutive frames. We then compute the steady state probability of the saccade state in the model, our estimate of VAD. We demonstrate that the computed steady state probability for saccade using saliency map analysis matches that computed using actual gaze traces for a range of videos with different degrees of busyness. Further, our analysis can also be used to segment video into shorter clips of different degrees of busyness by computing the Kullback-Leibler divergence using consecutive motion compensated saliency maps.
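Two computations in the VAD record are easy to make concrete: the steady-state probability of a two-state (track/saccade) gaze model, and the KL divergence between consecutive saliency maps used for segmentation. The transition probabilities and maps below are assumed stand-ins:

    import numpy as np

    # steady state of a 2-state gaze model (0 = track, 1 = saccade);
    # the transition matrix is an assumption for illustration
    P = np.array([[0.9, 0.1],
                  [0.4, 0.6]])
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi /= pi.sum()
    print("steady-state saccade probability:", round(pi[1], 3))  # VAD-style estimate

    def kl_div(p, q, eps=1e-12):
        # KL divergence between two normalized saliency histograms
        p = p / p.sum()
        q = q / q.sum()
        return float(np.sum(p * np.log((p + eps) / (q + eps))))

    s1 = np.random.rand(64, 64)                # stand-ins for consecutive saliency maps
    s2 = np.random.rand(64, 64)
    print("KL(s1 || s2):", round(kl_div(s1.ravel(), s2.ravel()), 4))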
The Use of Video-Tacheometric Technology for Documenting and Analysing Geometric Features of Objects

NASA Astrophysics Data System (ADS)

Woźniak, Marek; Świerczyńska, Ewa; Jastrzębski, Sławomir

2015-12-01

This paper analyzes selected aspects of the use of video-tacheometric technology for inventorying and documenting geometric features of objects. Data were collected with the use of the video-tacheometer Topcon Image Station IS-3 and the professional camera Canon EOS 5D Mark II. During the field work and the development of data the following experiments were performed: multiple determination of the camera interior orientation parameters and distortion parameters of five lenses with different focal lengths, and reflectorless measurements of profiles for the elevation and inventory of the decorative surface wall of the building of the Warsaw Ballet School. During the research the process of acquiring and integrating video-tacheometric data was analysed, as well as the process of combining the "point cloud" acquired by the video-tacheometer in the scanning process with independent photographs taken by a digital camera. On the basis of the tests performed, the utility of video-tacheometric technology in geodetic surveys of geometrical features of buildings has been established.

Real-time video streaming in mobile cloud over heterogeneous wireless networks

NASA Astrophysics Data System (ADS)

Abdallah-Saleh, Saleh; Wang, Qi; Grecos, Christos

2012-06-01

Recently, the concept of Mobile Cloud Computing (MCC) has been proposed to offload the resource requirements in computational capabilities, storage and security from mobile devices into the cloud. Internet video applications such as real-time streaming are expected to be ubiquitously deployed and supported over the cloud for mobile users, who typically encounter a range of wireless networks of diverse radio access technologies during their roaming. However, real-time video streaming for mobile cloud users across heterogeneous wireless networks presents multiple challenges. The network-layer quality of service (QoS) provision to support high-quality mobile video delivery in this demanding scenario remains an open research question, and this in turn affects the application-level visual quality and impedes mobile users' perceived quality of experience (QoE). In this paper, we devise a framework to support real-time video streaming in this new mobile video networking paradigm and evaluate the performance of the proposed framework empirically through a lab-based yet realistic testing platform. One particular issue we focus on is the effect of users' mobility on the QoS of video streaming over the cloud. We design and implement a hybrid platform comprising a test-bed and an emulator, on which our concepts of mobile cloud computing, video streaming and heterogeneous wireless networks are implemented and integrated to allow the testing of our framework. As representative heterogeneous wireless networks, the popular WLAN (Wi-Fi) and MAN (WiMAX) networks are incorporated in order to evaluate effects of handovers between these different radio access technologies. The H.264/AVC (Advanced Video Coding) standard is employed for real-time video streaming from a server to mobile users (client nodes) in the networks. Mobility support is introduced to enable a continuous streaming experience for a mobile user across the heterogeneous wireless network. Real-time video stream packets are captured for analytical purposes on the mobile user node. Experimental results are obtained and analysed. Future work is identified towards further improvement of the current design and implementation. With this new mobile video networking concept and paradigm implemented and evaluated, the results and observations obtained from this study form the basis of a more in-depth, comprehensive understanding of various challenges and opportunities in supporting high-quality real-time video streaming in mobile cloud over heterogeneous wireless networks.

Enhancement tuning and control for high dynamic range images in multi-scale locally adaptive contrast enhancement algorithms

NASA Astrophysics Data System (ADS)

Cvetkovic, Sascha D.; Schirris, Johan; de With, Peter H. N.

2009-01-01

For real-time imaging in surveillance applications, visibility of details is of primary importance to ensure customer confidence. If we display High Dynamic-Range (HDR) scenes whose contrast spans four or more orders of magnitude on a conventional monitor without additional processing, the results are unacceptable. Compression of the dynamic range is therefore a compulsory part of any high-end video processing chain, because standard monitors are inherently Low-Dynamic Range (LDR) devices with maximally two orders of display dynamic range. In real-time camera processing, many complex scenes are improved with local contrast enhancements, bringing details to the best possible visibility. In this paper, we show how a multi-scale high-frequency enhancement scheme, in which gain is a non-linear function of the detail energy, can be used for the dynamic range compression of HDR real-time video camera signals. We also show the connection of our enhancement scheme to the processing of the Human Visual System (HVS). Our algorithm simultaneously controls perceived sharpness, ringing ("halo") artifacts (contrast) and noise, resulting in a good balance between visibility of details and non-disturbance of artifacts. The overall quality enhancement, suitable for both HDR and LDR scenes, is based on a careful selection of the filter types for the multi-band decomposition and a detailed analysis of the signal per frequency band.
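The core idea in the record above, gain as a non-linear function of local detail energy, can be shown in a single band; the filter choice, gain curve, and constants below are illustrative, whereas the paper's scheme is multi-band with explicit artifact control.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def detail_gain(energy, k_noise=1e-4, k_halo=0.05, max_gain=2.5):
        # non-linear gain vs. local detail energy: near 1 at the noise floor,
        # maximal for mid-energy detail, rolled off on strong edges (halo control)
        rise = energy / (energy + k_noise)
        rolloff = k_halo / (energy + k_halo)
        return 1.0 + (max_gain - 1.0) * rise * rolloff

    def enhance(img, sigma=2.0):
        base = gaussian_filter(img, sigma)             # low-pass band
        detail = img - base                            # high-frequency band
        energy = gaussian_filter(detail ** 2, sigma)   # local detail energy
        return np.clip(base + detail_gain(energy) * detail, 0.0, 1.0)

    frame = np.random.rand(480, 640)                   # stand-in luminance frame in [0, 1]
    print(enhance(frame).shape)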
Assessing instrument handling and operative consequences simultaneously: a simple method for creating synced multicamera videos for endosurgical or microsurgical skills assessments.

PubMed

Jabbour, Noel; Sidman, James

2011-10-01

There has been an increasing interest in assessment of technical skills in most medical and surgical disciplines. Many of these assessments involve microscopy or endoscopy and are thus amenable to video recording for post hoc review. An ideal skills assessment video would provide the reviewer with a simultaneous view of the examinee's instrument handling and the operative field. Ideally, a reviewer should be blinded to the identity of the examinee and whether the assessment was performed as a pretest or posttest examination, when given in conjunction with an educational intervention. We describe a simple method for reliably creating deidentified, multicamera, time-synced videos, which may be used in technical skills assessments. We pilot tested this method in a pediatric airway endoscopy Objective Assessment of Technical Skills (OSATS). Total video length was compared with the OSATS administration time. Thirty-nine OSATS were administered. There were no errors encountered in time-syncing the videos using this method. Mean duration of OSATS videos was 11 minutes and 20 seconds, which was significantly less than the time needed for an expert to be present at the administration of each 30-minute OSATS (P < 0.001). The described method for creating time-synced, multicamera skills assessment videos is reliable and may be used in endosurgical or microsurgical skills assessments. Compared with live review, post hoc video review using this method can save valuable expert reviewer time. Most importantly, this method allows a reviewer to simultaneously evaluate an examinee's instrument handling and the operative field while being blinded to the examinee's identity and timing of examination administration.

Alternative Fuels Data Center: America's Largest Home Runs on Biodiesel in

Science.gov Websites

Alternative Fuels Data Center: Rhode Island EV Initiative Adds Chargers

Science.gov Websites

Alternative Fuels Data Center: Worcester Regional Transit Authority Drives

Science.gov Websites

Alternative Fuels Data Center: Propane Powers Airport Shuttles in New

Science.gov Websites

Multilevel wireless capsule endoscopy video segmentation

NASA Astrophysics Data System (ADS)

Hwang, Sae; Celebi, M. Emre

2010-03-01

Wireless Capsule Endoscopy (WCE) is a relatively new technology (FDA approved in 2002) allowing doctors to view most of the small intestine. WCE transmits more than 50,000 video frames per examination, and the visual inspection of the resulting video is a highly time-consuming task even for the experienced gastroenterologist. Typically, a medical clinician spends one or two hours to analyze a WCE video. To reduce the assessment time, it is critical to develop a technique to automatically discriminate digestive organs and segment the video into shots, each of which consists of the same or similar frames. In this paper a multi-level WCE video segmentation methodology is presented to reduce the examination time.
Media use and sleep among boys with autism spectrum disorder, ADHD, or typical development.

PubMed

Engelhardt, Christopher R; Mazurek, Micah O; Sohl, Kristin

2013-12-01

The current study examined the relationships between media use (television, computer, and video games) and sleep among boys with autism spectrum disorder (ASD) compared with those with attention-deficit/hyperactivity disorder (ADHD) or with typical development (TD). Participants included parents of boys with ASD (n = 49), ADHD (n = 38), or TD (n = 41) (ages 8-17 years). Questionnaires assessed daily hours of media use, bedroom access to media, and average sleep hours per night. Bedroom media access was associated with less time spent sleeping per night, irrespective of diagnostic group. Bedroom access to a television or a computer was more strongly associated with reduced sleep among boys with ASD compared with boys with ADHD or TD. Multivariate models showed that, in addition to bedroom access, the amount of time spent playing video games was uniquely associated with less sleep among boys with ASD. In the ASD group only, the relationship between bedroom access to video games and reduced sleep was mediated by hours of video game play. The current results suggest that media-related variables may be an important consideration in understanding sleep disturbances in children with ASD. Further research is needed to better characterize the processes by which media use may affect sleep among individuals with ASD. Overall, the current findings suggest that screen-based media time and bedroom media access should be routinely assessed and may be important intervention targets when addressing sleep problems in children with ASD.

Real-time strategy video game experience and structural connectivity - A diffusion tensor imaging study.

PubMed

Kowalczyk, Natalia; Shi, Feng; Magnuski, Mikolaj; Skorko, Maciek; Dobrowolski, Pawel; Kossowski, Bartosz; Marchewka, Artur; Bielecki, Maksymilian; Kossut, Malgorzata; Brzezicka, Aneta

2018-06-20

Experienced video game players exhibit superior performance in visuospatial cognition when compared to non-players. However, very little is known about the relation between video game experience and structural brain plasticity. To address this issue, a direct comparison of the white matter brain structure in RTS (real-time strategy) video game players (VGPs) and non-players (NVGPs) was performed. We hypothesized that RTS experience can enhance connectivity within and between occipital and parietal regions, as these regions are likely to be involved in the spatial and visual abilities that are trained while playing RTS games. The possible influence of long-term RTS game play experience on brain structural connections was investigated using diffusion tensor imaging (DTI) and a region of interest (ROI) approach in order to describe the experience-related plasticity of white matter. Our results revealed significantly more total white matter connections between occipital and parietal areas and within occipital areas in RTS players compared to NVGPs. Additionally, the RTS group had an altered topological organization of their structural network, expressed in local efficiency within the occipito-parietal subnetwork. Furthermore, the positive association between network metrics and time spent playing RTS games suggests a close relationship between extensive, long-term RTS game play and neuroplastic changes. These results indicate that long-term and extensive RTS game experience induces alterations along axons that link structures of the occipito-parietal loop involved in spatial and visual processing.

Real-time quantum cascade laser-based infrared microspectroscopy in-vivo

NASA Astrophysics Data System (ADS)

Kröger-Lui, N.; Haase, K.; Pucci, A.; Schönhals, A.; Petrich, W.

2016-03-01

Infrared microscopy can be performed to observe dynamic processes on a microscopic scale. Fourier-transform infrared spectroscopy-based microscopes are bound to limitations regarding time resolution, which hampers their potential for imaging fast-moving systems. In this manuscript we present a quantum cascade laser-based infrared microscope which overcomes these limitations and readily achieves standard video frame rates. The capabilities of our setup are demonstrated by observing dynamical processes at their specific time scales: fermentation, slow-moving Amoeba proteus and fast-moving Caenorhabditis elegans. Mid-infrared sampling at intervals ranging from 30 min down to 20 ms is demonstrated.
[Violent video games and aggression: long-term impact and selection effects].

PubMed

Staude-Müller, Frithjof

2011-01-01

This study applied social-cognitive models of aggression in order to examine relations between video game use and aggressive tendencies and biases in social information processing. To this end, 499 secondary school students (aged 12-16) completed a survey on two occasions one year apart. Hierarchical regression analysis probed media effects and selection effects and included relevant contextual variables (parental monitoring of media consumption, impulsivity, and victimization). Results revealed that it was not the consumption of violent video games but rather an uncontrolled pattern of video game use that was associated with increasing aggressive tendencies. This increase was partly mediated by a hostile attribution bias in social information processing. The influence of aggressive tendencies on later video game consumption was also examined (selection path). Adolescents with aggressive traits intensified their video game behavior only in terms of their uncontrolled video game use. This was found even after controlling for sensation seeking and parental media control.

Computer-Based Reading Instruction for Young Children with Disabilities

ERIC Educational Resources Information Center

Lee, Yeunjoo; Vail, Cynthia O.

2005-01-01

This investigation examined the effectiveness of a computer program in teaching sight word recognition to four young children with developmental disabilities. The intervention program was developed through a formative evaluation process. It embedded a constant-time-delay procedure and involved sounds, video, text, and animations. Dependent…

Video Surveillance Captures Student Hand Hygiene Behavior, Reactivity to Observation, and Peer Influence in Kenyan Primary Schools

PubMed Central

Pickering, Amy J.; Blum, Annalise G.; Breiman, Robert F.; Ram, Pavani K.; Davis, Jennifer

2014-01-01

Background: In-person structured observation is considered the best approach for measuring hand hygiene behavior, yet is expensive, time consuming, and may alter behavior. Video surveillance could be a useful tool for objectively monitoring hand hygiene behavior if validated against current methods. Methods: Student hand cleaning behavior was monitored with video surveillance and in-person structured observation, both simultaneously and separately, at four primary schools in urban Kenya over a study period of 8 weeks. Findings: Video surveillance and in-person observation captured similar rates of hand cleaning (absolute difference <5%, p = 0.74). Video surveillance documented higher hand cleaning rates (71%) when at least one other person was present at the hand cleaning station, compared to when a student was alone (48%; rate ratio = 1.14 [95% CI 1.01-1.28]). Students increased hand cleaning rates during simultaneous video and in-person monitoring as compared to single-method monitoring, suggesting reactivity to each method of monitoring. This trend was documented at schools receiving a handwashing with soap intervention, but not at schools receiving a sanitizer intervention. Conclusion: Video surveillance of hand hygiene behavior yields results comparable to in-person observation among schools in a resource-constrained setting. Video surveillance also has certain advantages over in-person observation, including rapid data processing and the capability to capture new behavioral insights. Peer influence can significantly improve student hand cleaning behavior and, when possible, should be exploited in the design and implementation of school hand hygiene programs. PMID:24676389

Augmenting communication and decision making in the intensive care unit with a cardiopulmonary resuscitation video decision support tool: a temporal intervention study.

PubMed

McCannon, Jessica B; O'Donnell, Walter J; Thompson, B Taylor; El-Jawahri, Areej; Chang, Yuchiao; Ananian, Lillian; Bajwa, Ednan K; Currier, Paul F; Parikh, Mihir; Temel, Jennifer S; Cooper, Zara; Wiener, Renda Soylemez; Volandes, Angelo E

2012-12-01

Effective communication between intensive care unit (ICU) providers and families is crucial given the complexity of decisions made regarding goals of therapy. Using video images to supplement medical discussions is an innovative process to standardize and improve communication. In this six-month, quasi-experimental, pre-post intervention study we investigated the impact of a cardiopulmonary resuscitation (CPR) video decision support tool upon knowledge about CPR among surrogate decision makers for critically ill adults. We interviewed surrogate decision makers for patients aged 50 and over, using a structured questionnaire that included a four-question CPR knowledge assessment similar to those used in previous studies. Surrogates in the post-intervention arm viewed a three-minute video decision support tool about CPR before completing the knowledge assessment and completed questions about the perceived value of the video. We recruited 23 surrogates during the first three months (pre-intervention arm) and 27 surrogates during the latter three months of the study (post-intervention arm). Surrogates viewing the video had more knowledge about CPR (p = 0.008); average scores were 2.0 (SD 1.1) and 2.9 (SD 1.2) (out of a total of 4) in the pre-intervention and post-intervention arms. Surrogates who viewed the video were comfortable with its content (81% very) and 81% would recommend the video. CPR preferences for patients at the time of ICU discharge/death were distributed as follows: pre-intervention: full code 78%, DNR 22%; post-intervention: full code 59%, DNR 41% (p = 0.23).
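As a plausibility check on the reported p = 0.008, a two-sample Welch t-test on the summary statistics quoted in the abstract (means 2.0 vs 2.9, SDs 1.1 and 1.2, n = 23 and 27) lands at the same value; this assumes the authors used a comparable test:

    from scipy import stats

    # summary statistics from the abstract
    m1, s1, n1 = 2.0, 1.1, 23     # pre-intervention arm
    m2, s2, n2 = 2.9, 1.2, 27     # post-intervention arm (viewed the video)

    t, p = stats.ttest_ind_from_stats(m1, s1, n1, m2, s2, n2, equal_var=False)
    print(f"t = {t:.2f}, p = {p:.3f}")   # p of about 0.008, consistent with the record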
Overview of the DART project

DOE Office of Scientific and Technical Information (OSTI.GOV)

Berry, K.R.; Hansen, F.R.; Napolitano, L.M.

1992-01-01

DART (DSP Array for Reconfigurable Tasks) is a parallel architecture of two high-performance DSP (digital signal processing) chips with the flexibility to handle a wide range of real-time applications. Each of the 32-bit floating-point DSP processors in DART is programmable in a high-level language ("C" or Ada). We have added extensions to the real-time operating system used by DART in order to support parallel processing. The combination of high-level language programmability, a real-time operating system, and parallel processing support significantly reduces the development cost of application software for signal processing and control applications. We have demonstrated this capability by using DART to reconstruct images in the prototype VIP (Video Imaging Projectile) groundstation.
A customizable commercial miniaturized 320×256 indium gallium arsenide shortwave infrared camera

NASA Astrophysics Data System (ADS)

Huang, Shih-Che; O'Grady, Matthew; Groppe, Joseph V.; Ettenberg, Martin H.; Brubaker, Robert M.

2004-10-01

The design and performance of a commercial short-wave-infrared (SWIR) InGaAs microcamera engine is presented. The 0.9-to-1.7 micron SWIR imaging system consists of a room-temperature-TEC-stabilized, 320x256 (25 μm pitch) InGaAs focal plane array (FPA) and a high-performance, highly customizable image-processing set of electronics. The detectivity, D*, of the system is greater than 10^13 cm·√Hz/W at 1.55 μm, and this sensitivity may be adjusted in real time over 100 dB. It features snapshot-mode integration with a minimum exposure time of 130 μs. The digital video processor provides real-time pixel-to-pixel, 2-point dark-current subtraction and non-uniformity compensation along with defective-pixel substitution. Other features include automatic gain control (AGC), gamma correction, 7 preset configurations, adjustable exposure time, external triggering, and windowing. The windowing feature is highly flexible; the region of interest (ROI) may be placed anywhere on the imager and can be varied at will. Windowing allows for high-speed readout enabling such applications as target acquisition and tracking; for example, a 32x32 ROI window may be read out at over 3500 frames per second (fps). Output video is provided as EIA170-compatible analog, or as 12-bit CameraLink-compatible digital. All the above features are accomplished in a small volume (< 28 cm3), low weight (< 70 g), and with low power consumption (< 1.3 W at room temperature) using this new microcamera engine. Video processing is based on a field-programmable gate array (FPGA) platform with a soft-embedded processor that allows for ease of integration/addition of customer-specific algorithms, processes, or design requirements. The camera was developed with high-performance, space-restricted, power-conscious applications in mind, such as robotic or UAV deployment.

Public online information about tinnitus: A cross-sectional study of YouTube videos.

PubMed

Basch, Corey H; Yin, Jingjing; Kollia, Betty; Adedokun, Adeyemi; Trusty, Stephanie; Yeboah, Felicia; Fung, Isaac Chun-Hai

2018-01-01

To examine the information about tinnitus contained in different video sources on YouTube. The 100 most widely viewed tinnitus videos were manually coded. Firstly, we identified the sources of upload: consumer, professional, television-based clip, and internet-based clip. Secondly, the videos were analyzed to ascertain what pertinent information they contained from a current National Institute on Deafness and Other Communication Disorders fact sheet. Of the videos, 42 were consumer-generated, 33 from media, and 25 from professionals. Collectively, the 100 videos were viewed almost 9 million times. The odds of mentioning "objective tinnitus" in professional videos were 9.58 times those from media sources [odds ratio (OR) = 9.58; 95% confidence interval (CI): 1.94, 47.42; P = 0.01], whereas these odds in consumer videos were 51% of media-generated videos (OR = 0.51; 95% CI: 0.20, 1.29; P = 0.16). The odds that the purpose of a video was to sell a product or service were nearly the same for both consumer and professional videos. Consumer videos were found to be 4.33 times as likely to carry a theme about an individual's own experience with tinnitus (OR = 4.33; 95% CI: 1.62, 11.63; P = 0.004) as media videos. Of the top 100 viewed videos on tinnitus, most were uploaded by consumers, sharing individuals' experiences. Actions are needed to make scientific medical information more prominently available and accessible on YouTube and other social media.
  453. Public Online Information About Tinnitus: A Cross-Sectional Study of YouTube Videos

    PubMed Central

    Basch, Corey H.; Yin, Jingjing; Kollia, Betty; Adedokun, Adeyemi; Trusty, Stephanie; Yeboah, Felicia; Fung, Isaac Chun-Hai

    2018-01-01

    Purpose: To examine the information about tinnitus contained in different video sources on YouTube. Materials and Methods: The 100 most widely viewed tinnitus videos were manually coded. First, we identified the sources of upload: consumer, professional, television-based clip, and internet-based clip. Second, the videos were analyzed to ascertain what pertinent information they contained from a current National Institute on Deafness and Other Communication Disorders fact sheet. Results: Of the videos, 42 were consumer-generated, 33 were from media, and 25 were from professionals. Collectively, the 100 videos were viewed almost 9 million times. The odds of mentioning "objective tinnitus" in professional videos were 9.58 times those of media sources [odds ratio (OR) = 9.58; 95% confidence interval (CI): 1.94, 47.42; P = 0.01], whereas these odds in consumer videos were 51% of those of media-generated videos (OR = 0.51; 95% CI: 0.20, 1.29; P = 0.16). The odds that the purpose of a video was to sell a product or service were nearly the same for consumer and professional videos. Consumer videos were 4.33 times as likely as media videos to carry a theme about an individual's own experience with tinnitus (OR = 4.33; 95% CI: 1.62, 11.63; P = 0.004). Conclusions: Of the top 100 viewed videos on tinnitus, most were uploaded by consumers sharing individuals' experiences. Actions are needed to make scientific medical information more prominently available and accessible on YouTube and other social media. PMID:29457600

  454. The Effect of Normalization in Violence Video Classification Performance

    NASA Astrophysics Data System (ADS)

    Ali, Ashikin; Senan, Norhalina

    2017-08-01

    Data pre-processing is an important part of data mining, and normalization is a pre-processing stage for many kinds of problem, especially in video classification. Video classification is challenging because of heterogeneous content, large variations in video quality, and the complex semantic meanings of the concepts involved. To regularize these problems, a thorough pre-processing stage that includes normalization helps the robustness of classification performance. Normalization scales all numeric variables into a certain range so that they are more meaningful for the later phases of the data mining techniques used. This paper therefore examines the effect of two normalization techniques, namely Min-max normalization and Z-score, on violence video classification, measured as the classification rate of a multi-layer perceptron (MLP) classifier. Using Min-max normalization to the range [0,1], the result shows almost 98% accuracy, while Min-max normalization to the range [-1,1] gives 59% accuracy and Z-score gives 50%.
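For reference, the rescalings compared in the record above are min-max scaling, x' = (x - min)/(max - min), optionally mapped onto [-1, 1], and the z-score, z = (x - mean)/std. A minimal NumPy sketch, independent of the paper's dataset:

```python
import numpy as np

def min_max(x, lo=0.0, hi=1.0):
    """Scale features columnwise into the range [lo, hi]."""
    x = np.asarray(x, dtype=float)
    mn, mx = x.min(axis=0), x.max(axis=0)
    return lo + (x - mn) / (mx - mn) * (hi - lo)

def z_score(x):
    """Standardize features columnwise to zero mean, unit variance."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean(axis=0)) / x.std(axis=0)

features = np.array([[1.0, 200.0], [2.0, 400.0], [4.0, 800.0]])
print(min_max(features))             # [0, 1], the best-performing setup
print(min_max(features, -1.0, 1.0))  # [-1, 1]
print(z_score(features))
```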
  455. Data Processing of LAPAN-A3 Thermal Imager

    NASA Astrophysics Data System (ADS)

    Hartono, R.; Hakim, P. R.; Syafrudin, A. H.

    2018-04-01

    As an experimental microsatellite, the LAPAN-A3/IPB satellite carries an experimental thermal imager, a micro-bolometer, to observe Earth surface temperature in horizon observation. The imager data are transmitted from the satellite to the ground station as an S-band analog video signal and then processed by the ground station into sequences of 8-bit enhanced, contrast-adjusted images. Data processing for the LAPAN-A3/IPB thermal imager is more difficult than for a visual digital camera, especially for mosaicking and classification purposes. This research describes a simple mosaicking and classification process for the LAPAN-A3/IPB thermal imager based on several videos produced by the imager. The results show that stitching in Adobe Photoshop produces excellent output but can only cover a small area, while a manual approach in the ImageJ software produces good output but requires a lot of work and is time-consuming. Mosaicking by image cross-correlation in Matlab offers an alternative solution that can process a significantly bigger area in significantly less processing time, although the quality produced is not as good as the mosaics of the other two methods. The simple classification experiments show that the thermal images can separate three distinct objects: clouds, sea, and land surface. However, the algorithm fails to classify other objects, which might be caused by distortions in the images. All of these results can serve as a reference for the development of the thermal imager on the LAPAN-A4 satellite.
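Cross-correlation mosaicking of the kind described above estimates the translation between overlapping frames from the peak of a correlation surface. A minimal NumPy sketch using the FFT-based phase-correlation variant (our choice of variant; the record says only that Matlab image cross-correlation was used):

```python
import numpy as np

def estimate_shift(a, b):
    """Estimate the cyclic (row, col) shift d such that image a is
    approximately image b translated by d, via phase correlation."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = A * np.conj(B)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks past the midpoint to negative shifts.
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
frame = rng.random((128, 128))
shifted = np.roll(frame, (5, -3), axis=(0, 1))  # simulate overlap offset
print(estimate_shift(shifted, frame))  # -> (5, -3)
```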
  456. Can training in a real-time strategy video game attenuate cognitive decline in older adults?

    PubMed

    Basak, Chandramallika; Boot, Walter R; Voss, Michelle W; Kramer, Arthur F

    2008-12-01

    Declines in various cognitive abilities, particularly executive control functions, are observed in older adults. An important goal of cognitive training is to slow or reverse these age-related declines. However, opinion is divided in the literature regarding whether cognitive training can engender transfer to a variety of cognitive skills in older adults. In the current study, the authors trained older adults in a real-time strategy video game for 23.5 hr in an effort to improve their executive functions. A battery of cognitive tasks, including tasks of executive control and visuospatial skills, was administered before, during, and after video-game training. The trainees improved significantly in the measures of game performance. They also improved significantly more than the control participants in executive control functions, such as task switching, working memory, visual short-term memory, and reasoning. Individual differences in changes in game performance were correlated with improvements in task switching. The study has implications for the enhancement of executive control processes of older adults. Copyright (c) 2009 APA, all rights reserved.

  457. Significantly improved precision of cell migration analysis in time-lapse video microscopy through use of a fully automated tracking system

    PubMed Central

    2010-01-01

    Background: Cell motility is a critical parameter in many physiological as well as pathophysiological processes. In time-lapse video microscopy, manual cell tracking remains the most common method of analyzing the migratory behavior of cell populations. In addition to being labor-intensive, this method is susceptible to user-dependent errors regarding the selection of "representative" subsets of cells and the manual determination of precise cell positions. Results: We have quantitatively analyzed these error sources, demonstrating that manual cell tracking of pancreatic cancer cells leads to mis-calculation of migration rates of up to 410%. In order to provide objective measurements of cell migration rates, we have employed multi-target tracking technologies commonly used in radar applications to develop a fully automated cell identification and tracking system suitable for high-throughput screening of video sequences of unstained living cells. Conclusion: We demonstrate that our automatic multi-target tracking system identifies cell objects, follows individual cells, and computes migration rates with high precision, clearly outperforming manual procedures. PMID:20377897
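In its simplest form, the frame-to-frame association step of a multi-target tracker like the one above can be written as greedy nearest-neighbour matching of detected cell centroids between consecutive frames, with a gating distance to reject implausible jumps. The sketch below is a bare-bones stand-in for the paper's radar-style tracker, with an illustrative gating value; migration rates then follow from the matched displacements divided by the frame interval:

```python
import numpy as np

def associate(prev_pts, curr_pts, max_dist=20.0):
    """Greedily match each previous centroid to the nearest unclaimed
    centroid in the current frame; unmatched tracks count as lost.
    Returns a list of (prev_index, curr_index) pairs."""
    matches, claimed = [], set()
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(curr_pts - p, axis=1)
        for j in np.argsort(d):
            if j not in claimed and d[j] <= max_dist:  # gating threshold
                matches.append((i, int(j)))
                claimed.add(int(j))
                break
    return matches

prev_pts = np.array([[10.0, 10.0], [50.0, 40.0]])
curr_pts = np.array([[52.0, 43.0], [12.0, 11.0]])
print(associate(prev_pts, curr_pts))  # -> [(0, 1), (1, 0)]
```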
  458. Development and assessment of an evidence-based prostate cancer intervention programme for black men: the W.O.R.D. on prostate cancer video.

    PubMed

    Odedina, Folakemi; Oluwayemisi, Awoyemi O; Pressey, Shannon; Gaddy, Samuel; Egensteiner, Eva; Ojewale, Ezekiel O; Moline, Olivia Myra; Martin, Chloe Marie

    2014-01-01

    In spite of the numerous prostate cancer (CaP) intervention programmes that have been implemented to address the disparities experienced by black men, CaP prevention, risk reduction, and early detection behaviours remain low among black men. The lack of formal theoretical frameworks to guide the development and implementation of interventions has been recognised as one of the primary reasons for the failure of health interventions. Members of the Florida Prostate Cancer Health Disparity (CaPHD) group employed the Personal Model of Prostate Cancer Disparity (PIPCaD) model and the Health Communication Process Model to plan, implement, and evaluate an intervention programme, the 'Working through Outreach to Reduce Disparity (W.O.R.D. on Prostate Cancer)' video for black men. The location for the video was a barbershop, a popular setting for the targeted group. The video starred CaP survivors, CaP advocates, a radio personality, and barbers. In addition, remarks were provided by a CaP scientist, a urologist, a CaP advocate, a former legislator, and a minister. The W.O.R.D. video was developed to assist black men in meeting the Healthy People 2020 goal for the United States of America. The efficacy of the W.O.R.D. video was successfully established among 143 black men in Florida. Exposure to the video was found to statistically increase CaP knowledge and intention to participate in CaP screening. Furthermore, exposure to the video statistically decreased participants' perception of the number of factors contributing to decisional uncertainty about CaP screening. Participants were highly satisfied with the video content and rated the quality of the video as very good. Participants also rated the video as credible, informative, useful, relevant, understandable, not too time-consuming, clear, and interesting.

  459. Development and assessment of an evidence-based prostate cancer intervention programme for black men: the W.O.R.D. on prostate cancer video

    PubMed Central

    Odedina, Folakemi; Oluwayemisi, Awoyemi O; Pressey, Shannon; Gaddy, Samuel; Egensteiner, Eva; Ojewale, Ezekiel O; Moline, Olivia Myra; Martin, Chloe Marie

    2014-01-01

    In spite of the numerous prostate cancer (CaP) intervention programmes that have been implemented to address the disparities experienced by black men, CaP prevention, risk reduction, and early detection behaviours remain low among black men. The lack of formal theoretical frameworks to guide the development and implementation of interventions has been recognised as one of the primary reasons for the failure of health interventions. Members of the Florida Prostate Cancer Health Disparity (CaPHD) group employed the Personal Model of Prostate Cancer Disparity (PIPCaD) model and the Health Communication Process Model to plan, implement, and evaluate an intervention programme, the 'Working through Outreach to Reduce Disparity (W.O.R.D. on Prostate Cancer)' video for black men. The location for the video was a barbershop, a popular setting for the targeted group. The video starred CaP survivors, CaP advocates, a radio personality, and barbers. In addition, remarks were provided by a CaP scientist, a urologist, a CaP advocate, a former legislator, and a minister. The W.O.R.D. video was developed to assist black men in meeting the Healthy People 2020 goal for the United States of America. The efficacy of the W.O.R.D. video was successfully established among 143 black men in Florida. Exposure to the video was found to statistically increase CaP knowledge and intention to participate in CaP screening. Furthermore, exposure to the video statistically decreased participants' perception of the number of factors contributing to decisional uncertainty about CaP screening. Participants were highly satisfied with the video content and rated the quality of the video as very good. Participants also rated the video as credible, informative, useful, relevant, understandable, not too time-consuming, clear, and interesting. PMID:25228916
  460. Evidence-Based Scripted Videos on Handling Student Misbehavior: The Development and Evaluation of Video Cases for Teacher Education

    ERIC Educational Resources Information Center

    Piwowar, Valentina; Barth, Victoria L.; Ophardt, Diemut; Thiel, Felicitas

    2018-01-01

    Scripted videos are based on a screenplay and are a viable and widely used tool for learning. Yet, reservations exist due to limited authenticity and high production costs. The present paper comprehensively describes a video production process for scripted videos on the topic of student misbehavior in the classroom. In a three step…

  461. Timing Is Everything: One Teacher's Exploration of the Best Time to Use Visual Media in a Science Unit

    ERIC Educational Resources Information Center

    Drury, Debra

    2006-01-01

    Kids today are growing up with televisions, movies, videos and DVDs, so it's logical to assume that this type of media could be motivating and used to great effect in the classroom. But at what point should film and other visual media be used? Are there times in the inquiry process when showing a film or incorporating other visual media is more…
  462. HPC enabled real-time remote processing of laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Ronaghi, Zahra; Sapra, Karan; Izard, Ryan; Duffy, Edward; Smith, Melissa C.; Wang, Kuang-Ching; Kwartowitz, David M.

    2016-03-01

    Laparoscopic surgery is a minimally invasive surgical technique. The benefit of small incisions comes with the disadvantage of limited visualization of subsurface tissues. Image-guided surgery (IGS) uses pre-operative and intra-operative images to map subsurface structures. One particular laparoscopic system is the da Vinci Si robotic surgical system, whose video streams generate approximately 360 megabytes of data per second. Real-time processing of this large data stream on a bedside PC, in a single- or dual-node setup, has become challenging, and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical rate of 30 frames per second, each 11.9 MB video frame must be processed by a server and returned within 1/30th of a second. We have implemented and compared the performance of compression, segmentation and registration algorithms on Clemson's Palmetto supercomputer using dual NVIDIA K40 GPUs per node. Our computing framework will also enable reliability through replication of computation. We securely transfer the files to remote HPC clusters using an OpenFlow-based network service, Steroid OpenFlow Service (SOS), which can increase the performance of large data transfers over long-distance, high-bandwidth networks. As a result, using a high-speed OpenFlow-based network to access computing clusters with GPUs will improve surgical procedures by providing real-time processing of medical images and laparoscopic data.
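The figures quoted above fix the real-time budget: at 30 fps, each 11.9 MB frame has a 33.3 ms round trip that must cover transfer in both directions plus compute. A small sketch of that arithmetic (the link speeds are illustrative, not the paper's measurements):

```python
FRAME_MB = 11.9           # quoted frame size
FPS = 30                  # target rate
budget_ms = 1000.0 / FPS  # per-frame round-trip budget: ~33.3 ms

def transfer_ms(frame_mb: float, link_gbps: float) -> float:
    """One-way transfer time for a frame over a link, ignoring latency."""
    return frame_mb * 8 / (link_gbps * 1000) * 1000

for gbps in (1, 10, 40):
    t = transfer_ms(FRAME_MB, gbps)
    print(f"{gbps:>2} Gb/s link: {t:6.2f} ms one way, "
          f"{budget_ms - 2 * t:6.2f} ms left for compute")
```

At 1 Gb/s the transfer alone already overruns the budget, which is why the high-bandwidth OpenFlow path matters.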
  463. Development of High-speed Visualization System of Hypocenter Data Using CUDA-based GPU computing

    NASA Astrophysics Data System (ADS)

    Kumagai, T.; Okubo, K.; Uchida, N.; Matsuzawa, T.; Kawada, N.; Takeuchi, N.

    2014-12-01

    After the Great East Japan Earthquake on March 11, 2011, intelligent visualization of seismic information has become important for understanding earthquake phenomena. At the same time, the quantity of seismic data has grown enormous with the progress of high-accuracy observation networks; many parameters (e.g., positional information, origin time, magnitude) must be handled to display seismic information efficiently. Therefore, high-speed processing of data and image information is necessary for handling enormous amounts of seismic data. Recently, the GPU (Graphics Processing Unit) has been used as an acceleration tool for data processing and calculation in various study fields; this movement is called GPGPU (General-Purpose computing on GPUs). In the last few years the performance of GPUs has kept improving rapidly, so GPU computing now provides a high-performance computing environment at a lower cost than before. Moreover, the GPU has an inherent advantage for visualizing processed data, because it is originally an architecture for graphics processing: in GPU computing, the processed data are already stored in video memory. Therefore, drawing information can be written directly to the VRAM on the video card by combining CUDA with a graphics API. In this study, we employ CUDA together with OpenGL and/or DirectX to realize a full-GPU implementation. This makes it possible to write drawing information to the VRAM without PCIe bus data transfers, enabling high-speed processing of seismic data. The present study examines this GPU computing-based high-speed visualization and its feasibility for a high-speed visualization system of hypocenter data.

  464. 47 CFR 64.617 - Neutral Video Communication Service Platform.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Neutral Video Communication Service Platform... Related Customer Premises Equipment for Persons With Disabilities § 64.617 Neutral Video Communication... Neutral Video Communication Service Platform to process VRS calls. Each VRS CA service provider shall be...

  465. 47 CFR 64.617 - Neutral Video Communication Service Platform.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Neutral Video Communication Service Platform... Related Customer Premises Equipment for Persons With Disabilities § 64.617 Neutral Video Communication... Neutral Video Communication Service Platform to process VRS calls. Each VRS CA service provider shall be...
  466. A Video Game for Learning Brain Evolution: A Resource or a Strategy?

    ERIC Educational Resources Information Center

    Barbosa Gomez, Luisa Fernanda; Bohorquez Sotelo, Maria Cristina; Roja Higuera, Naydu Shirley; Rodriguez Mendoza, Brigitte Julieth

    2016-01-01

    Learning resources are part of the educational process of students. However, how do video games act as learning resources in a population that has not selected virtual formation as its main methodology? The aim of this study was to identify the influence of a video game on the learning process of brain evolution. For this purpose, the opinions…

  467. Action video games do not improve the speed of information processing in simple perceptual tasks.

    PubMed

    van Ravenzwaaij, Don; Boekel, Wouter; Forstmann, Birte U; Ratcliff, Roger; Wagenmakers, Eric-Jan

    2014-10-01

    Previous research suggests that playing action video games improves performance on sensory, perceptual, and attentional tasks. For instance, Green, Pouget, and Bavelier (2010) used the diffusion model to decompose data from a motion detection task and estimate the contribution of several underlying psychological processes. Their analysis indicated that playing action video games leads to faster information processing, reduced response caution, and no difference in motor responding. Because perceptual learning is generally thought to be highly context-specific, this transfer from gaming is surprising and warrants corroborative evidence from a large-scale training study. We conducted 2 experiments in which participants practiced either an action video game or a cognitive game in 5 separate, supervised sessions. Prior to each session and following the last session, participants performed a perceptual discrimination task. In the second experiment, we included a third condition in which no video games were played at all. Behavioral data and diffusion model parameters showed similar practice effects for the action gamers, the cognitive gamers, and the nongamers and suggest that, in contrast to earlier reports, playing action video games does not improve the speed of information processing in simple perceptual tasks.
  468. Action Video Games Do Not Improve the Speed of Information Processing in Simple Perceptual Tasks

    PubMed Central

    van Ravenzwaaij, Don; Boekel, Wouter; Forstmann, Birte U.; Ratcliff, Roger; Wagenmakers, Eric-Jan

    2015-01-01

    Previous research suggests that playing action video games improves performance on sensory, perceptual, and attentional tasks. For instance, Green, Pouget, and Bavelier (2010) used the diffusion model to decompose data from a motion detection task and estimate the contribution of several underlying psychological processes. Their analysis indicated that playing action video games leads to faster information processing, reduced response caution, and no difference in motor responding. Because perceptual learning is generally thought to be highly context-specific, this transfer from gaming is surprising and warrants corroborative evidence from a large-scale training study. We conducted 2 experiments in which participants practiced either an action video game or a cognitive game in 5 separate, supervised sessions. Prior to each session and following the last session, participants performed a perceptual discrimination task. In the second experiment, we included a third condition in which no video games were played at all. Behavioral data and diffusion model parameters showed similar practice effects for the action gamers, the cognitive gamers, and the nongamers and suggest that, in contrast to earlier reports, playing action video games does not improve the speed of information processing in simple perceptual tasks. PMID:24933517
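The diffusion model referred to in these two records describes a noisy accumulation of evidence toward one of two response boundaries: the drift rate captures the speed of information processing and the boundary separation captures response caution. A minimal simulation sketch (the parameter values are arbitrary illustrations, not fitted estimates):

```python
import random

def diffusion_trial(drift=0.3, boundary=1.0, dt=0.001, noise=1.0):
    """Simulate one two-boundary drift-diffusion trial.
    Returns (choice, reaction_time_s): choice is 1 for the upper
    (correct) boundary, 0 for the lower (error) boundary."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + random.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return (1 if x >= boundary else 0), t

random.seed(1)
trials = [diffusion_trial() for _ in range(2000)]
acc = sum(c for c, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
print(f"accuracy {acc:.2f}, mean RT {mean_rt:.2f} s")
```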
  469. Photo-acoustic and video-acoustic methods for sensing distant sound sources

    NASA Astrophysics Data System (ADS)

    Slater, Dan; Kozacik, Stephen; Kelmelis, Eric

    2017-05-01

    Long-range telescopic video imagery of distant terrestrial scenes, aircraft, rockets and other aerospace vehicles can be a powerful observational tool. But what about the associated acoustic activity? A new technology, Remote Acoustic Sensing (RAS), may provide a method to remotely listen to the acoustic activity near these distant objects. Local acoustic activity sometimes weakly modulates the ambient illumination in a way that can be remotely sensed. RAS is a new type of microphone that separates an acoustic transducer into two spatially separated components: 1) a naturally formed in situ acousto-optic modulator (AOM) located within the distant scene and 2) a remote sensing readout device that recovers the distant audio. These two elements are passively coupled over long distances at the speed of light by naturally occurring ambient light energy or other electromagnetic fields. Stereophonic, multichannel, and acoustic beam-forming configurations are all possible with RAS techniques, and when combined with high-definition video imagery they can help to provide a more cinema-like, immersive viewing experience. A practical implementation of a remote acousto-optic readout device can be a challenging engineering problem. The acoustic influence on the optical signal is generally weak, often rides on a strong bias term, and is further degraded by atmospheric seeing turbulence. In this paper, we consider two fundamentally different optical readout approaches: 1) a low-pixel-count photodiode-based RAS photoreceiver and 2) audio extraction directly from a video stream. Most of our RAS experiments to date have used the first method for reasons of performance and simplicity. But there are potential advantages to extracting audio directly from a video stream, including the straightforward ability to work with multiple AOMs (useful for acoustic beam forming), simpler optical configurations, and a potential ability to use certain preexisting video recordings. However, doing so requires overcoming significant limitations, typically including much lower sample rates, reduced sensitivity and dynamic range, more expensive video hardware, and the need for sophisticated video processing. The ATCOM real-time image processing software environment provides many of the capabilities needed for researching video-acoustic signal extraction. ATCOM currently is a powerful tool for the visual enhancement of telescopic views distorted by atmospheric turbulence. In order to explore the potential of acoustic signal recovery from video imagery, we modified ATCOM to extract audio waveforms from the same telescopic video sources. In this paper, we demonstrate and compare both readout techniques for several aerospace test scenarios to better show where each has advantages.
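Extracting audio directly from a video stream, the second readout approach above, amounts in its simplest form to treating the mean brightness of each frame's region of interest as one audio sample, so the audio sample rate equals the video frame rate. A toy NumPy sketch of that idea, with synthetic frames standing in for real telescope video:

```python
import numpy as np

def audio_from_frames(frames):
    """One audio sample per frame: mean ROI brightness, with the DC
    bias removed. Sample rate equals the video frame rate."""
    levels = frames.reshape(len(frames), -1).mean(axis=1)
    return levels - levels.mean()  # strip the strong bias term

# Synthetic 1000-fps clip of 8x8 frames whose illumination is weakly
# modulated by a 120 Hz tone, mimicking an in-scene acousto-optic modulator.
fps, n = 1000, 2000
t = np.arange(n) / fps
modulation = 1.0 + 0.01 * np.sin(2 * np.pi * 120 * t)
frames = np.ones((n, 8, 8)) * modulation[:, None, None]

audio = audio_from_frames(frames)
spectrum = np.abs(np.fft.rfft(audio))
print(np.fft.rfftfreq(n, 1 / fps)[spectrum.argmax()])  # -> 120.0 Hz
```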
  470. Content-based TV sports video retrieval using multimodal analysis

    NASA Astrophysics Data System (ADS)

    Yu, Yiqing; Liu, Huayong; Wang, Hongbin; Zhou, Dongru

    2003-09-01

    In this paper, we propose content-based video retrieval, which is a kind of retrieval by semantic content. Because video data is composed of multimodal information streams such as visual, auditory and textual streams, we describe a strategy that uses multimodal analysis for automatically parsing sports video. The paper first defines the basic structure of a sports video database system, and then introduces a new approach that integrates visual stream analysis, speech recognition, speech signal processing and text extraction to realize video retrieval. The experimental results for TV sports video of football games indicate that the multimodal analysis is effective for video retrieval, allowing users to quickly browse tree-like video clips or to input keywords within a predefined domain.

  471. A cognitive approach for design of a multimedia informed consent video and website in pediatric research.

    PubMed

    Antal, Holly; Bunnell, H Timothy; McCahan, Suzanne M; Pennington, Chris; Wysocki, Tim; Blake, Kathryn V

    2017-02-01

    Poor participant comprehension of research procedures following the conventional face-to-face consent process for biomedical research is common. We describe the development of a multimedia informed consent video and website that incorporates cognitive strategies to enhance comprehension of study-related material directed to parents and adolescents. A multidisciplinary team was assembled for development of the video and website that included human subjects professionals; psychologist researchers; institutional video and web developers; bioinformaticians and programmers; and parent and adolescent stakeholders. Five learning strategies, namely Sensory-Modality view, Coherence, Signaling, Redundancy, and Personalization, were integrated into a 15-min video and website material that describes a clinical research trial. A diverse team collaborated extensively over 15 months to design and build a multimedia platform for obtaining parental permission and adolescent assent for participation in an asthma clinical trial. Examples of the learning principles included having a narrator describe what was being viewed on the video (sensory modality); eliminating unnecessary text and graphics (coherence); having the initial portion of the video explain the sections of the video to be viewed (signaling); avoiding simultaneous presentation of text and graphics (redundancy); and having a consistent narrator throughout the video (personalization). Existing conventional and multimedia processes for obtaining research informed consent have not actively incorporated basic principles of human cognition and learning in their design and implementation. The present paper illustrates how this can be achieved, setting the stage for rigorous evaluation of potential benefits such as improved comprehension, satisfaction with the consent process, and completion of research objectives. New consent strategies that take an integrated cognitive approach need to be developed and tested in controlled trials. Copyright © 2017 Elsevier Inc. All rights reserved.
  472. Webcam Stories

    ERIC Educational Resources Information Center

    Clidas, Jeanne

    2011-01-01

    Stories, steeped in science content and full of specific information, can be brought into schools and homes through the power of live video streaming. Video streaming refers to the process of viewing video over the internet. These videos may be live (webcam feeds) or recorded. These stories are engaging and inspiring. They offer opportunities to…

  473. How do video-based demonstration assessment tasks affect problem-solving process, test anxiety, chemistry anxiety and achievement in general chemistry students?

    NASA Astrophysics Data System (ADS)

    Terrell, Rosalind Stephanie

    2001-12-01

    Because paper-and-pencil testing provides limited knowledge about what students know about chemical phenomena, we have developed video-based demonstrations to broaden the measurement of student learning. For example, students might be shown a video demonstrating equilibrium shifts. Two methods for viewing equilibrium shifts are changing the concentration of the reactants and changing the temperature of the system. The students are required to combine the data collected from the video and their knowledge of chemistry to determine which way the equilibrium shifts. Video-based demonstrations are important techniques for measuring student learning because they require students to apply conceptual knowledge learned in class to a specific chemical problem. This study explores how video-based demonstration assessment tasks affect problem-solving processes, test anxiety, chemistry anxiety and achievement in general chemistry students. Several instruments were used to determine students' knowledge about chemistry and students' test and chemistry anxiety before and after treatment. Think-aloud interviews were conducted to determine students' problem-solving processes after treatment. The treatment group was compared to a control group and a group watching video demonstrations. After treatment, students' anxiety increased and achievement decreased, and no significant differences were found in students' problem-solving processes. These negative findings may be attributed to several factors that are explored in this study.
  474. Redesigning Schools for 21st Century Technologies: A Middle School with the Power to Improve.

    ERIC Educational Resources Information Center

    Van Dam, Janet M.

    1994-01-01

    Describes the processes involved in redesigning and renovating Power Middle School (Michigan) for current and future educational technology, particularly for the media center. Topics discussed include planning; time management; wiring infrastructure; voice and video networks; teacher and student multimedia production rooms; and communications…

  475. Collaborative Estimation in Distributed Sensor Networks

    ERIC Educational Resources Information Center

    Kar, Swarnendu

    2013-01-01

    Networks of smart ultra-portable devices are already indispensable in our lives, augmenting our senses and connecting our lives through real-time processing and communication of sensory (e.g., audio, video, location) inputs. Though usually hidden from the user's sight, the engineering of these devices involves fierce tradeoffs between energy…
  476. Illustrating Geology With Customized Video in Introductory Geoscience Courses

    NASA Astrophysics Data System (ADS)

    Magloughlin, J. F.

    2008-12-01

    For the past several years, I have been creating short videos for use in large-enrollment introductory physical geology classes. The motivation for this project included 1) the lack of appropriate depth in existing videos, 2) engagement of non-science students, 3) student indifference to traditional textbooks, 4) a desire to share the visual splendor of geology through virtual field trips, and 5) a desire to meld photography, animation, narration, and videography in self-contained experiences. These (HD) videos are information-intensive but short, allowing a focus on relatively narrow topics from numerous subdisciplines, incorporation into lectures to help create variety while minimally interrupting flow and holding students' attention, and manageable file sizes. Nearly all involve one or more field locations, including sites throughout the western and central continental U.S., as well as Hawaii, Italy, New Zealand, and Scotland. The limited scope of the project and the motivations mentioned preclude a comprehensive treatment of geology. Instead, the videos address geologic processes, locations, features, and interactions with humans. They have been made available via DVD and on-line streaming. Such a project requires an array of video and audio equipment and software, a broad knowledge of geology, very good computing power, adequate time, creativity, a substantial travel budget, liability insurance, clarification of the separation (or non-separation) between such a project and other responsibilities, and, preferably but not essentially, the support of one's supervisor or academic unit. Involving students in such projects entails risks, but involving the necessary technical expertise is virtually unavoidable. In my own courses, some videos are used in class and/or made available on-line as simply another aspect of the educational experience. Student response has been overwhelmingly positive, particularly when expectations of students regarding the content of the videos are made clear and appropriate materials accompany the videos. Retention of primary concepts presented within the videos is at least as high as for ordinary lecture material, and student questions reference the videos more than any other matter. Use of the videos has created more variety in the course, a better connection to real-world geology, and a more palatable experience for students who increasingly describe themselves as visual learners.

  477. Psychological Outcomes After a Sexual Assault Video Intervention: A Randomized Trial.

    PubMed

    Miller, Katherine E; Cranston, Christopher C; Davis, Joanne L; Newman, Elana; Resnick, Heidi

    2015-01-01

    Sexual assault survivors are at risk for a number of mental and physical health problems, including posttraumatic stress disorder and anxiety. Unfortunately, few seek physical or mental health services after a sexual assault (Price, Davidson, Ruggiero, Acierno, & Resnick, 2014). Mitigating the impact of sexual assault via early interventions is a growing and important area of research. This study adds to this literature by replicating and expanding previous studies (e.g., Resnick, Acierno, Amstadter, Self-Brown, & Kilpatrick, 2007) examining the efficacy of a brief video-based intervention that provides psychoeducation and modeling of coping strategies to survivors at the time of a sexual assault nurse examination. Female sexual assault survivors receiving forensic examinations were randomized to standard care or to the video intervention condition (N = 164). The participants completed mental health assessments 2 weeks (n = 69) and 2 months (n = 74) after the examination. Analyses of covariance revealed that women in the video condition had significantly fewer anxiety symptoms at the follow-up assessments. In addition, within the video condition, survivors reporting no previous sexual assault history reported significantly fewer posttraumatic stress symptoms 2 weeks after the examination than those with a prior assault history. Forensic nurses have the unique opportunity to intervene immediately after a sexual assault. This brief video intervention is a cost-effective tool to aid with that process.
  478. Augmented Virtuality: A Real-time Process for Presenting Real-world Visual Sensory Information in an Immersive Virtual Environment for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    McFadden, D.; Tavakkoli, A.; Regenbrecht, J.; Wilson, B.

    2017-12-01

    Virtual Reality (VR) and Augmented Reality (AR) applications have recently seen impressive growth, thanks to the advent of commercial Head Mounted Displays (HMDs). This new visualization era has opened the possibility of presenting researchers from multiple disciplines with data visualization techniques not possible via traditional 2D screens. In a purely VR environment, researchers are presented with the visual data in a virtual environment, whereas in a purely AR application, a virtual object is projected into the real world with which researchers can interact. There are several limitations to purely VR or AR applications in the context of remote planetary exploration. For example, in a purely VR environment, the contents of the planet surface (e.g., rocks, terrain, or other features) must be created off-line from a multitude of images using image-processing techniques to generate the 3D mesh data that will populate the virtual surface. This process usually takes a tremendous amount of computational resources and cannot be delivered in real time. As an alternative, video frames may be superimposed on the virtual environment to save processing time, but such rendered video frames lack 3D visual information, i.e., depth. In this paper, we present a technique that utilizes a remotely situated robot's stereoscopic cameras to provide a live visual feed from the real world into the virtual environment in which planetary scientists are immersed. Moreover, the proposed technique blends the virtual environment with the real world in such a way as to preserve both the depth and the visual information from the real world while allowing for the sensation of immersion when the entire sequence is viewed via an HMD such as the Oculus Rift. The figure shows the virtual environment with an overlay of the real-world stereoscopic video presented in real time into the virtual environment. Notice the preservation of the object's shape, shadows, and depth information. The distortions shown in the image are due to the rendering of the stereoscopic data into a 2D image for the purposes of taking screenshots.
  479. Industrial-Strength Streaming Video.

    ERIC Educational Resources Information Center

    Avgerakis, George; Waring, Becky

    1997-01-01

    Corporate training, financial services, entertainment, and education are among the top applications for streaming video servers, which send video to the desktop without downloading the whole file to the hard disk, saving time and eliminating copyright questions. Examines streaming video technology, lists ten tips for better net video, and ranks…

  480. Video Game Structural Characteristics: A New Psychological Taxonomy

    ERIC Educational Resources Information Center

    King, Daniel; Delfabbro, Paul; Griffiths, Mark

    2010-01-01

    Excessive video game playing behaviour may be influenced by a variety of factors, including the structural characteristics of video games. Structural characteristics refer to those features inherent within the video game itself that may facilitate the initiation, development and maintenance of video game playing over time. Numerous structural…
  481. YouTube as a source of information on skin bleaching: a content analysis.

    PubMed

    Basch, C H; Brown, A A; Fullwood, M D; Clark, A; Fung, I C-H; Yin, J

    2018-06-01

    Skin bleaching is a common, yet potentially harmful body modification practice. To describe the characteristics of the most widely viewed YouTube™ videos related to skin bleaching, the search term 'skin bleaching' was used to identify the 100 most popular English-language YouTube videos relating to the topic. Both descriptive and specific information were noted. Among the 100 manually coded skin-bleaching YouTube videos in English, there were 21 consumer-created videos, 45 internet-based news videos, 30 television news videos and 4 professional videos. Excluding the 4 professional videos, we limited our content categorization and regression analysis to 96 videos. Approximately 93% (89/96) of the most widely viewed videos mentioned changing how you look, and 74% (71/96) focused on bleaching the whole body. Of the 96 videos, 63 (66%) showed or mentioned a transformation. Only about 14% (13/96) mentioned that skin bleaching is unsafe. The likelihood of a video selling a skin-bleaching product was 17 times higher in internet videos compared with consumer videos (OR = 17.00, 95% CI 4.58-63.09, P < 0.001). Consumer-generated videos were about seven times more likely to mention making bleaching products at home compared with internet-based news videos (OR = 6.86, 95% CI 1.77-26.59, P < 0.01). The most viewed YouTube video on skin bleaching was uploaded by an internet source. Videos made by television sources mentioned more information about skin bleaching being unsafe, while consumer-generated videos focused more on making skin-bleaching products at home. © 2017 British Association of Dermatologists.

  482. Real-time video signal processing by generalized DDA and control memories: three-dimensional rotation and mapping

    NASA Astrophysics Data System (ADS)

    Hama, Hiromitsu; Yamashita, Kazumi

    1991-11-01

    A new method for video signal processing is described in this paper. The purpose is real-time image transformation with low-cost, low-power, small hardware, which is impossible without special hardware. Here, a generalized digital differential analyzer (DDA) and control memories (CMs) play a very important role. Indentation, called jaggy, appears on the boundary between the background and the foreground as a side effect of the processing. Jaggies do not occur inside the transformed image because linear interpolation is adopted, but they do inherently occur on the boundary between the background and the transformed image. This causes deterioration of image quality and must be avoided. There are two well-known ways to improve image quality: blurring and supersampling. The former does not have much effect, and the latter has a much higher computing cost. As a means of settling this problem, a method is proposed that searches for positions where jaggies may arise and smooths such points. Computer simulations based on real data from a VTR, one scene of a movie, are presented to demonstrate the proposed scheme using a DDA and CMs and to confirm its effectiveness on various transformations.
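The linear interpolation mentioned in the record above can be illustrated by bilinear resampling under an inverse-mapped rotation, interpolating inside the source image and leaving boundary pixels at a flat background value (the boundary being exactly where jaggies would otherwise appear). A compact NumPy sketch, not the paper's DDA hardware design:

```python
import numpy as np

def rotate_bilinear(img, angle_deg, background=0.0):
    """Resample img under a rotation about its centre, with bilinear
    interpolation inside the source and a flat background outside."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    a = np.deg2rad(angle_deg)
    yy, xx = np.mgrid[0:h, 0:w]
    # Inverse-map each destination pixel into source coordinates.
    sx = np.cos(a) * (xx - cx) + np.sin(a) * (yy - cy) + cx
    sy = -np.sin(a) * (xx - cx) + np.cos(a) * (yy - cy) + cy
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    fx, fy = sx - x0, sy - y0
    inside = (x0 >= 0) & (y0 >= 0) & (x0 < w - 1) & (y0 < h - 1)
    x0c, y0c = np.clip(x0, 0, w - 2), np.clip(y0, 0, h - 2)
    out = (img[y0c, x0c] * (1 - fx) * (1 - fy)
           + img[y0c, x0c + 1] * fx * (1 - fy)
           + img[y0c + 1, x0c] * (1 - fx) * fy
           + img[y0c + 1, x0c + 1] * fx * fy)
    return np.where(inside, out, background)

print(rotate_bilinear(np.eye(5), 45.0).round(2))
```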
  483. 47 CFR 79.3 - Video description of video programming.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... programming distributor. (8) Children's Programming. Television programming directed at children 16 years of... provide 50 hours of video description per calendar quarter, either during prime time or on children's... calendar quarter, either during prime time or on children's programming, on each programming stream on...

  484. 47 CFR 79.3 - Video description of video programming.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... programming distributor. (8) Children's Programming. Television programming directed at children 16 years of... provide 50 hours of video description per calendar quarter, either during prime time or on children's... calendar quarter, either during prime time or on children's programming, on each programming stream on...

  485. Automated method for tracing leading and trailing processes of migrating neurons in confocal image sequences

    NASA Astrophysics Data System (ADS)

    Kerekes, Ryan A.; Gleason, Shaun S.; Trivedi, Niraj; Solecki, David J.

    2010-03-01

    Segmentation, tracking, and tracing of neurons in video imagery are important steps in many neuronal migration studies and can be inaccurate and time-consuming when performed manually. In this paper, we present an automated method for tracing the leading and trailing processes of migrating neurons in time-lapse image stacks acquired with a confocal fluorescence microscope. In our approach, we first locate and track the soma of the cell of interest by smoothing each frame and tracking the local maxima through the sequence. We then trace the leading process in each frame by starting at the center of the soma and stepping repeatedly in the most likely direction of the leading process. This direction is found at each step by examining second derivatives of fluorescent intensity along curves of constant radius around the current point. Tracing terminates after a fixed number of steps or when fluorescent intensity drops below a fixed threshold. We evolve the resulting trace to form an improved trace that more closely follows the approximate centerline of the leading process. We apply a similar algorithm to the trailing process of the cell by starting the trace in the opposite direction. We demonstrate our algorithm on two time-lapse confocal video sequences of migrating cerebellar granule neurons (CGNs). We show that the automated traces closely approximate ground-truth traces to within 1 or 2 pixels on average. Additionally, we compute line intensity profiles of fluorescence along the automated traces and quantitatively demonstrate their similarity to manually generated profiles in terms of fluorescence peak locations.
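The tracing loop described in the record above steps repeatedly from the current point toward the most likely direction on a circle of fixed radius, stopping after a step budget or when fluorescence drops below a threshold. The sketch below is a simplified greedy version: where the paper scores directions with second derivatives along the arc, it simply takes the brightest point (radius, thresholds, and angular sampling are illustrative choices):

```python
import numpy as np

def trace_process(img, start, radius=3.0, max_steps=200, min_intensity=0.1):
    """Greedy centreline trace: from `start`, step repeatedly toward the
    brightest pixel on a circle of the given radius, stopping when the
    step budget is spent or fluorescence drops below min_intensity.
    (A real tracer also restricts the search to a forward cone so the
    trace cannot double back toward the soma.)"""
    h, w = img.shape
    y, x = float(start[0]), float(start[1])
    path = [(y, x)]
    angles = np.linspace(0.0, 2 * np.pi, 72, endpoint=False)
    for _ in range(max_steps):
        ys = np.clip((y + radius * np.sin(angles)).round().astype(int), 0, h - 1)
        xs = np.clip((x + radius * np.cos(angles)).round().astype(int), 0, w - 1)
        vals = img[ys, xs]
        k = int(vals.argmax())
        if vals[k] < min_intensity:
            break
        y, x = y + radius * np.sin(angles[k]), x + radius * np.cos(angles[k])
        path.append((y, x))
    return np.array(path)

img = np.zeros((50, 50)); img[25, :] = 1.0   # one horizontal bright process
print(trace_process(img, (25, 2))[:4])       # steps march along the ridge
```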
47 CFR 79.3 - Video description of video programming.

Code of Federal Regulations, 2012 CFR

2012-10-01

... programming distributor. (8) Children's Programming. Television programming directed at children 16 years of ... provide 50 hours of video description per calendar quarter, either during prime time or on children's programming, on each programming stream on ...

47 CFR 79.3 - Video description of video programming.

Code of Federal Regulations, 2013 CFR

2013-10-01

... programming distributor. (8) Children's Programming. Television programming directed at children 16 years of ... provide 50 hours of video description per calendar quarter, either during prime time or on children's programming, on each programming stream on ...

Automated method for tracing leading and trailing processes of migrating neurons in confocal image sequences

NASA Astrophysics Data System (ADS)

Kerekes, Ryan A.; Gleason, Shaun S.; Trivedi, Niraj; Solecki, David J.

2010-03-01

Segmentation, tracking, and tracing of neurons in video imagery are important steps in many neuronal migration studies and can be inaccurate and time-consuming when performed manually. In this paper, we present an automated method for tracing the leading and trailing processes of migrating neurons in time-lapse image stacks acquired with a confocal fluorescence microscope. In our approach, we first locate and track the soma of the cell of interest by smoothing each frame and tracking the local maxima through the sequence. We then trace the leading process in each frame by starting at the center of the soma and stepping repeatedly in the most likely direction of the leading process. This direction is found at each step by examining second derivatives of fluorescent intensity along curves of constant radius around the current point. Tracing terminates after a fixed number of steps or when fluorescent intensity drops below a fixed threshold. We evolve the resulting trace to form an improved trace that more closely follows the approximate centerline of the leading process. We apply a similar algorithm to the trailing process of the cell by starting the trace in the opposite direction. We demonstrate our algorithm on two time-lapse confocal video sequences of migrating cerebellar granule neurons (CGNs). We show that the automated traces closely approximate ground-truth traces to within 1 or 2 pixels on average. Additionally, we compute line intensity profiles of fluorescence along the automated traces and quantitatively demonstrate their similarity to manually generated profiles in terms of fluorescence peak locations.
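A simplified software sketch of the tracing loop described above, assuming a 2-D fluorescence frame as a numpy array; for brevity it steps toward the intensity maximum on the circle of constant radius rather than using the paper's second-derivative test:

    import numpy as np

    def trace_process(frame, soma_xy, radius=5.0, n_angles=64,
                      max_steps=200, min_intensity=10.0):
        """Greedy centerline trace: from the soma, repeatedly step to the
        brightest point on a circle of fixed radius around the current point.
        (A simplification of the paper's second-derivative direction test.)"""
        h, w = frame.shape
        pts = [np.asarray(soma_xy, dtype=float)]
        prev_dir = None
        angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
        for _ in range(max_steps):
            x, y = pts[-1]
            cand = np.stack([x + radius * np.cos(angles),
                             y + radius * np.sin(angles)], axis=1)
            # Keep candidates inside the frame; forbid doubling straight back.
            ok = (cand[:, 0] >= 0) & (cand[:, 0] < w) & \
                 (cand[:, 1] >= 0) & (cand[:, 1] < h)
            if prev_dir is not None:
                step_dirs = (cand - pts[-1]) / radius
                ok &= step_dirs @ prev_dir > -0.5   # turn of less than ~120 deg
            if not ok.any():
                break
            vals = frame[cand[ok, 1].astype(int), cand[ok, 0].astype(int)]
            if vals.max() < min_intensity:          # fluorescence too weak: stop
                break
            best = cand[ok][vals.argmax()]
            prev_dir = (best - pts[-1]) / radius
            pts.append(best)
        return np.array(pts)

Tracing the trailing process, as in the paper, amounts to calling the same routine with the initial direction reversed.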
Expert models and modeling processes associated with a computer-modeling tool

NASA Astrophysics Data System (ADS)

Zhang, Baohui; Liu, Xiufeng; Krajcik, Joseph S.

2006-07-01

Holding the premise that the development of expertise is a continuous process, this study concerns expert models and modeling processes associated with a modeling tool called Model-It. Five advanced Ph.D. students in environmental engineering and public health used Model-It to create and test models of water quality. Using a think-aloud technique and video recording, we captured their on-screen modeling activities and thinking processes. We also interviewed them the day after their modeling sessions to further probe the rationale behind their modeling practices. We analyzed both the audio-video transcripts and the experts' models. We found that the experts' modeling processes followed the linear sequence built into the modeling program, with few instances of moving back and forth. They specified their goals up front and spent a long time thinking through an entire model before acting. They specified relationships with accurate and convincing evidence. Factors (i.e., variables) in the expert models were clustered and represented by specialized technical terms. Based on these findings, we make suggestions for improving model-based science teaching and learning using Model-It.

Differential effects of wakeful rest, music and video game playing on working memory performance in the n-back task.

PubMed

Kuschpel, Maxim S; Liu, Shuyan; Schad, Daniel J; Heinzel, Stephan; Heinz, Andreas; Rapp, Michael A

2015-01-01

The interruption of learning processes by breaks filled with diverse activities is common in everyday life. We investigated the effects of active computer gaming and passive relaxation (rest and music) breaks on working memory performance. Young adults were exposed to breaks involving (i) eyes-open resting, (ii) listening to music and (iii) playing the video game "Angry Birds" before performing the n-back working memory task. Based on linear mixed-effects modeling, we found that playing the "Angry Birds" video game during a short learning break led to a decline in task performance over the course of the task as compared to eyes-open resting and listening to music, although overall task performance was not impaired. This effect was associated with high levels of daily mind wandering and low self-reported ability to concentrate. These findings indicate that video games can negatively affect working memory performance over time when played in between learning tasks. We suggest further investigation of these effects because of their relevance to everyday activity.

Depth estimation of features in video frames with improved feature matching technique using Kinect sensor

NASA Astrophysics Data System (ADS)

Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun

2012-10-01

Estimating depth has long been a major issue in the field of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. We used the Kinect camera developed by Microsoft, which captures color and depth images for further processing. Feature detection and selection is an important task for robot navigation, and many feature-matching techniques have been proposed; this paper proposes an improved matching of features between successive video frames that uses a neural-network methodology to reduce the computation time of matching. The extracted features are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. Each extracted feature is assigned a distance based on the Kinect depth data, which the robot can use to determine its navigation path as well as for obstacle-detection applications.
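A sketch of the frame-to-frame matching and depth lookup using standard OpenCV ORB features with brute-force matching (the paper's neural-network matcher is not reproduced here); the depth frame is assumed registered to the color frame, with 0 meaning "no reading":

    import cv2

    def match_with_depth(color_prev, color_curr, depth_curr, max_matches=100):
        """Match ORB features between successive frames and attach the
        Kinect depth reading at each matched location."""
        g1 = cv2.cvtColor(color_prev, cv2.COLOR_BGR2GRAY)
        g2 = cv2.cvtColor(color_curr, cv2.COLOR_BGR2GRAY)
        orb = cv2.ORB_create(nfeatures=500)
        kp1, des1 = orb.detectAndCompute(g1, None)
        kp2, des2 = orb.detectAndCompute(g2, None)
        if des1 is None or des2 is None:
            return []
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        out = []
        for m in matches[:max_matches]:
            x, y = map(int, kp2[m.trainIdx].pt)   # location in current frame
            z = depth_curr[y, x]                   # sensor depth (mm);
            if z > 0:                              # 0 means no valid reading
                out.append(((x, y), int(z)))
        return out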
Localizing Target Structures in Ultrasound Video

PubMed Central

Kwitt, R.; Vasconcelos, N.; Razzaque, S.; Aylward, S.

2013-01-01

The problem of localizing specific anatomic structures using ultrasound (US) video is considered. This involves automatically determining when an US probe is acquiring images of a previously defined object of interest during the course of an US examination. Localization using US is motivated by the increased availability of portable, low-cost US probes, which inspire applications in which inexperienced personnel and even first-time users acquire US data that is then sent to experts for further assessment. This process is of particular interest for routine examinations in underserved populations as well as for patient triage after natural disasters and large-scale accidents, where experts may be in short supply. The proposed localization approach is motivated by research in the area of dynamic texture analysis and leverages several recent advances in the field of activity recognition. For evaluation, we introduce an annotated and publicly available database of US video acquired on three phantoms. Several experiments reveal the challenges of applying video-analysis approaches to US images and demonstrate that good localization performance is possible with the proposed solution. PMID:23746488
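As a toy stand-in for the dynamic-texture features the paper builds on, a clip can be reduced to a short signature (intensity histogram plus frame-difference statistics) and classified by nearest centroid; everything below is illustrative, not the paper's model:

    import numpy as np

    def clip_signature(frames):
        """Crude per-clip signature: intensity histogram plus frame-difference
        energy statistics. A toy stand-in for dynamic-texture features."""
        frames = np.asarray(frames, dtype=np.float32)              # T x H x W
        hist, _ = np.histogram(frames, bins=16, range=(0, 255), density=True)
        diff = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))   # motion energy
        return np.concatenate([hist, [diff.mean(), diff.std()]])

    def localize(clip, centroids):
        """Return the label of the nearest class centroid (e.g. 'on target'
        vs 'off target'); centroids maps label -> signature vector."""
        sig = clip_signature(clip)
        return min(centroids, key=lambda k: np.linalg.norm(sig - centroids[k]))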
Study of moving object detecting and tracking algorithm for video surveillance system

NASA Astrophysics Data System (ADS)

Wang, Tao; Zhang, Rongfu

2010-10-01

This paper describes a specific process of moving-target detection and tracking in video surveillance. Obtaining a high-quality background is the key to difference-based target detection. The paper uses a block-segmentation method to build a clean background and background differencing to detect moving targets; after a series of post-processing steps, the complete object is extracted from the original image and located with its minimum bounding rectangle. In a video surveillance system, camera delay and other factors cause tracking lag, so a Kalman-filter model based on template matching is proposed: using the Kalman filter's deduction and estimation capability, the center of the minimum bounding rectangle is predicted for the next frame, and template matching is then performed in a region centered on that predicted position. Computing the cross-correlation similarity between the current image and the reference image determines the best matching center. Narrowing the search scope in this way reduces the search time and achieves fast tracking.
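The detect-predict-match loop described above maps naturally onto OpenCV primitives. A minimal sketch, substituting a stock MOG2 background subtractor for the paper's block-built background and using a constant-velocity Kalman filter (the input filename is hypothetical):

    import cv2
    import numpy as np

    # Kalman filter over (x, y, vx, vy): predicts where the bounding-box
    # center should appear next, so matching can search a small window
    # around the prediction instead of the whole frame.
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                    [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

    subtractor = cv2.createBackgroundSubtractorMOG2()
    cap = cv2.VideoCapture("surveillance.avi")   # hypothetical input file
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
        pred_x, pred_y = kf.predict()[:2].ravel()   # predicted center
        # A fuller implementation would template-match within a window
        # centered on (pred_x, pred_y) when detection fails.
        if cnts:
            x, y, w, h = cv2.boundingRect(max(cnts, key=cv2.contourArea))
            kf.correct(np.array([[x + w / 2], [y + h / 2]], np.float32))
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)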
Is time spent playing video games associated with mental health, cognitive and social skills in young children?

PubMed

Kovess-Masfety, Viviane; Keyes, Katherine; Hamilton, Ava; Hanson, Gregory; Bitfoi, Adina; Golitz, Dietmar; Koç, Ceren; Kuijpers, Rowella; Lesinskiene, Sigita; Mihova, Zlatka; Otten, Roy; Fermanian, Christophe; Pez, Ondine

2016-03-01

Video games are one of the favourite leisure activities of children, and their influence on child health is usually perceived to be negative. The present study assessed the association between the amount of time spent playing video games and children's mental health as well as cognitive and social skills. Data were drawn from the School Children Mental Health Europe project conducted in six European Union countries (youth ages 6-11, n = 3195). Child mental health was assessed by parents and teachers using the Strengths and Difficulties Questionnaire and by children themselves with the Dominic Interactive. Child video game usage was reported by the parents, and teachers evaluated academic functioning. Multivariable logistic regressions were used. Twenty percent of the children played video games more than 5 h per week. Factors associated with time spent playing video games included being a boy, being older, and belonging to a medium-size family; having a less educated, single, inactive, or psychologically distressed mother decreased time spent playing video games. Children living in Western European countries were significantly less likely to have high video game usage (9.66% vs 20.49%), though this pattern was not homogeneous. Once adjusted for child age and gender, number of children, mother's age, marital status, education, employment status, psychological distress, and region, high usage was associated with 1.75 times the odds of high intellectual functioning (95% CI 1.31-2.33) and 1.88 times the odds of high overall school competence (95% CI 1.44-2.47). Once controlled for high-usage predictors, there were no significant associations with any child self-reported or mother- or teacher-reported mental health problems. High usage was associated with decreases in peer-relationship problems [OR 0.41 (0.20-0.86)] and in prosocial deficits [OR 0.23 (0.07-0.81)]. Playing video games may have positive effects on young children. Understanding the mechanisms through which video game use may stimulate children should be further investigated.
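Adjusted odds ratios like those above typically come from a multivariable logistic regression. A minimal statsmodels sketch with hypothetical file and column names (not the study's data):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical analysis file: one row per child; 'high_usage' codes
    # playing video games > 5 h/week. Column names are ours.
    df = pd.read_csv("children_survey.csv")
    model = smf.logit(
        "high_intellectual_functioning ~ high_usage + age + gender"
        " + n_children + mother_age + mother_education + region",
        data=df,
    ).fit()
    print(np.exp(model.params))      # adjusted odds ratios
    print(np.exp(model.conf_int()))  # 95% CIs on the OR scale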
Active Healthy Kids Canada's Position on Active Video Games for Children and Youth.

PubMed

Chaput, Jean-Philippe; Leblanc, Allana G; McFarlane, Allison; Colley, Rachel C; Thivel, David; Biddle, Stuart Jh; Maddison, Ralph; Leatherdale, Scott T; Tremblay, Mark S

2013-12-01

The effect of active video games (AVGs) on acute energy expenditure has previously been reported; however, the influence of AVGs on other health-related lifestyle indicators remains unclear. To address this knowledge gap, Active Healthy Kids Canada (AHKC) convened an international group of researchers to conduct a systematic review to understand whether AVGs should be promoted to increase physical activity and improve health indicators in children and youth (zero to 17 years of age). The present article outlines the process and outcomes of the development of AHKC's position on active video games for children and youth. In light of the available evidence, AHKC does not recommend AVGs as a strategy to help children be more physically active. However, AVGs may exchange some sedentary time for light- to moderate-intensity physical activity, and there may be specific situations in which AVGs provide benefit (e.g., motor skill development in special populations and rehabilitation).

High-speed video capillaroscopy method for imaging and evaluation of moving red blood cells

NASA Astrophysics Data System (ADS)

Gurov, Igor; Volkov, Mikhail; Margaryants, Nikita; Pimenov, Aleksei; Potemkin, Andrey

2018-05-01

A video capillaroscopy system with an image recording rate high enough to resolve red blood cells moving at up to 5 mm/s within a capillary is considered. The proposed processing of the recorded video sequence evaluates the spatial capillary area, capillary diameter, and centerline with high accuracy and reliability, independently of the properties of the individual capillary. A two-dimensional inter-frame procedure is applied to find the lateral shift between neighboring images in the blood-flow area containing moving red blood cells and to measure the blood-flow velocity directly along the capillary centerline. The developed method opens new opportunities for biomedical diagnostics, particularly through long-term continuous monitoring of red-blood-cell velocity within a capillary. A spatio-temporal representation of capillary blood flow is considered. Experimental results of direct measurement of blood-flow velocity in a single capillary as well as in a capillary net are presented and discussed.
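The inter-frame shift measurement described above can be illustrated with plain FFT phase correlation; converting the per-frame shift to a velocity then only requires the pixel pitch and frame rate. A minimal sketch (function names and the micrometer calibration are ours):

    import numpy as np

    def frame_shift(f1, f2):
        """Estimate the integer-pixel shift between two frames by phase
        correlation (normalized cross-power spectrum)."""
        F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
        cross = F1 * np.conj(F2)
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
        dy, dx = np.unravel_index(corr.argmax(), corr.shape)
        h, w = f1.shape
        if dy > h // 2: dy -= h   # wrap negative shifts
        if dx > w // 2: dx -= w
        return dx, dy

    def flow_velocity(prev, curr, um_per_px, fps):
        """Blood-flow speed (mm/s) from the inter-frame shift of the
        red-blood-cell pattern inside the capillary region."""
        dx, dy = frame_shift(prev, curr)
        px = np.hypot(dx, dy)                  # pixels per frame
        return px * um_per_px * fps / 1000.0   # um/frame -> mm/s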
Randomized Controlled Evaluation of an Early Intervention to Prevent Post-Rape Psychopathology

PubMed Central

Resnick, Heidi; Acierno, Ron; Waldrop, Angela E.; King, Lynda; King, Daniel; Danielson, Carla; Ruggiero, Kenneth J.; Kilpatrick, Dean

2007-01-01

A randomized between-group design was used to evaluate the efficacy of a video intervention to reduce PTSD and other mental health problems, implemented prior to the forensic medical exam conducted within 72 hours post-sexual assault. Participants were 140 female victims of sexual assault (68 video/72 nonvideo) ages 15 or older. Assessments were targeted for 6 weeks (Time 1) and 6 months (Time 2) post-assault. At Time 1, the intervention was associated with lower scores on measures of PTSD and depression among women with a prior rape history, relative to scores among women with a prior rape history in the standard-care condition. At Time 2, depression scores were also lower among those with a prior history in the video condition relative to standard care. Small effects indicating higher PTSD and BAI scores among women without a prior history in the video condition were observed at Time 1. Accelerated longitudinal growth-curve analysis indicated a video × prior-rape-history interaction for PTSD, yielding four patterns of symptom trajectory over time. Women with a prior rape history in the video condition generally maintained the lowest level of symptoms. PMID:17585872

The effects of an action video game on visual and affective information processing.

PubMed

Bailey, Kira; West, Robert

2013-04-04

Playing action video games can have beneficial effects on visuospatial cognition and negative effects on social information processing. However, these two effects have not been demonstrated in the same individuals in a single study. The current study used event-related brain potentials (ERPs) to examine the effects of playing an action or non-action video game on the processing of emotion in facial expression. The data revealed that 10 h of playing an action or non-action video game had differential effects on the ERPs relative to a no-contact control group. Playing an action game resulted in two effects: one reflecting an increase in the amplitude of the ERPs following training over the right frontal and posterior regions that was similar for angry, happy, and neutral faces; and one reflecting a reduction in the allocation of attention to happy faces. In contrast, playing a non-action game resulted in changes in slow-wave activity over the central-parietal and frontal regions that were greater for targets (i.e., angry and happy faces) than for non-targets (i.e., neutral faces). These data demonstrate that the contrasting effects of action video games on visuospatial and emotion processing occur in the same individuals following the same level of gaming experience. This observation leads to the suggestion that caution should be exercised when using action video games to modify visual processing, as this experience could also have unintended effects on emotion processing. Published by Elsevier B.V.
Vehicle-triggered video compression/decompression for fast and efficient searching in large video databases

NASA Astrophysics Data System (ADS)

Bulan, Orhan; Bernal, Edgar A.; Loce, Robert P.; Wu, Wencheng

2013-03-01

Video cameras are widely deployed along city streets, interstate highways, traffic lights, stop signs and toll booths by entities that perform traffic monitoring and law enforcement. The videos captured by these cameras are typically compressed and stored in large databases. Performing a rapid search for a specific vehicle within a large database of compressed videos is often required and can be a time-critical, life-or-death situation. In this paper, we propose video compression and decompression algorithms that enable fast and efficient vehicle or, more generally, event searches in large video databases. The proposed algorithm selects reference frames (i.e., I-frames) based on a vehicle having been detected at a specified position within the scene being monitored while compressing a video sequence. A search for a specific vehicle in the compressed video stream is performed across the reference frames only, which does not require decompression of the full video sequence as in traditional search algorithms. Our experimental results on videos captured on a local road show that the proposed algorithm significantly reduces the search space (thus reducing time and computational resources) in vehicle search tasks within compressed video streams, particularly those captured in light traffic volume conditions.
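The logic of the proposed algorithm can be sketched independently of any particular codec: force an I-frame whenever the trigger fires, remember its position, and later decode only those frames. All hooks below (detect_vehicle, encoder.force_keyframe, decode_frame) are hypothetical placeholders, not a real codec API:

    # Sketch of the paper's idea: vehicle-triggered keyframes plus
    # keyframe-only search. Hooks are placeholders.

    def compress(frames, encoder, trigger_zone):
        index = []                                   # frame numbers of I-frames
        for n, frame in enumerate(frames):
            if detect_vehicle(frame, trigger_zone):  # e.g. background subtraction
                encoder.force_keyframe()             # hypothetical encoder hook
                index.append(n)
            encoder.encode(frame)
        return index

    def search(index, decode_frame, matches_query):
        # Decode and test only the vehicle-triggered I-frames; the rest
        # of the stream is never decompressed.
        return [n for n in index if matches_query(decode_frame(n))]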
A content analysis of the portrayal of alcohol in televised music videos in New Zealand: changes over time.

PubMed

Sloane, Kate; Wilson, Nick; Imlach Gunasekara, Fiona

2013-01-01

We aimed to: (i) document the extent and nature of alcohol portrayal in televised music videos in New Zealand in 2010; and (ii) assess trends over time by comparing with a similar 2005 sample. We undertook a content analysis for references to alcohol in 861 music videos shown on a youth-orientated television channel in New Zealand. This was compared with a 2005 sample (564 music videos on the same channel plus sampling from two other channels). The proportion of alcohol content in the music videos was slightly higher in 2010 than for the same channel in the 2005 sample (19.5% vs. 15.7%), but this difference was not statistically significant. Only in the genre 'Rhythm and Blues' was the increase over time significant (P = 0.015). In both studies, the portrayal of alcohol was significantly more common in music videos where the main artist was international (not from New Zealand). Furthermore, in the music videos with alcohol content, at least a third of the time alcohol was shown being consumed and the main artist was involved with alcohol. In only 2% (in 2005) and 4% (in 2010) of these videos was the tone explicitly negative towards alcohol. In both studies, the portrayal of alcohol was relatively common in music videos. Nevertheless, there are various ways that policy makers can denormalise alcohol in youth-orientated media such as music videos, or compensate via other alcohol control measures such as higher alcohol taxes. © 2012 Australasian Professional Society on Alcohol and other Drugs.

Mobile Video in Everyday Social Interactions

NASA Astrophysics Data System (ADS)

Reponen, Erika; Lehikoinen, Jaakko; Impiö, Jussi

Video recording has become a spontaneous everyday activity for many people, thanks to the video capabilities of modern mobile phones. Internet connectivity of mobile phones enables fluent sharing of captured material, even in real time, which makes video an up-and-coming everyday interaction medium. In this article we discuss the effect of the video camera on the social environment in everyday situations, mainly based on a study in which four groups of people used digital video cameras in their normal settings. We also reflect on another of our studies, relating to real-time mobile video communication, and discuss future views. The aim of our research is to understand the possibilities in the domain of mobile video. Live and delayed sharing seem to have their own special characteristics: live video is used as a virtual window between places, whereas delayed video usage has more scope for good-quality content. While this novel way of interacting via mobile video enables new social patterns, it also raises new concerns for privacy and trust between participants in all roles, largely because videos can spread so widely. Video in a social situation affects cameramen (who record), targets (who are recorded), passers-by (who are unintentionally in the situation), and the audience (who follow the videos or recording situations), but also the other way around: the participants affect the video through their varying and evolving personal and communicational motivations for recording.

Evolving discriminators for querying video sequences

NASA Astrophysics Data System (ADS)

Iyengar, Giridharan; Lippman, Andrew B.

1997-01-01

In this paper we present a framework for content-based query and retrieval of information from large video databases. This framework enables content-based retrieval of video sequences by characterizing them using motion, texture, and colorimetry cues. The characterization is biologically inspired and results in a compact parameter space in which every segment of video is represented by an 8-dimensional vector. Searching and retrieval are performed accurately and in real time in this parameter space. Using this characterization, we then evolve a set of discriminators using genetic programming. Experiments indicate that these discriminators are capable of analyzing and characterizing video. The VideoBook is able to search and retrieve video sequences with 92% accuracy in real time. The experiments thus demonstrate that the characterization is capable of extracting higher-level structure from raw pixel values.
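In the same spirit as the VideoBook's compact characterization, a video segment can be reduced to a small fixed-length vector of motion, texture, and color statistics and queried by nearest neighbor. The eight components below are illustrative only; the paper's biologically inspired features are not reproduced:

    import numpy as np

    def segment_descriptor(frames):
        """Toy 8-D descriptor per video segment built from motion, texture,
        and colorimetry cues (illustrative, not the paper's features)."""
        f = np.asarray(frames, dtype=np.float32)             # T x H x W x 3
        motion = np.abs(np.diff(f.mean(axis=3), axis=0))     # frame differences
        grad_y, grad_x = np.gradient(f.mean(axis=(0, 3)))    # spatial structure
        return np.array([
            motion.mean(), motion.std(),                     # motion cues
            np.hypot(grad_x, grad_y).mean(),                 # texture energy
            f[..., 0].mean(), f[..., 1].mean(), f[..., 2].mean(),  # colorimetry
            f.std(), f.mean(),                               # contrast / level
        ])

    def query(db, probe, k=5):
        """Return indices of the k database segments nearest to the probe;
        db is an N x 8 array of precomputed descriptors."""
        d = np.linalg.norm(db - segment_descriptor(probe), axis=1)
        return np.argsort(d)[:k]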