Sample records for video image analysis

  1. Acquisition and Analysis of Dynamic Responses of a Historic Pedestrian Bridge using Video Image Processing

    NASA Astrophysics Data System (ADS)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; O'Donnell, Deirdre; Wright, Robert; Pakrashi, Vikram

    2015-07-01

    Video based tracking is capable of analysing bridge vibrations that are characterised by large amplitudes and low frequencies. This paper presents the use of video images and associated image processing techniques to obtain the dynamic response of a pedestrian suspension bridge in Cork, Ireland. This historic structure is one of the four suspension bridges in Ireland and is notable for its dynamic nature. A video camera is mounted on the river-bank and the dynamic responses of the bridge have been measured from the video images. The dynamic response is assessed without the need for a reflector on the bridge and in the presence of various forms of luminous complexities in the video image scenes. Vertical deformations of the bridge were measured in this regard. The video image tracking for the measurement of dynamic responses of the bridge was based on correlating patches in time-lagged scenes in video images and utilising a zero mean normalised cross correlation (ZNCC) metric. The bridge was excited by designed pedestrian movement and by individual cyclists traversing the bridge. The time series data of dynamic displacement responses of the bridge were analysed to obtain the frequency domain response. Frequencies obtained from video analysis were checked against accelerometer data from the bridge obtained while carrying out the same set of experiments used for video image based recognition.
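
    The ZNCC patch tracking described above reduces to a few lines of array code. Below is a minimal numpy sketch; the function names, the exhaustive search strategy, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def zncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Zero mean normalised cross correlation of two equally sized patches."""
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def track_patch(prev_frame, next_frame, top_left, size, search=10):
    """Find the displacement of a patch between two frames by exhaustive
    ZNCC search over a +/- `search` pixel window."""
    y0, x0 = top_left
    template = prev_frame[y0:y0 + size, x0:x0 + size]
    best_score, best_shift = -1.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + size > next_frame.shape[0] or x + size > next_frame.shape[1]:
                continue
            score = zncc(template, next_frame[y:y + size, x:x + size])
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift, best_score
```

    Applied to a deck patch in successive frames, the vertical component of the recovered shifts yields the displacement time series from which the frequency-domain response can be computed.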

  2. Innovative Solution to Video Enhancement

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  3. Data Visualization and Animation Lab (DVAL) overview

    NASA Technical Reports Server (NTRS)

    Stacy, Kathy; Vonofenheim, Bill

    1994-01-01

    The general capabilities of the Langley Research Center Data Visualization and Animation Laboratory are described. These capabilities include digital image processing, 3-D interactive computer graphics, data visualization and analysis, video-rate acquisition and processing of video images, photo-realistic modeling and animation, video report generation, and color hardcopies. A specialized video image processing system is also discussed.

  4. PIZZARO: Forensic analysis and restoration of image and video data.

    PubMed

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help assess image and video credibility and origin and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from the best practices used in criminal investigations utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of the image processing functionality as well as reporting and archiving functions to ensure the repeatability of image analysis procedures, and thus fulfills the formal aspects of image/video analysis work. A comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were solved in tight cooperation between scientists from the Institute of Criminalistics, National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. Vision-sensing image analysis for GTAW process control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, D.D.

    1994-11-01

    Image analysis of a gas tungsten arc welding (GTAW) process was completed using video images from a charge coupled device (CCD) camera inside a specially designed coaxial (GTAW) electrode holder. Video data was obtained from filtered and unfiltered images, with and without the GTAW arc present, showing weld joint features and locations. Data Translation image processing boards, installed in an IBM PC AT 386 compatible computer, and Media Cybernetics image processing software were used to investigate edge flange weld joint geometry for image analysis.

  6. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    PubMed

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

    Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that inexpensive, commercially available computer software (Adobe Photoshop), run on a Macintosh G3 computer with an inbuilt graphic capture board, provides versatile, easy-to-use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.

  7. Digital image processing of bone - Problems and potentials

    NASA Technical Reports Server (NTRS)

    Morey, E. R.; Wronski, T. J.

    1980-01-01

    The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.

  8. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

    Motion video analysis is a challenging task, especially in real-time applications. In most safety and security critical applications, a human observer is an obligatory part of the overall analysis system. Over the last years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into the current video exploitation systems. In this paper, a system design is introduced which strives to combine both the qualities of the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer by means of a user interface which utilizes the human visual focus of attention revealed by the eye gaze direction for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, where we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy to use interaction technique when performing selection operations on moving targets in videos in order to initialize an object tracking function.

  9. 2011 Tohoku tsunami video and TLS based measurements: hydrographs, currents, inundation flow velocities, and ship tracks

    NASA Astrophysics Data System (ADS)

    Fritz, H. M.; Phillips, D. A.; Okayasu, A.; Shimozono, T.; Liu, H.; Takeda, S.; Mohammed, F.; Skanavis, V.; Synolakis, C. E.; Takahashi, T.

    2012-12-01

    The March 11, 2011, magnitude Mw 9.0 earthquake off the coast of the Tohoku region caused catastrophic damage and loss of life in Japan. The mid-afternoon tsunami arrival combined with survivors equipped with cameras on top of vertical evacuation buildings provided spontaneous spatially and temporally resolved inundation recordings. This report focuses on the surveys at 9 tsunami eyewitness video recording locations in Myako, Kamaishi, Kesennuma and Yoriisohama along Japan's Sanriku coast and the subsequent video image calibration, processing, tsunami hydrograph and flow velocity analysis. Selected tsunami video recording sites were explored, eyewitnesses interviewed and some ground control points recorded during the initial tsunami reconnaissance in April, 2011. A follow-up survey in June, 2011 focused on terrestrial laser scanning (TLS) at locations with high quality eyewitness videos. We acquired precise topographic data using TLS at the video sites producing a 3-dimensional "point cloud" dataset. A camera mounted on the Riegl VZ-400 scanner yields photorealistic 3D images. Integrated GPS measurements allow accurate georeferencing. The original video recordings were recovered from eyewitnesses and the Japanese Coast Guard (JCG). The analysis of the tsunami videos follows an adapted four step procedure originally developed for the analysis of 2004 Indian Ocean tsunami videos at Banda Aceh, Indonesia (Fritz et al., 2006). The first step requires the calibration of the sector of view present in the eyewitness video recording based on ground control points measured in the LiDAR data. In a second step the video image motion induced by the panning of the video camera was determined from subsequent images by particle image velocimetry (PIV) applied to fixed objects. The third step involves the transformation of the raw tsunami video images from image coordinates to world coordinates with a direct linear transformation (DLT) procedure. Finally, the instantaneous tsunami surface current and flooding velocity vector maps are determined by applying the digital PIV analysis method to the rectified tsunami video images with floating debris clusters. Tsunami currents up to 11 m/s were measured in Kesennuma Bay, making navigation impossible. Tsunami hydrographs are derived from the videos based on water surface elevations at surface piercing objects identified in the acquired topographic TLS data. Apart from a dominant tsunami crest the hydrograph at Kamaishi also reveals a subsequent draw down to -10 m exposing the harbor bottom. In some cases ship moorings resist the main tsunami crest only to be broken by the extreme draw down, setting vessels adrift for hours. Further, we discuss the complex effects of coastal structures on inundation and outflow hydrographs and flow velocities.
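
    The PIV step of this workflow amounts to locating the cross-correlation peak between interrogation windows cut from consecutive rectified frames. Below is a minimal sketch of that core step, assuming equally sized windows; the FFT-based formulation and function name are illustrative, not the authors' code.

```python
import numpy as np

def piv_displacement(win_a: np.ndarray, win_b: np.ndarray):
    """Estimate the pixel displacement of the pattern in win_b relative to
    win_a via FFT-based circular cross-correlation (the core of digital PIV)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    corr = np.fft.fftshift(corr)               # zero lag moves to the centre
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dy, dx = np.asarray(peak) - np.asarray(corr.shape) // 2
    return dy, dx                              # pixels per frame interval
```

    Dividing the recovered shift by the frame interval and scaling by the georeferenced pixel size converts it into a surface current vector.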

  10. A novel Kalman filter based video image processing scheme for two-photon fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Huang, Xia; Li, Chunqiang; Xiao, Chuan; Qian, Wei

    2016-03-01

    Two-photon fluorescence microscopy (TPFM) is an ideal optical imaging technique for monitoring the interaction between fast moving viruses and hosts. However, due to strong unavoidable background noise from the culture, videos obtained by this technique are too noisy to reveal this fast infection process without video image processing. In this study, we developed a novel scheme to eliminate background noise, recover background bacteria images and improve video quality. In our scheme, we modified and implemented the following methods for both host and virus videos: a correlation method, a round identification method, tree-structured nonlinear filters, Kalman filters, and a cell tracking method. After these procedures, most of the noise was eliminated and host images were recovered, with their moving directions and speeds highlighted in the videos. From the analysis of the processed videos, 93% of bacteria and 98% of viruses were correctly detected in each frame on average.
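
    As a rough illustration of the Kalman filtering stage, the sketch below temporally denoises a frame stack with an independent scalar Kalman filter per pixel. The random-walk state model and the noise variances are my assumptions for a self-contained example, not the paper's scheme.

```python
import numpy as np

def kalman_denoise(frames: np.ndarray, q: float = 1e-3, r: float = 1e-1):
    """Per-pixel scalar Kalman filter over time.
    frames: (T, H, W) array of intensities in [0, 1];
    q: process noise variance; r: measurement noise variance."""
    x = frames[0].astype(float)      # state estimate (denoised frame)
    p = np.ones_like(x)              # estimate variance
    out = [x.copy()]
    for z in frames[1:]:
        p = p + q                    # predict: state follows a random walk
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with the new noisy frame
        p = (1.0 - k) * p
        out.append(x.copy())
    return np.stack(out)
```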

  11. Compression Algorithm Analysis of In-Situ (S)TEM Video: Towards Automatic Event Detection and Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teuton, Jeremy R.; Griswold, Richard L.; Mehdi, Beata L.

    Precise analysis of both (S)TEM images and video is a time and labor intensive process. As an example, determining when crystal growth and shrinkage occurs during the dynamic process of Li dendrite deposition and stripping involves manually scanning through each frame in the video to extract a specific set of frames/images. For large numbers of images, this process can be very time consuming, so a fast and accurate automated method is desirable. Given this need, we developed software that uses analysis of video compression statistics for detecting and characterizing events in large data sets. This software works by converting the data into a series of images which it compresses into an MPEG-2 video using the open source “avconv” utility [1]. The software does not use the video itself, but rather analyzes the video statistics from the first pass of the video encoding that avconv records in the log file. This file contains statistics for each frame of the video including the frame quality, intra-texture and predicted texture bits, forward and backward motion vector resolution, among others. In all, avconv records 15 statistics for each frame. By combining different statistics, we have been able to detect events in various types of data. We have developed an interactive tool for exploring the data and the statistics that aids the analyst in selecting useful statistics for each analysis. Going forward, an algorithm for detecting and possibly describing events automatically can be written based on statistic(s) for each data type.
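
    As a toy version of statistic-based event detection, the sketch below flags frames whose value in one per-frame statistic series (already parsed from the encoder log; the parsing is omitted since log formats vary) deviates sharply from a local median. It is an illustrative stand-in, not the software described above.

```python
import numpy as np

def detect_events(stat, window=25, thresh=5.0):
    """Return frame indices whose statistic is a robust-z-score outlier
    relative to a sliding window. stat: 1-D per-frame values
    (e.g. predicted-texture bits)."""
    stat = np.asarray(stat, dtype=float)
    events = []
    for i in range(len(stat)):
        lo, hi = max(0, i - window), min(len(stat), i + window + 1)
        local = stat[lo:hi]
        med = np.median(local)
        mad = np.median(np.abs(local - med)) + 1e-9   # robust spread estimate
        if abs(stat[i] - med) / (1.4826 * mad) > thresh:
            events.append(i)
    return events
```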

  12. Automatic content-based analysis of georeferenced image data: Detection of Beggiatoa mats in seafloor video mosaics from the Håkon Mosby Mud Volcano

    NASA Astrophysics Data System (ADS)

    Jerosch, K.; Lüdtke, A.; Schlüter, M.; Ioannidis, G. T.

    2007-02-01

    The combination of new underwater technology such as remotely operated vehicles (ROVs), high-resolution video imagery, and software to compute georeferenced mosaics of the seafloor provides new opportunities for marine geological or biological studies and applications in the offshore industry. Even during single surveys by ROVs or towed systems, large amounts of images are compiled. While these underwater techniques are now well-engineered, there is still a lack of methods for the automatic analysis of the acquired image data. During ROV dives more than 4200 georeferenced video mosaics were compiled for the Håkon Mosby Mud Volcano (HMMV). Mud volcanoes such as HMMV are considered significant source locations for methane and are characterised by unique chemoautotrophic communities such as Beggiatoa mats. For the detection and quantification of the spatial distribution of Beggiatoa mats an automated image analysis technique was developed, which applies watershed transformation and relaxation-based labelling of pre-segmented regions. Comparison of the data derived by visual inspection of 2840 video images with the automated image analysis revealed similarities with a precision better than 90%. We consider this a step towards a time-efficient and accurate analysis of seafloor images for the computation of geochemical budgets and the identification of habitats at the seafloor.
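
    The watershed stage of such a pipeline can be sketched with scikit-image. The example below is a generic marker-controlled watershed on a grey-level mosaic; the Otsu pre-segmentation, the peak spacing, and the coverage measure are illustrative assumptions, and the authors' pipeline additionally uses relaxation-based labelling.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_mats(gray: np.ndarray):
    """Pre-segment bright mat regions and split touching ones by a
    marker-controlled watershed on the distance transform."""
    mask = gray > threshold_otsu(gray)                 # coarse pre-segmentation
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=10, labels=mask)
    markers = np.zeros(gray.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-distance, markers, mask=mask)
    coverage = mask.mean()                             # mat fraction per mosaic
    return labels, coverage
```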

  13. Real-time color image processing for forensic fiber investigations

    NASA Astrophysics Data System (ADS)

    Paulsson, Nils

    1995-09-01

    This paper describes a system for automatic fiber debris detection based on color identification. The properties of the system are fast analysis and high selectivity, a necessity when analyzing forensic fiber samples. An ordinary investigation yields well above 100,000 video images to analyze. The system is based on standard techniques, with a CCD camera, a motorized sample table, and an IBM-compatible PC/AT with add-on boards for video frame digitalization and stepping motor control as the main parts. It is possible to operate the instrument at full video rate (25 images/s) with the aid of the HSI color system (hue-saturation-intensity) and software optimization. High selectivity is achieved by separating the analysis into several steps. The first step is fast direct color identification of objects in the analyzed video images; the second step analyzes detected objects in a more complex and time-consuming stage of the investigation, identifying single fiber fragments for subsequent analysis with more selective techniques.
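
    The first-pass color identification can be illustrated with a plain HSV threshold. In the sketch below the hue band and saturation floor are hypothetical values for a bluish fibre, not the system's calibrated settings; the slower object-level second stage is omitted.

```python
import numpy as np
from skimage.color import rgb2hsv

def find_color_objects(rgb_frame, hue_range=(0.55, 0.70), min_sat=0.4):
    """Flag pixels whose hue falls inside a target band with sufficient
    saturation; returns a boolean mask for the current video frame."""
    hsv = rgb2hsv(rgb_frame)          # H, S, V channels, each in [0, 1]
    h, s = hsv[..., 0], hsv[..., 1]
    return (h >= hue_range[0]) & (h <= hue_range[1]) & (s >= min_sat)
```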

  14. Video enhancement workbench: an operational real-time video image processing system

    NASA Astrophysics Data System (ADS)

    Yool, Stephen R.; Van Vactor, David L.; Smedley, Kirk G.

    1993-01-01

    Video image sequences can be exploited in real-time, giving analysts rapid access to information for military or criminal investigations. Video-rate dynamic range adjustment subdues fluctuations in image intensity, thereby assisting discrimination of small or low-contrast objects. Contrast-regulated unsharp masking enhances differentially shadowed or otherwise low-contrast image regions. Real-time removal of localized hotspots, when combined with automatic histogram equalization, may enhance resolution of directly adjacent objects. In video imagery corrupted by zero-mean noise, real-time frame averaging can assist resolution and location of small or low-contrast objects. To maximize analyst efficiency, lengthy video sequences can be screened automatically for low-frequency, high-magnitude events. Combined zoom, roam, and automatic dynamic range adjustment permit rapid analysis of facial features captured by video cameras recording crimes in progress. When trying to resolve small objects in murky seawater, stereo video places the moving imagery in an optimal setting for human interpretation.

  15. Analysis and segmentation of images in case of solving problems of detecting and tracing objects on real-time video

    NASA Astrophysics Data System (ADS)

    Ezhova, Kseniia; Fedorenko, Dmitriy; Chuhlamov, Anton

    2016-04-01

    The article deals with methods of image segmentation based on color space conversion that allow efficient detection of a single color against a complex background and variable lighting, as well as detection of objects on a homogeneous background. The results of the analysis of segmentation algorithms of this type are presented, along with the possibility of implementing them in software. The implemented algorithm is computationally expensive, which limits its application to video analysis; however, it solves the problem of analysing objects in an image when no dictionary of images or knowledge base is available, as well as the problem of choosing optimal frame quantization parameters for video analysis.

  16. Video bioinformatics analysis of human embryonic stem cell colony growth.

    PubMed

    Lin, Sabrina; Fonteno, Shawn; Satish, Shruthi; Bhanu, Bir; Talbot, Prue

    2010-05-20

    Because video data are complex and are comprised of many images, mining information from video material is difficult to do without the aid of computer software. Video bioinformatics is a powerful quantitative approach for extracting spatio-temporal data from video images using computer software to perform data mining and analysis. In this article, we introduce a video bioinformatics method for quantifying the growth of human embryonic stem cells (hESC) by analyzing time-lapse videos collected in a Nikon BioStation CT incubator equipped with a camera for video imaging. In our experiments, hESC colonies that were attached to Matrigel were filmed for 48 hours in the BioStation CT. To determine the rate of growth of these colonies, recipes were developed using CL-Quant software which enables users to extract various types of data from video images. To accurately evaluate colony growth, three recipes were created. The first segmented the image into the colony and background, the second enhanced the image to define colonies throughout the video sequence accurately, and the third measured the number of pixels in the colony over time. The three recipes were run in sequence on video data collected in a BioStation CT to analyze the rate of growth of individual hESC colonies over 48 hours. To verify the truthfulness of the CL-Quant recipes, the same data were analyzed manually using Adobe Photoshop software. When the data obtained using the CL-Quant recipes and Photoshop were compared, results were virtually identical, indicating the CL-Quant recipes were truthful. The method described here could be applied to any video data to measure growth rates of hESC or other cells that grow in colonies. In addition, other video bioinformatics recipes can be developed in the future for other cell processes such as migration, apoptosis, and cell adhesion.
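
    The pixel-counting recipe lends itself to a compact sketch. Assuming frames that can be segmented by a simple threshold and a fixed imaging interval, the stand-in below measures colony area per frame and fits an average growth rate; it is not the CL-Quant recipes themselves.

```python
import numpy as np

def colony_growth(frames: np.ndarray, threshold: float, interval_h: float = 0.5):
    """Count colony pixels per frame and fit a linear growth rate.
    frames: (T, H, W) array; interval_h: hours between frames."""
    areas = np.array([(f > threshold).sum() for f in frames], dtype=float)
    t = np.arange(len(frames)) * interval_h
    rate, _ = np.polyfit(t, areas, 1)   # pixels per hour
    return areas, rate
```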

  17. 2011 Tohoku tsunami hydrographs, currents, flow velocities and ship tracks based on video and TLS measurements

    NASA Astrophysics Data System (ADS)

    Fritz, Hermann M.; Phillips, David A.; Okayasu, Akio; Shimozono, Takenori; Liu, Haijiang; Takeda, Seiichi; Mohammed, Fahad; Skanavis, Vassilis; Synolakis, Costas E.; Takahashi, Tomoyuki

    2013-04-01

    The March 11, 2011, magnitude Mw 9.0 earthquake off the Tohoku coast of Japan caused catastrophic damage and loss of life to a tsunami aware population. The mid-afternoon tsunami arrival combined with survivors equipped with cameras on top of vertical evacuation buildings provided fragmented spatially and temporally resolved inundation recordings. This report focuses on the surveys at 9 tsunami eyewitness video recording locations in Myako, Kamaishi, Kesennuma and Yoriisohama along Japan's Sanriku coast and the subsequent video image calibration, processing, tsunami hydrograph and flow velocity analysis. Selected tsunami video recording sites were explored, eyewitnesses interviewed and some ground control points recorded during the initial tsunami reconnaissance in April, 2011. A follow-up survey in June, 2011 focused on terrestrial laser scanning (TLS) at locations with high quality eyewitness videos. We acquired precise topographic data using TLS at the video sites producing a 3-dimensional "point cloud" dataset. A camera mounted on the Riegl VZ-400 scanner yields photorealistic 3D images. Integrated GPS measurements allow accurate georeferencing. The original video recordings were recovered from eyewitnesses and the Japanese Coast Guard (JCG). The analysis of the tsunami videos follows an adapted four step procedure originally developed for the analysis of 2004 Indian Ocean tsunami videos at Banda Aceh, Indonesia (Fritz et al., 2006). The first step requires the calibration of the sector of view present in the eyewitness video recording based on ground control points measured in the LiDAR data. In a second step the video image motion induced by the panning of the video camera was determined from subsequent images by particle image velocimetry (PIV) applied to fixed objects. The third step involves the transformation of the raw tsunami video images from image coordinates to world coordinates with a direct linear transformation (DLT) procedure. Finally, the instantaneous tsunami surface current and flooding velocity vector maps are determined by applying the digital PIV analysis method to the rectified tsunami video images with floating debris clusters. Tsunami currents up to 11 m/s were measured in Kesennuma Bay making navigation impossible (Fritz et al., 2012). Tsunami hydrographs are derived from the videos based on water surface elevations at surface piercing objects identified in the acquired topographic TLS data. Apart from a dominant tsunami crest the hydrograph at Kamaishi also reveals a subsequent draw down to -10 m exposing the harbor bottom. In some cases ship moorings resist the main tsunami crest only to be broken by the extreme draw down, setting vessels adrift for hours. Further, we discuss the complex effects of coastal structures on inundation and outflow hydrographs and flow velocities. Lastly, a perspective on the recovery and reconstruction process is provided based on numerous revisits of identical sites between April 2011 and July 2012.
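
    For a (locally) planar water surface, the DLT step maps image coordinates to world coordinates through a homography fitted to at least four surveyed ground control points. Below is a minimal sketch with illustrative function names, not the authors' code.

```python
import numpy as np

def fit_homography(img_pts, world_pts):
    """Estimate the 3x3 homography H with (X, Y, 1) ~ H @ (x, y, 1) from
    >= 4 image/world correspondences via the direct linear transformation."""
    A = []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y, -X])
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y, -Y])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)         # null vector of A, reshaped

def image_to_world(H, x, y):
    X, Y, w = H @ np.array([x, y, 1.0])
    return X / w, Y / w                 # metric world coordinates
```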

  18. Action recognition in depth video from RGB perspective: A knowledge transfer manner

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Xiao, Yang; Cao, Zhiguo; Fang, Zhiwen

    2018-03-01

    Using different video modalities for human action recognition has become a highly promising trend in video analysis. In this paper, we propose a method for human action recognition that transfers knowledge from RGB video to depth video using domain adaptation, where features learned from RGB videos are used for action recognition in depth videos. More specifically, we take three steps to solve this problem. First, unlike a still image, a video is more complex as it carries both spatial and temporal information; to better encode this information, the dynamic image method is used to represent each RGB or depth video as a single image, so that most image feature extraction methods become applicable to video. Secondly, since a video can be represented as an image, a standard CNN model can be used for training and testing on videos; besides, the CNN model can also be used for feature extraction owing to its powerful representational ability. Thirdly, as RGB videos and depth videos belong to two different domains, domain adaptation is first used to make the two feature domains more similar; on this basis, the features learned from the RGB video model can be used directly for depth video classification. We evaluate the proposed method on a complex RGB-D action dataset (NTU RGB-D), and our method achieves more than 2% accuracy improvement using domain adaptation from RGB to depth action recognition.
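
    The dynamic image construction collapses a clip into one CNN-ready image. The sketch below uses simple linear time weighting as a stand-in for the rank-pooling-based construction in the literature; the exact coefficients used by the authors may differ.

```python
import numpy as np

def dynamic_image(frames: np.ndarray) -> np.ndarray:
    """Collapse a video of shape (T, H, W) or (T, H, W, C) into a single
    image by a time-weighted sum that emphasises temporal evolution."""
    T = frames.shape[0]
    alpha = 2.0 * np.arange(1, T + 1) - T - 1     # early frames weighted negative
    d = np.tensordot(alpha, frames.astype(float), axes=(0, 0))
    d -= d.min()
    if d.max() > 0:
        d /= d.max()                              # rescale to [0, 1]
    return d
```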

  19. Design of a MATLAB(registered trademark) Image Comparison and Analysis Tool for Augmentation of the Results of the Ann Arbor Distortion Test

    DTIC Science & Technology

    2016-06-25

    The equipment used in this procedure includes: Ann Arbor distortion tester with 50-line grating reticule, IQeye 720 digital video camera with 12...and import them into MATLAB. In order to digitally capture images of the distortion in an optical sample, an IQeye 720 video camera with a 12... video camera and Ann Arbor distortion tester. Figure 8. Computer interface for capturing images seen by IQeye 720 camera. Once an image was

  1. Analysis of the IJCNN 2011 UTL Challenge

    DTIC Science & Technology

    2012-01-13

    large datasets from various application domains: handwriting recognition, image recognition, video processing, text processing, and ecology. The goal...http://clopinet.com/ul). We made available large datasets from various application domains handwriting recognition, image recognition, video...evaluation sets consist of 4096 examples each. Dataset Domain Features Sparsity Devel. Transf. AVICENNA Handwriting 120 0% 150205 50000 HARRY Video 5000 98.1

  2. Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications

    NASA Astrophysics Data System (ADS)

    Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David

    2017-10-01

    The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors highly depends on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable of acquiring illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for a HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications w.r.t. the display of images and w.r.t. image analysis techniques. Regarding the display we have investigated the image intensity statistics over time, and regarding image analysis we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.

  3. Video Imaging System Particularly Suited for Dynamic Gear Inspection

    NASA Technical Reports Server (NTRS)

    Broughton, Howard (Inventor)

    1999-01-01

    A digital video imaging system that captures the image of a single tooth of interest of a rotating gear is disclosed. The video imaging system detects the complete rotation of the gear and divides that rotation into discrete time intervals, so that the moment each tooth of interest reaches a desired location is precisely determined; at that location the tooth is illuminated in unison with a digital video camera so as to record a single digital image for each tooth. The digital images are available for instantaneous analysis of the tooth of interest, or can be stored to later provide a history that may be used to predict gear failure, such as gear fatigue. The imaging system is completely automated by a controlling program so that it may run for several days acquiring images without supervision from the user.

  4. Statistical analysis of subjective preferences for video enhancement

    NASA Astrophysics Data System (ADS)

    Woods, Russell L.; Satgunam, PremNandhini; Bronstad, P. Matthew; Peli, Eli

    2010-02-01

    Measuring preferences for moving video quality is harder than for static images due to the fleeting and variable nature of moving video. Subjective preferences for image quality can be tested by observers indicating their preference for one image over another. Such pairwise comparisons can be analyzed using Thurstone scaling (Farrell, 1999). Thurstone (1927) scaling is widely used in applied psychology, marketing, food tasting and advertising research. Thurstone analysis constructs an arbitrary perceptual scale for the items that are compared (e.g. enhancement levels). However, Thurstone scaling does not determine the statistical significance of the differences between items on that perceptual scale. Recent papers have provided inferential statistical methods that produce an outcome similar to Thurstone scaling (Lipovetsky and Conklin, 2004). Here, we demonstrate that binary logistic regression can analyze preferences for enhanced video.
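
    A minimal sketch of such an analysis: fit item scale values from paired comparisons with binary logistic regression on a difference design matrix (a Bradley-Terry-style parameterization). The function name and the use of scikit-learn are my assumptions, not the authors' analysis code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def preference_scale(pairs, first_preferred, n_items):
    """pairs: list of (i, j) item indices shown together;
    first_preferred: 1 if item i was chosen, else 0 (both outcomes must occur).
    Returns one scale value per item, with item 0 fixed at zero."""
    X = np.zeros((len(pairs), n_items))
    for row, (i, j) in enumerate(pairs):
        X[row, i], X[row, j] = 1.0, -1.0   # outcome depends on scale difference
    model = LogisticRegression(C=1e6, fit_intercept=False)  # ~unpenalized
    model.fit(X[:, 1:], first_preferred)   # drop item 0: scale is shift-invariant
    return np.concatenate([[0.0], model.coef_.ravel()])
```

    The fitted coefficients play the role of Thurstone scale values, and their standard errors support significance tests between enhancement levels.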

  5. A system for the real-time display of radar and video images of targets

    NASA Technical Reports Server (NTRS)

    Allen, W. W.; Burnside, W. D.

    1990-01-01

    Described here is a software and hardware system for the real-time display of radar and video images for use in a measurement range. The main purpose is to give the reader a clear idea of the software and hardware design and its functions. This system is designed around a Tektronix XD88-30 graphics workstation, used to display radar images superimposed on video images of the actual target. The system's purpose is to provide a platform for the analysis and documentation of radar images and their associated targets in a menu-driven, user-oriented environment.

  6. NASA's Myriad Uses of Digital Video

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney; Lindblom, Walt; George, Sandy

    1999-01-01

    Since its inception, NASA has created many of the most memorable images seen this century. From the fuzzy video of Neil Armstrong taking that first step on the moon, to images of the Mars surface available to all on the internet, NASA has provided images to inspire a generation, all because a scientist or researcher had a requirement to see something unusual. Digital Television technology will give NASA unprecedented new tools for acquiring, analyzing, and distributing video. This paper will explore NASA's DTV future. The agency has a requirement to move video from one NASA Center to another, in real time. Specifics will be provided relating to the NASA video infrastructure, including video from the Space Shuttle and from the various Centers. A comparison of the pros and cons of interlaced and progressive scanned images will be presented. Film is a major component of NASA's image acquisition for analysis usage. The future of film within the context of DTV will be explored.

  7. Analyzing crime scene videos

    NASA Astrophysics Data System (ADS)

    Cunningham, Cindy C.; Peloquin, Tracy D.

    1999-02-01

    Since late 1996 the Forensic Identification Services Section of the Ontario Provincial Police has been actively involved in state-of-the-art image capture and the processing of video images extracted from crime scene videos. The benefits and problems of this technology for video analysis are discussed. All analysis is being conducted on SUN Microsystems UNIX computers, networked to a digital disk recorder that is used for video capture. The primary advantage of this system over traditional frame grabber technology is reviewed. Examples from actual cases are presented and the successes and limitations of this approach are explored. Suggestions to companies implementing security technology plans for various organizations (banks, stores, restaurants, etc.) will be made. Future directions for this work and new technologies are also discussed.

  8. Information Acquisition, Analysis and Integration

    DTIC Science & Technology

    2016-08-03

    of sensing and processing, theory, applications, signal processing, image and video processing, machine learning , technology transfer. 16. SECURITY... learning . 5. Solved elegantly old problems like image and video debluring, intro- ducing new revolutionary approaches. 1 DISTRIBUTION A: Distribution...Polatkan, G. Sapiro, D. Blei, D. B. Dunson, and L. Carin, “ Deep learning with hierarchical convolution factor analysis,” IEEE 6 DISTRIBUTION A

  9. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    NASA Astrophysics Data System (ADS)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction constitute a fundamental unit in organizing, indexing, and retrieving video content. In this paper a unified framework is proposed to detect shot boundaries and extract the keyframe of a shot. The music video is first segmented into shots using an illumination-invariant chromaticity histogram in an independent component (IC) analysis feature space. Then we present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show the framework is effective and has a good performance.
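
    Hard cuts in such a framework are found by thresholding the distance between colour histograms of consecutive frames. The sketch below uses plain grey-level histograms purely to stay short; the paper's illumination-invariant chromaticity histograms in the IC feature space would replace them.

```python
import numpy as np

def shot_boundaries(frames, bins=32, thresh=0.4):
    """Detect hard cuts from the L1 distance between normalised histograms
    of consecutive frames. frames: (T, H, W) grey images in [0, 1]."""
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=bins, range=(0.0, 1.0))
        hists.append(h / h.sum())
    return [t for t in range(1, len(hists))
            if np.abs(hists[t] - hists[t - 1]).sum() > thresh]
```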

  10. STS-107 Debris Characterization Using Re-entry Imaging

    NASA Technical Reports Server (NTRS)

    Raiche, George A.

    2009-01-01

    Analysis of amateur video of the early reentry phases of the Columbia accident is discussed. With poor video quality and little theoretical guidance, the analysis team estimated mass and acceleration ranges for the debris shedding events observed in the video. Camera calibration and optical performance issues are also described.

  11. A Sensor-Aided H.264/AVC Video Encoder for Aerial Video Sequences with In-the-Loop Metadata Correction

    NASA Astrophysics Data System (ADS)

    Cicala, L.; Angelino, C. V.; Ruatta, G.; Baccaglini, E.; Raimondo, N.

    2015-08-01

    Unmanned Aerial Vehicles (UAVs) are often employed to collect high resolution images in order to perform image mosaicking and/or 3D reconstruction. Images are usually stored on board and then processed with on-ground desktop software. In such a way the computational load, and hence the power consumption, is moved to the ground, leaving on board only the task of storing data. Such an approach is important in the case of small multi-rotorcraft UAVs because of their low endurance due to the short battery life. Images can be stored on board with either still image or video data compression. Still image systems are preferred when low frame rates are involved, because video coding systems are based on motion estimation and compensation algorithms which fail when the motion vectors are significantly long and when the overlap between subsequent frames is very small. In this scenario, UAV attitude and position metadata from the Inertial Navigation System (INS) can be employed to estimate global motion parameters without video analysis. A low complexity image analysis can still be performed in order to refine the motion field estimated using only the metadata. In this work, we propose to use this refinement step to improve the position and attitude estimates produced by the navigation system, in order to maximize the encoder performance. Experiments are performed on both simulated and real world video sequences.

  12. Comparison of Inter-Observer Variability and Diagnostic Performance of the Fifth Edition of BI-RADS for Breast Ultrasound of Static versus Video Images.

    PubMed

    Youk, Ji Hyun; Jung, Inkyung; Yoon, Jung Hyun; Kim, Sung Hun; Kim, You Me; Lee, Eun Hye; Jeong, Sun Hye; Kim, Min Jung

    2016-09-01

    Our aim was to compare the inter-observer variability and diagnostic performance of the Breast Imaging Reporting and Data System (BI-RADS) lexicon for breast ultrasound of static and video images. Ninety-nine breast masses visible on ultrasound examination from 95 women 19-81 y of age at five institutions were enrolled in this study. They were scheduled to undergo biopsy or surgery or had been stable for at least 2 y of ultrasound follow-up after benign biopsy results or typically benign findings. For each mass, representative long- and short-axis static ultrasound images were acquired; real-time long- and short-axis B-mode video images through the mass area were separately saved as cine clips. Each image was reviewed independently by five radiologists who were asked to classify ultrasound features according to the fifth edition of the BI-RADS lexicon. Inter-observer variability was assessed using kappa (κ) statistics. Diagnostic performance on static and video images was compared using the area under the receiver operating characteristic curve. No significant difference was found in κ values between static and video images for all descriptors, although κ values of video images were higher than those of static images for shape, orientation, margin and calcifications. After receiver operating characteristic curve analysis, the video images (0.83, range: 0.77-0.87) had higher areas under the curve than the static images (0.80, range: 0.75-0.83; p = 0.08). Inter-observer variability and diagnostic performance of video images was similar to that of static images on breast ultrasonography according to the new edition of BI-RADS. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  13. Beauty and thinness messages in children's media: a content analysis.

    PubMed

    Herbozo, Sylvia; Tantleff-Dunn, Stacey; Gokee-Larose, Jessica; Thompson, J Kevin

    2004-01-01

    Research suggests that young children have body image concerns, such as a desire for thinness and an avoidance of obesity. Surprisingly, few studies have investigated how children's body preferences and stereotypes are influenced by media aimed at children. In order to gain a better understanding of the content of such media, a content analysis was used to examine body image-related messages in popular children's videos and books. Results indicated that messages emphasizing the importance of physical appearance and portraying body stereotypes are present in many children's videos but relatively few books. Of the videos examined, the ones that exhibited the most body image-related messages were Cinderella and The Little Mermaid. Indian in the Cupboard and ET were the videos with the least number of body image-related messages. Of the books studied, the one with the highest number of body image-related messages was Rapunzel. Ginger and The Stinky Cheese Man were the only books studied that did not exhibit body image-related messages. Implications of an association of beauty and thinness in children's media are explored.

  14. Visual communications and image processing '92; Proceedings of the Meeting, Boston, MA, Nov. 18-20, 1992

    NASA Astrophysics Data System (ADS)

    Maragos, Petros

    The topics discussed at the conference include hierarchical image coding, motion analysis, feature extraction and image restoration, video coding, and morphological and related nonlinear filtering. Attention is also given to vector quantization, morphological image processing, fractals and wavelets, architectures for image and video processing, image segmentation, biomedical image processing, and model-based analysis. Papers are presented on affine models for motion and shape recovery, filters for directly detecting surface orientation in an image, tracking of unresolved targets in infrared imagery using a projection-based method, adaptive-neighborhood image processing, and regularized multichannel restoration of color images using cross-validation. (For individual items see A93-20945 to A93-20951)

  15. Chromatic Image Analysis For Quantitative Thermal Mapping

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M.

    1995-01-01

    Chromatic image analysis system (CIAS) developed for use in noncontact measurements of temperatures on aerothermodynamic models in hypersonic wind tunnels. Based on concept of temperature coupled to shift in color spectrum for optical measurement. Video camera images fluorescence emitted by phosphor-coated model at two wavelengths. Temperature map of model then computed from relative brightnesses in video images of model at those wavelengths. Eliminates need for intrusive, time-consuming, contact temperature measurements by gauges, making it possible to map temperatures on complex surfaces in timely manner and at reduced cost.
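
    Per pixel, the method reduces to dividing the two wavelength images and looking the ratio up in a monotonic calibration curve. Below is a minimal sketch in which the array names and the tabulated calibration interface are assumptions, not the CIAS implementation.

```python
import numpy as np

def temperature_map(img_w1, img_w2, cal_ratio, cal_temp):
    """Convert the per-pixel brightness ratio at two emission wavelengths
    into temperature via a tabulated calibration curve.
    cal_ratio: ascending calibration ratios; cal_temp: matching temperatures."""
    ratio = img_w1.astype(float) / np.clip(img_w2.astype(float), 1e-6, None)
    return np.interp(ratio, cal_ratio, cal_temp)   # per-pixel temperature
```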

  16. Exploring Techniques for Vision Based Human Activity Recognition: Methods, Systems, and Evaluation

    PubMed Central

    Xu, Xin; Tang, Jinshan; Zhang, Xiaolong; Liu, Xiaoming; Zhang, Hong; Qiu, Yimin

    2013-01-01

    With the wide applications of vision based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activities, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation of the performance of human activity recognition. PMID:23353144

  17. A Web-Based Video Digitizing System for the Study of Projectile Motion.

    ERIC Educational Resources Information Center

    Chow, John W.; Carlton, Les G.; Ekkekakis, Panteleimon; Hay, James G.

    2000-01-01

    Discusses advantages of a video-based, digitized image system for the study and analysis of projectile motion in the physics laboratory. Describes the implementation of a web-based digitized video system. (WRM)

  18. The compressed average image intensity metric for stereoscopic video quality assessment

    NASA Astrophysics Data System (ADS)

    Wilczewski, Grzegorz

    2016-09-01

    The following article presents the design, creation and testing of a novel metric for 3DTV video quality evaluation. The Compressed Average Image Intensity (CAII) mechanism is based upon stereoscopic video content analysis, setting its core feature and functionality to serve as a versatile tool for effective 3DTV service quality assessment. Being an objective type of quality metric, it may be utilized as a reliable source of information about the actual performance of a given 3DTV system under strict provider evaluation. Concerning testing and the overall performance analysis of the CAII metric, the paper presents a comprehensive study of results gathered across several testing routines among a selected set of samples of stereoscopic video content. As a result, the designed method for stereoscopic video quality evaluation is investigated across a range of synthetic visual impairments injected into the original video stream.

  19. Minimally invasive surgical video analysis: a powerful tool for surgical training and navigation.

    PubMed

    Sánchez-González, P; Oropesa, I; Gómez, E J

    2013-01-01

    Analysis of minimally invasive surgical videos is a powerful tool to drive new solutions for achieving reproducible training programs, objective and transparent assessment systems and navigation tools to assist surgeons and improve patient safety. This paper presents how video analysis contributes to the development of new cognitive and motor training and assessment programs as well as new paradigms for image-guided surgery.

  20. Using Image Analysis to Explore Changes In Bacterial Mat Coverage at the Base of a Hydrothermal Vent within the Caldera of Axial Seamount

    NASA Astrophysics Data System (ADS)

    Knuth, F.; Crone, T. J.; Marburg, A.

    2017-12-01

    The Ocean Observatories Initiative's (OOI) Cabled Array is delivering real-time high-definition video data from an HD video camera (CAMHD), installed at the Mushroom hydrothermal vent in the ASHES hydrothermal vent field within the caldera of Axial Seamount, an active submarine volcano located approximately 450 kilometers off the coast of Washington at a depth of 1,542 m. Every three hours the camera pans, zooms and focuses in on nine distinct scenes of scientific interest across the vent, producing 14-minute-long videos during each run. This standardized video sampling routine enables scientists to programmatically analyze the content of the video using automated image analysis techniques. Each scene-specific time series dataset can service a wide range of scientific investigations, including the estimation of bacterial flux into the system by quantifying chemosynthetic bacterial clusters (floc) present in the water column, relating periodicity in hydrothermal vent fluid flow to earth tides, measuring vent chimney growth in response to changing hydrothermal fluid flow rates, or mapping the patterns of fauna colonization, distribution and composition across the vent over time. We are currently investigating the seventh scene in the sampling routine, focused on the bacterial mat covering the seafloor at the base of the vent. We quantify the change in bacterial mat coverage over time using image analysis techniques, and examine the relationship between mat coverage, fluid flow processes, episodic chimney collapse events, and other processes observed by Cabled Array instrumentation. This analysis is being conducted using cloud-enabled computer vision processing techniques, programmatic image analysis, and time-lapse video data collected over the course of the first CAMHD deployment, from November 2015 to July 2016.

  1. Temporal compressive imaging for video

    NASA Astrophysics Data System (ADS)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, imagers are required to have higher imaging speed, for example in gunpowder blasting analysis and in observing high-speed biological phenomena. However, measuring high-speed video is a challenge to camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
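
    The forward model behind TCI is a coded, summed exposure. The numpy sketch below simulates one compressive snapshot from T=8 hypothetical high-speed frames with random binary masks; the TwIST and GMM reconstructions that invert this linear model are beyond a short example.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 256, 256                      # temporal compression ratio T = 8
video = rng.random((T, H, W))              # hypothetical high-speed frames
masks = rng.integers(0, 2, size=(T, H, W)).astype(float)  # per-frame codes

# Each high-speed frame is modulated by its own code, and the modulated
# frames sum on the detector during a single exposure:
measurement = (masks * video).sum(axis=0)  # one (H, W) compressive frame

# Reconstruction then recovers the 8 frames patch by patch (e.g. 8x8 tiles).
```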

  2. Experimental design and analysis of JND test on coded image/video

    NASA Astrophysics Data System (ADS)

    Lin, Joe Yuchieh; Jin, Lina; Hu, Sudeng; Katsavounidis, Ioannis; Li, Zhi; Aaron, Anne; Kuo, C.-C. Jay

    2015-09-01

    The visual Just-Noticeable-Difference (JND) metric is characterized by the minimum difference between two visual stimuli that can be detected. Conducting a subjective JND test is a labor-intensive task. In this work, we present a novel interactive method for performing the visual JND test on compressed image/video. JND has been used to enhance perceptual visual quality in the context of image/video compression. Given a set of coding parameters, a JND test is designed to determine the distinguishable quality level against a reference image/video, which is called the anchor. The JND metric can be used to save coding bitrate by exploiting the special characteristics of the human visual system. The proposed JND test is conducted using a binary forced choice, which is often adopted to discriminate the difference in perception in a psychophysical experiment. The assessors are asked to compare coded image/video pairs and determine whether they are of the same quality or not. A bisection procedure is designed to find the JND locations so as to reduce the required number of comparisons over a wide range of bitrates. We demonstrate the efficiency of the proposed JND test and report experimental results on the image and video JND tests.
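
    The bisection search can be sketched as follows, assuming assessor responses are monotonic in bitrate; the function and oracle names are illustrative, not the authors' test software.

```python
def find_jnd(bitrates, same_quality):
    """bitrates: ascending candidate bitrates;
    same_quality(i): True if the clip coded at bitrates[i] is judged
    indistinguishable from the anchor. Returns the lowest such bitrate,
    using O(log N) comparisons instead of N."""
    lo, hi = 0, len(bitrates) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if same_quality(mid):
            hi = mid          # indistinguishable: try fewer bits
        else:
            lo = mid + 1      # visibly different: need more bits
    return bitrates[lo]
```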

  3. Video coding for next-generation surveillance systems

    NASA Astrophysics Data System (ADS)

    Klasen, Lena M.; Fahlander, Olov

    1997-02-01

    Video is used as the recording medium in surveillance systems and, increasingly, by the Swedish Police Force. Methods for analyzing video using an image processing system have recently been introduced at the Swedish National Laboratory of Forensic Science, and new methods are in focus in a research project at Linkoping University, Image Coding Group. The accuracy of the results of those forensic investigations often depends on the quality of the video recordings, and one of the major problems when analyzing videos from crime scenes is the poor quality of the recordings. Enhancing poor image quality might add manipulative or subjective effects and does not seem to be the right way of getting reliable analysis results. The surveillance systems in use today are mainly based on video techniques, VHS or S-VHS, and the weakest link is the video cassette recorder (VCR). Multiplexers that select one of many camera outputs for recording are another problem, as they often filter the video signal and limit recording to only one of the available cameras connected to the VCR. A way to get around the problem of poor recording is to simultaneously record all camera outputs digitally. It is also very important to build such a system bearing in mind that image processing analysis methods become more important as a complement to the human eye. Using one or more cameras gives a large amount of data, and the need for data compression is more than obvious. Crime scenes often involve persons or moving objects, and the available coding techniques are more or less useful. Our goal is to propose a possible system that is the best compromise with respect to what needs to be recorded, movement in the recorded scene, loss of information, resolution, etc., to secure efficient recording of the crime and enable forensic analysis. The preventive effect of having a well-functioning surveillance system and well-established image analysis methods is not to be neglected. Aspects of this next generation of digital surveillance systems are discussed in this paper.

  4. Using Image Modelling to Teach Newton's Laws with the Ollie Trick

    ERIC Educational Resources Information Center

    Dias, Marco Adriano; Carvalho, Paulo Simeão; Vianna, Deise Miranda

    2016-01-01

    Image modelling is a video-based teaching tool that combines strobe images and video analysis. This tool can enable both a qualitative and a quantitative approach to the teaching of physics, in a much more engaging and appealing way than traditional expository practice. In a specific scenario shown in this paper, the Ollie trick, we…

  5. A Macintosh-Based Scientific Images Video Analysis System

    NASA Technical Reports Server (NTRS)

    Groleau, Nicolas; Friedland, Peter (Technical Monitor)

    1994-01-01

    A set of experiments was designed at MIT's Man-Vehicle Laboratory in order to evaluate the effects of zero gravity on the human orientation system. During many of these experiments, the movements of the eyes are recorded on high-quality video cassettes. The images must be analyzed off-line to calculate the position of the eyes at every moment in time. To this end, I have implemented a simple, inexpensive computerized system which measures the angle of rotation of the eye from digitized video images. The system is implemented on a desktop Macintosh computer, processes one playback frame per second, and exhibits adequate levels of accuracy and precision. The system uses LabVIEW, a digital output board, and a video input board to control a VCR, digitize video images, analyze them, and provide a user-friendly interface for the various phases of the process. The system uses the Concept Vi LabVIEW library (Graftek's Image, Meudon la Foret, France) for image grabbing and displaying as well as translation to and from LabVIEW arrays. Graftek's software layer drives an Image Grabber board from Neotech (Eastleigh, United Kingdom). A Colour Adapter box from Neotech provides adequate video signal synchronization. The system also requires a LabVIEW-driven digital output board (MacADIOS II from GW Instruments, Cambridge, MA) controlling a slightly modified VCR remote control used mainly to advance the video tape frame by frame.

  6. Peri-operative imaging of cancer margins with reflectance confocal microscopy during Mohs micrographic surgery: feasibility of a video-mosaicing algorithm

    NASA Astrophysics Data System (ADS)

    Flores, Eileen; Yelamos, Oriol; Cordova, Miguel; Kose, Kivanc; Phillips, William; Rossi, Anthony; Nehal, Kishwer; Rajadhyaksha, Milind

    2017-02-01

    Reflectance confocal microscopy (RCM) imaging shows promise for guiding surgical treatment of skin cancers. Recent technological advancements such as the introduction of the handheld version of the reflectance confocal microscope, video acquisition and video-mosaicing have improved RCM as an emerging tool to evaluate cancer margins during routine surgical skin procedures such as Mohs micrographic surgery (MMS). Detection of residual non-melanoma skin cancer (NMSC) tumor during MMS is feasible, as demonstrated by the introduction of real-time perioperative imaging on patients in the surgical setting. Our study is currently testing the feasibility of a new mosaicing algorithm for perioperative RCM imaging of NMSC cancer margins on patients during MMS. We report progress toward imaging and image analysis on forty-five patients, who presented for MMS at the MSKCC Dermatology service. The first 10 patients were used as a training set to establish an RCM imaging algorithm, which was implemented on the remaining test set of 35 patients. RCM imaging, using 35% AlCl3 for nuclear contrast, was performed pre- and intra-operatively with the Vivascope 3000 (Caliber ID). Imaging was performed in quadrants in the wound, to simulate the Mohs surgeon's examination of pathology. Videos were taken at the epidermal and deep dermal margins. Our Mohs surgeons assessed all videos and video-mosaics for quality and correlation to histology. Overall, our RCM video-mosaicing algorithm is feasible. RCM videos and video-mosaics of the epidermal and dermal margins were found to be of clinically acceptable quality. Assessment of cancer margins was affected by type of NMSC, size and location. Among the test set of 35 patients, 83% showed acceptable imaging quality, resolution and contrast. Visualization of nuclear and cellular morphology of residual BCC/SCC tumor and normal skin features could be detected in the peripheral and deep dermal margins. We observed correlation between the RCM videos/video-mosaics and the corresponding histology in 32 lesions. Peri-operative RCM imaging shows promise for improved and faster detection of cancer margins and guiding MMS in the surgical setting.

  7. Deep Lake Explorer: Using citizen science to analyze underwater video from the Great Lakes

    EPA Science Inventory

    While underwater video collection technology continues to improve, advancements in underwater video analysis techniques have lagged. Crowdsourcing image interpretation using the Zooniverse platform has proven successful for many projects, but few projects to date have included vi...

  8. Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video

    NASA Astrophysics Data System (ADS)

    Yeo, Boon-Lock; Liu, Bede

    1996-03-01

    Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.
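
    Caption text produces dense, high-contrast edges in otherwise smooth frame regions, which is why detection can work on reduced images. The Python/OpenCV sketch below is a generic edge-density heuristic on a reduced grayscale frame (such as a DC image recovered from MPEG macroblocks), not the authors' exact detector, and the threshold value is illustrative.

        import cv2
        import numpy as np

        def detect_caption_rows(reduced_gray: np.ndarray, thresh: float = 0.15) -> np.ndarray:
            """Flag rows of a reduced frame whose edge density suggests caption text."""
            edges = cv2.Canny(reduced_gray, 100, 200)
            density = (edges > 0).mean(axis=1)       # fraction of edge pixels per row
            return np.flatnonzero(density > thresh)  # indices of candidate caption rows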

  9. Blurry-frame detection and shot segmentation in colonoscopy videos

    NASA Astrophysics Data System (ADS)

    Oh, JungHwan; Hwang, Sae; Tavanapong, Wallapak; de Groen, Piet C.; Wong, Johnny

    2003-12-01

    Colonoscopy is an important screening procedure for colorectal cancer. During this procedure, the endoscopist visually inspects the colon. Human inspection, however, is not without error. We hypothesize that colonoscopy videos may contain additional valuable information missed by the endoscopist. Video segmentation is the first necessary step for the content-based video analysis and retrieval to provide efficient access to the important images and video segments from a large colonoscopy video database. Based on the unique characteristics of colonoscopy videos, we introduce a new scheme to detect and remove blurry frames, and segment the videos into shots based on the contents. Our experimental results show that the average precision and recall of the proposed scheme are over 90% for the detection of non-blurry images. The proposed method of blurry frame detection and shot segmentation is extensible to the videos captured from other endoscopic procedures such as upper gastrointestinal endoscopy, enteroscopy, cystoscopy, and laparoscopy.
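
    A common blur measure, offered here only as an illustration of the idea: the variance of the Laplacian of a frame. Sharp frames have strong second derivatives at edges; blurry frames do not. The paper's detector is tailored to colonoscopy video, so treat this Python/OpenCV snippet and its threshold as assumptions rather than the proposed scheme.

        import cv2

        def is_blurry(frame_bgr, threshold: float = 100.0) -> bool:
            """Low Laplacian variance means few sharp edges, i.e. a likely blurry frame."""
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold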

  10. SSME propellant path leak detection real-time

    NASA Technical Reports Server (NTRS)

    Crawford, R. A.; Smith, L. M.

    1994-01-01

    Included are four documents that outline the technical aspects of the research performed on NASA Grant NAG8-140: 'A System for Sequential Step Detection with Application to Video Image Processing'; 'Leak Detection from the SSME Using Sequential Image Processing'; 'Digital Image Processor Specifications for Real-Time SSME Leak Detection'; and 'A Color Change Detection System for Video Signals with Applications to Spectral Analysis of Rocket Engine Plumes'.

  11. Application of automatic image analysis in wood science

    Treesearch

    Charles W. McMillin

    1982-01-01

    In this paper I describe an image analysis system and illustrate with examples the application of automatic quantitative measurement to wood science. Automatic image analysis, a powerful and relatively new technology, uses optical, video, electronic, and computer components to rapidly derive information from images with minimal operator interaction. Such instruments...

  12. Submillimeter video imaging with a superconducting bolometer array

    NASA Astrophysics Data System (ADS)

    Becker, Daniel Thomas

    Millimeter wavelength radiation holds promise for detection of security threats at a distance, including suicide bombers and maritime threats in poor weather. The high sensitivity of superconducting Transition Edge Sensor (TES) bolometers makes them ideal for passive imaging of thermal signals at millimeter and submillimeter wavelengths. I have built a 350 GHz video-rate imaging system using an array of feedhorn-coupled TES bolometers. The system operates at standoff distances of 16 m to 28 m with a measured spatial resolution of 1.4 cm (at 17 m). It currently contains one 251-detector sub-array, and can be expanded to contain four sub-arrays for a total of 1004 detectors. The system has been used to take video images that reveal the presence of weapons concealed beneath a shirt in an indoor setting. This dissertation describes the design, implementation and characterization of this system. It presents an overview of the challenges associated with standoff passive imaging and how these problems can be overcome through the use of large-format TES bolometer arrays. I describe the design of the system and cover the results of detector and optical characterization. I explain the procedure used to generate video images using the system, and present a noise analysis of those images. This analysis indicates that the Noise Equivalent Temperature Difference (NETD) of the video images is currently limited by artifacts of the scanning process. More sophisticated image processing algorithms can eliminate these artifacts and reduce the NETD to 100 mK, which is the target value for the most demanding passive imaging scenarios. I finish with an overview of future directions for this system.

  13. A method of intentional movement estimation of oblique small-UAV videos stabilized based on homography model

    NASA Astrophysics Data System (ADS)

    Guo, Shiyi; Mai, Ying; Zhao, Hongying; Gao, Pengqi

    2013-05-01

    The airborne video streams of small UAVs are commonly plagued by distracting jitter and shaking, disorienting rotations, noisy and distorted images, and other unwanted movements. These problems collectively make it very difficult for observers to obtain useful information from the video. Because of the small payload of small UAVs, improving image quality by means of electronic image stabilization is a priority. But when a small UAV makes a turn, its flight characteristics tend to make the video oblique, which creates considerable difficulty for electronic image stabilization. The homography model performs well for oblique image motion estimation but brings great challenges to intentional motion estimation. In this paper, we therefore focus on solving the problem of stabilizing video while small UAVs bank and turn. We assume that the small UAV flies along an arc of fixed turning radius. Accordingly, after a series of experimental analyses of the flight characteristics and turning paths of small UAVs, we present a new method for estimating the intentional motion, in which the path of the frame center is used to fit the video's moving track. Meanwhile, dynamic mosaicking of the image sequences is performed to make up for the limited field of view. Finally, the proposed algorithm was implemented and validated on actual airborne videos. The results show that the proposed method is effective for stabilizing the oblique video of small UAVs.
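
    For reference, inter-frame motion under a homography model is typically estimated by matching features between consecutive frames and fitting the 3x3 matrix robustly. The Python/OpenCV sketch below shows only that standard step; the paper's contribution, fitting the intentional motion to the arc traced by the frame centers, is not reproduced here.

        import cv2
        import numpy as np

        def estimate_homography(prev_gray: np.ndarray, curr_gray: np.ndarray) -> np.ndarray:
            """Fit a 3x3 homography mapping prev-frame points to curr-frame points."""
            orb = cv2.ORB_create(1000)
            kp1, des1 = orb.detectAndCompute(prev_gray, None)
            kp2, des2 = orb.detectAndCompute(curr_gray, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
            src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # RANSAC rejects outliers
            return H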

  14. A video wireless capsule endoscopy system powered wirelessly: design, analysis and experiment

    NASA Astrophysics Data System (ADS)

    Pan, Guobing; Xin, Wenhui; Yan, Guozheng; Chen, Jiaoliao

    2011-06-01

    Wireless capsule endoscopy (WCE), as a relatively new technology, has brought about a revolution in the diagnosis of gastrointestinal (GI) tract diseases. However, existing WCE systems are not widely applied in clinical practice because of their low frame rate and low image resolution. A video WCE system based on a wireless power supply is developed in this paper. This WCE system consists of a video capsule endoscope (CE), a wireless power transmission device, a receiving box and an image processing station. Powered wirelessly, the video CE is able to image the GI tract and transmit the images wirelessly at a frame rate of 30 frames per second (f/s). A mathematical prototype was built to analyze the power transmission system, and experiments were performed to test its energy transfer capability. The results showed that the wireless electric power supply system was able to transfer more than 136 mW of power, which was enough for the operation of a video CE. In in vitro experiments, the video CE produced clear images of the small intestine of a pig at a resolution of 320 × 240, and transmitted NTSC-format video outside the body. Because of the wireless power supply, a video WCE system with a high frame rate and high resolution becomes feasible, providing a novel solution for the diagnosis of the GI tract in the clinic.

  15. VQone MATLAB toolbox: A graphical experiment builder for image and video quality evaluations: VQone MATLAB toolbox.

    PubMed

    Nuutinen, Mikko; Virtanen, Toni; Rummukainen, Olli; Häkkinen, Jukka

    2016-03-01

    This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process. This includes building and conducting experiments and data analysis. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods. Moreover, it enables the creation of new methods or modified versions from standard methods. VQone is distributed free of charge under the terms of the GNU general public license and allows code modifications to be made so that the program's functions can be adjusted according to a user's requirements. VQone is available for download from the project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

  16. A gradient method for the quantitative analysis of cell movement and tissue flow and its application to the analysis of multicellular Dictyostelium development.

    PubMed

    Siegert, F; Weijer, C J; Nomura, A; Miike, H

    1994-01-01

    We describe the application of a novel image processing method, which allows quantitative analysis of cell and tissue movement in a series of digitized video images. The result is a vector velocity field showing average direction and velocity of movement for every pixel in the frame. We apply this method to the analysis of cell movement during different stages of the Dictyostelium developmental cycle. We analysed time-lapse video recordings of cell movement in single cells, mounds and slugs. The program can correctly assess the speed and direction of movement of either unlabelled or labelled cells in a time series of video images depending on the illumination conditions. Our analysis of cell movement during multicellular development shows that the entire morphogenesis of Dictyostelium is characterized by rotational cell movement. The analysis of cell and tissue movement by the velocity field method should be applicable to the analysis of morphogenetic processes in other systems such as gastrulation and neurulation in vertebrate embryos.
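
    Dense optical flow is the modern off-the-shelf way to obtain such a per-pixel vector velocity field. The Python/OpenCV sketch below uses the Farneback algorithm as a stand-in for the paper's gradient method, which it is not; the flow parameters are illustrative defaults.

        import cv2
        import numpy as np

        def velocity_field(frame1_gray, frame2_gray, dt: float = 1.0):
            """Per-pixel speed and direction of movement between two video frames."""
            flow = cv2.calcOpticalFlowFarneback(frame1_gray, frame2_gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            speed = np.linalg.norm(flow, axis=2) / dt           # pixels per time unit
            direction = np.arctan2(flow[..., 1], flow[..., 0])  # radians, per pixel
            return speed, direction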

  17. Phytoplankton Imaging and Analysis System: Instrumentation for Field and Laboratory Acquisition, Analysis and WWW/LAN-Based Sharing of Marine Phytoplankton Data (DURIP)

    DTIC Science & Technology

    2000-09-30

    networks (LAN), (3) quantifying size, shape, and other parameters of plankton cells and colonies via image analysis and image reconstruction, and (4) creating educational materials (e.g. lectures, videos etc.).

  18. Visual analysis of trash bin processing on garbage trucks in low resolution video

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Loibner, Gernot

    2015-03-01

    We present a system for trash can detection and counting from a camera mounted on a garbage collection truck. A working prototype has been successfully implemented and tested with several hours of real-world video. The detection pipeline consists of HOG detectors for two trash can sizes, plus mean-shift tracking and low-level image processing for analysis of the garbage disposal process. Considering the harsh environment and unfavorable imaging conditions, the process already works well enough that very useful measurements can be extracted from the video data. The false positive/false negative rate of the full processing pipeline is about 5-6% in fully automatic operation. Video data from a full day (about 8 hrs) can be processed in about 30 minutes on a standard PC.
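
    As a rough illustration of a HOG sliding-window detector like the two size-specific detectors described, the Python/OpenCV sketch below scores fixed-size windows with a linear SVM. The window size, stride, and the SVM weights (which would be trained offline on labelled trash can crops) are all assumptions, not details from the record.

        import cv2
        import numpy as np

        # HOG descriptor for a 64x64 detection window (parameters are illustrative).
        hog = cv2.HOGDescriptor(_winSize=(64, 64), _blockSize=(16, 16),
                                _blockStride=(8, 8), _cellSize=(8, 8), _nbins=9)

        def score_windows(gray: np.ndarray, svm_w: np.ndarray, svm_b: float, stride: int = 8):
            """Return top-left corners of windows with a positive linear-SVM margin."""
            h, w = gray.shape
            hits = []
            for y in range(0, h - 64, stride):
                for x in range(0, w - 64, stride):
                    feat = hog.compute(gray[y:y + 64, x:x + 64]).ravel()
                    if feat @ svm_w + svm_b > 0:  # hypothetical trained weights
                        hits.append((x, y))
            return hits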

  19. RAPID: A random access picture digitizer, display, and memory system

    NASA Technical Reports Server (NTRS)

    Yakimovsky, Y.; Rayfield, M.; Eskenazi, R.

    1976-01-01

    RAPID is a system capable of providing convenient digital analysis of video data in real-time. It has two modes of operation. The first allows for continuous digitization of an EIA RS-170 video signal. Each frame in the video signal is digitized and written in 1/30 of a second into RAPID's internal memory. The second mode leaves the content of the internal memory independent of the current input video. In both modes of operation the image contained in the memory is used to generate an EIA RS-170 composite video output signal representing the digitized image in the memory so that it can be displayed on a monitor.

  20. Joint Attributes and Event Analysis for Multimedia Event Detection.

    PubMed

    Ma, Zhigang; Chang, Xiaojun; Xu, Zhongwen; Sebe, Nicu; Hauptmann, Alexander G

    2017-06-15

    Semantic attributes have been increasingly used in the past few years for multimedia event detection (MED), with promising results. The motivation is that multimedia events generally consist of lower-level components such as objects, scenes, and actions. By characterizing multimedia event videos with semantic attributes, one can exploit more informative cues for improved detection results. Much existing work obtains semantic attributes from images, which may be suboptimal for video analysis since these image-inferred attributes do not carry the dynamic information that is essential for videos. To address this issue, we propose to learn semantic attributes from external videos using their semantic labels. We name them video attributes in this paper. In contrast with multimedia event videos, these external videos depict lower-level contents such as objects, scenes, and actions. To harness video attributes, we propose an algorithm built on a correlation vector that correlates them to a target event. Consequently, we can incorporate video attributes latently as extra information into the event detector learnt from multimedia event videos in a joint framework. To validate our method, we perform experiments on the real-world large-scale TRECVID MED 2013 and 2014 data sets and compare our method with several state-of-the-art algorithms. The experiments show that our method is advantageous for MED.
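
    A toy numpy rendition of the correlation idea: given attribute scores for a set of training videos and binary labels for the target event, the correlation vector weights each attribute by how strongly it co-occurs with the event. The paper's joint framework is considerably richer; this sketch only makes the "correlation vector" notion concrete.

        import numpy as np

        def correlation_vector(attr_scores: np.ndarray, labels: np.ndarray) -> np.ndarray:
            """Pearson correlation of each video attribute with a binary event label.

            attr_scores: (n_videos, n_attributes); labels: (n_videos,) in {0, 1}.
            """
            a = (attr_scores - attr_scores.mean(0)) / (attr_scores.std(0) + 1e-8)
            y = (labels - labels.mean()) / (labels.std() + 1e-8)
            return a.T @ y / len(y)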

  1. Integrated approach to multimodal media content analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1999-12-01

    In this work, we present a system for the automatic segmentation, indexing and retrieval of audiovisual data based on the combination of audio, visual and textual content analysis. The video stream is demultiplexed into audio, image and caption components. Then, a semantic segmentation of the audio signal based on audio content analysis is conducted, and each segment is indexed as one of the basic audio types. The image sequence is segmented into shots based on visual information analysis, and keyframes are extracted from each shot. Meanwhile, keywords are detected from the closed caption. Index tables are designed for both linear and non-linear access to the video. It is shown by experiments that the proposed methods for multimodal media content analysis are effective, and that the integrated framework achieves satisfactory results for video information filtering and retrieval.

  2. Analysis of the color rendition of flexible endoscopes

    NASA Astrophysics Data System (ADS)

    Murphy, Edward M.; Hegarty, Francis J.; McMahon, Barry P.; Boyle, Gerard

    2003-03-01

    Endoscopes are imaging devices routinely used for the diagnosis of disease within the human digestive tract. Light is transmitted into the body cavity via incoherent fibreoptic bundles and is controlled by a light feedback system. Fibreoptic endoscopes use coherent fibreoptic bundles to provide the clinician with an image. It is also possible to couple fibreoptic endoscopes to a clip-on video camera. Video endoscopes consist of a small CCD camera, which is inserted into the gastrointestinal tract, and an associated image processor that converts the signal to analogue RGB video signals. Images from both types of endoscope are displayed on standard video monitors. Diagnosis depends upon being able to determine changes in the structure and colour of tissues and biological fluids, and therefore upon the ability of the endoscope to reproduce the colour of these tissues and fluids with fidelity. This study investigates the colour reproduction of flexible optical and video endoscopes. Fibreoptic and video endoscopes alter image colour characteristics in different ways. The colour rendition of fibreoptic endoscopes was assessed by coupling them to a video camera and applying video colorimetric techniques. These techniques were then used on video endoscopes to assess how their colour rendition compared with that of optical endoscopes. In both cases results were obtained at fixed illumination settings. Video endoscopes were then assessed with varying levels of illumination. Initial results show that at constant luminance, endoscopy systems introduce non-linear shifts in colour. Techniques for examining how this colour shift varies with illumination intensity were developed, and both methodology and results will be presented. We conclude that more rigorous quality assurance is required to reduce colour error, and we are developing calibration procedures applicable to medical endoscopes.

  3. All Source Sensor Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trease, Harold (PNNL)

    2012-10-10

    ASSA is a software application that processes binary data into summarized index tables that can be used to organize features contained within the data. ASSA's index tables can also be used to search for user specified features. ASSA is designed to organize and search for patterns in unstructured binary data streams or archives, such as video, images, audio, and network traffic. ASSA is basically a very general search engine used to search for any pattern in any binary data stream. It has uses in video analytics, image analysis, audio analysis, searching hard-drives, monitoring network traffic, etc.

  4. Utilizing ISS Camera Systems for Scientific Analysis of Lightning Characteristics and Comparison with ISS-LIS and GLM

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Lang, Timothy J.; Leake, Skye; Runco, Mario, Jr.; Blakeslee, Richard J.

    2017-01-01

    Video and still-frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked capabilities for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how georeferenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration.

  5. Predictive Displays for High Latency Teleoperation

    DTIC Science & Technology

    2016-08-04

    This briefing, "Predictive Displays for High Latency Teleoperation," analyzes an existing teleoperation approach in which an operator control unit (OCU) sends throttle, steer and brake commands to the vehicle over the communications channel while H.264 video and vehicle state return over UDP, and examines predictive displays as an opportunity to mitigate outgoing latency. The returned video is not governed by physics, but it does depend on the state of the vehicle. The described implementation is written in C++ with two threads, using OpenCV for image manipulation and FFMPEG for video decoding.

  6. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes left and right video cameras mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll, pitch and yaw axes of the video cameras based upon operator input such as head motion. A laser is provided between the left and right video cameras and is directed by the user to point at a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
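
    The two stages of the described pipeline reduce to a few lines: isolate the laser spot by differencing images taken without and with the laser, then triangulate range from the horizontal disparity of the spot between the left and right cameras. The Python sketch below assumes grayscale images and an illustrative intensity threshold.

        import numpy as np

        def laser_spot(without_laser: np.ndarray, with_laser: np.ndarray, thresh: int = 40):
            """Centroid of pixels present only in the laser-illuminated image."""
            diff = with_laser.astype(int) - without_laser.astype(int)
            ys, xs = np.nonzero(diff > thresh)  # pixels unique to the laser image
            return xs.mean(), ys.mean()

        def stereo_range(x_left: float, x_right: float, focal_px: float, baseline_m: float) -> float:
            """Stereometric range from horizontal disparity: Z = f * B / d (metres)."""
            return focal_px * baseline_m / (x_left - x_right)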

  7. Astrometric and Photometric Analysis of the September 2008 ATV-1 Re-Entry Event

    NASA Technical Reports Server (NTRS)

    Mulrooney, Mark K.; Barker, Edwin S.; Maley, Paul D.; Beaulieu, Kevin R.; Stokely, Christopher L.

    2008-01-01

    NASA utilized image-intensified video cameras for ATV data acquisition from a jet flying at 12.8 km. Afterwards, the video was digitized and then analyzed with a modified commercial software package, Image Systems TrackEye. Astrometric results were limited by saturation, plate scale, and an imposed linear plate solution based on field reference stars. Time-dependent fragment angular trajectories, velocities, accelerations, and luminosities were derived in each video segment. It was evident that individual fragments behave differently. Photometric accuracy was insufficient to confidently assess correlations between luminosity and fragment spatial behavior (velocity, deceleration). Use of high-resolution digital video cameras in the future should remedy this shortcoming.

  8. Image sequence analysis workstation for multipoint motion analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-08-01

    This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing and display techniques. In addition to automation and increased throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for location and tracking of more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie-loop playback, freeze-frame display, and digital image enhancement; 3) multiple leading-edge tracking in addition to object centroids at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.

  9. Objective analysis of image quality of video image capture systems

    NASA Astrophysics Data System (ADS)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images have been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moiré patterns. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using them. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray and do not resolve the lines, or give horizontal or vertical streaking. While many of these results are significant from an engineering standpoint alone, there are clinical implications, and some anatomy or pathology may not be visualized if an image capture system is used improperly.

  10. Evaluation of lens distortion errors using an underwater camera system for video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.

    1994-01-01

    Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. These data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero gravity with neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.

  11. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
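
    The calibration reduces to a lookup: sweep the artificial star through known brightnesses, record the integrated signal per frame, and later invert the measured response curve to turn signals into brightnesses. A minimal numpy sketch of that inversion, assuming a monotonic response curve, is shown below.

        import numpy as np

        def build_response_curve(known_brightness: np.ndarray, measured_signal: np.ndarray):
            """Sort the calibration sweep by signal so it can be inverted by interpolation."""
            order = np.argsort(measured_signal)
            return measured_signal[order], known_brightness[order]

        def signal_to_brightness(signal, curve):
            """Invert the end-to-end response curve (assumed monotonic) for new data."""
            sig, bright = curve
            return np.interp(signal, sig, bright)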

  12. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2002-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.

  13. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2003-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.
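
    The per-block translation step that both patents build on can be sketched with normalized cross-correlation template matching: each pixel block from the key field is searched for within a window of the new field. In the Python/OpenCV sketch below, the block size and search radius are illustrative, the block and search window are assumed to lie inside both fields, and the recovery of rotation and magnification from the nested block translations is not shown.

        import cv2
        import numpy as np

        def block_translation(key_field, new_field, x, y, size=32, search=16):
            """(dx, dy) of the block at (x, y) from the key field to the new field."""
            block = key_field[y:y + size, x:x + size]
            x0, y0 = max(0, x - search), max(0, y - search)
            region = new_field[y0:y0 + size + 2 * search, x0:x0 + size + 2 * search]
            res = cv2.matchTemplate(region, block, cv2.TM_CCORR_NORMED)
            _, _, _, (bx, by) = cv2.minMaxLoc(res)  # best-match location in the region
            return x0 + bx - x, y0 + by - y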

  14. [Development of an original computer program FISHMet: use for molecular cytogenetic diagnosis and genome mapping by fluorescent in situ hybridization (FISH)].

    PubMed

    Iurov, Iu B; Khazatskiĭ, I A; Akindinov, V A; Dovgilov, L V; Kobrinskiĭ, B A; Vorsanova, S G

    2000-08-01

    Original software, FISHMet, has been developed and tested for improving the efficiency of diagnosis of hereditary diseases caused by chromosome aberrations and for chromosome mapping by the fluorescent in situ hybridization (FISH) method. The program allows creation and analysis of pseudocolor chromosome images and hybridization signals under Windows 95, supports computer analysis and editing of the results of pseudocolor in situ hybridization, including successive superposition of the initial black-and-white images acquired through fluorescent filters (blue, green, and red), and permits editing of each image individually or of a summary pseudocolor image in BMP, TIFF, and JPEG formats. Components of a computer image analysis system (LOMO, Leitz Ortoplan, and Axioplan fluorescent microscopes; COHU 4910 and Sanyo VCB-3512P CCD cameras; Miro-Video, Scion LG-3, and VG-5 image capture boards; and Pentium 100 and Pentium 200 computers) and specialized software for image capture and visualization (Scion Image PC and Video-Cup) were used with good results in the study.

  15. Application of Integral Optical Flow for Determining Crowd Movement from Video Images Obtained Using Video Surveillance Systems

    NASA Astrophysics Data System (ADS)

    Chen, H.; Ye, Sh.; Nedzvedz, O. V.; Ablameyko, S. V.

    2018-03-01

    Study of crowd movement is an important practical problem, and its solution is used in video surveillance systems for preventing various emergency situations. In the general case, a group of fast-moving people is of more interest than a group of stationary or slow-moving people. We propose a new method for crowd movement analysis using a video sequence, based on integral optical flow. We have determined several characteristics of a moving crowd such as density, speed, direction of motion, symmetry, and in/out index. These characteristics are used for further analysis of a video scene.
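
    Given a flow field (for example, one accumulated by integral optical flow), several of the listed crowd characteristics can be summarized directly, as in the numpy sketch below; the integral optical flow computation itself and the moving-pixel threshold are assumptions, not the paper's definitions.

        import numpy as np

        def crowd_stats(flow: np.ndarray, moving_thresh: float = 0.5):
            """Density, mean speed, and dominant heading from an (H, W, 2) flow field."""
            speed = np.linalg.norm(flow, axis=2)
            moving = speed > moving_thresh
            density = moving.mean()                           # fraction of moving pixels
            mean_speed = speed[moving].mean() if moving.any() else 0.0
            heading = np.arctan2(flow[..., 1][moving].sum(),
                                 flow[..., 0][moving].sum())  # radians
            return density, mean_speed, heading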

  16. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    PubMed Central

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder, for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because HS images differ from traditional videos in the characteristics of their spectral domain and the shape domain of their panchromatic imagery. In this paper, a novel coding framework using Reflectance Prediction Modelling (RPM) within the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data, where every pixel is considered a vector across the spectral bands. By quantitative comparison and analysis of the pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors across bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when we apply HEVC. Every spectral band of an HS image is treated as an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms existing mainstream HS encoders in terms of the rate-distortion performance of HS image compression. PMID:27695102
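
    To make the residual-coding motivation concrete, here is a toy numpy predictor: band k is predicted from band k-1 by a least-squares gain and offset, and only the residual would be coded. The paper instead predicts with Gaussian mixture-based reflectance modelling and feeds the predicted band to HEVC as an extra reference; this sketch is not that method.

        import numpy as np

        def band_residual(prev_band: np.ndarray, curr_band: np.ndarray):
            """Residual after predicting one spectral band from the previous band."""
            x = prev_band.ravel().astype(float)
            y = curr_band.ravel().astype(float)
            gain, offset = np.polyfit(x, y, 1)      # least-squares linear predictor
            prediction = gain * prev_band + offset
            return curr_band - prediction, (gain, offset)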

  17. State of the "art": a taxonomy of artistic stylization techniques for images and video.

    PubMed

    Kyprianidis, Jan Eric; Collomosse, John; Wang, Tinghuai; Isenberg, Tobias

    2013-05-01

    This paper surveys the field of nonphotorealistic rendering (NPR), focusing on techniques for transforming 2D input (images and video) into artistically stylized renderings. We first present a taxonomy of the 2D NPR algorithms developed over the past two decades, structured according to the design characteristics and behavior of each technique. We then describe a chronology of development from the semiautomatic paint systems of the early nineties, through to the automated painterly rendering systems of the late nineties driven by image gradient analysis. Two complementary trends in the NPR literature are then addressed, with reference to our taxonomy. First, the fusion of higher level computer vision and NPR, illustrating the trends toward scene analysis to drive artistic abstraction and diversity of style. Second, the evolution of local processing approaches toward edge-aware filtering for real-time stylization of images and video. The survey then concludes with a discussion of open challenges for 2D NPR identified in recent NPR symposia, including topics such as user and aesthetic evaluation.

  18. EBLAST: an efficient high-compression image transformation 3. application to Internet image and video transmission

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

    A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce significant blocking effects at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper of the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to perform better than JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.

  19. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    PubMed Central

    Lv, Zhuowen; Xing, Xianglei; Wang, Kejun; Guan, Donghai

    2015-01-01

    Gait is a biometric feature uniquely perceptible at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. The Class Energy Image is one of the most important appearance-based gait representation methods and has received much attention. In this paper, we review the expressions and meanings of various Class Energy Image approaches and analyze the information contained in Class Energy Images. Furthermore, the effectiveness and robustness of these approaches are compared on benchmark gait databases. We outline the research challenges and provide promising future directions for the field. To the best of our knowledge, this is the first review that focuses on the Class Energy Image. It can provide a useful reference in the literature on video sensor-based gait representation approaches. PMID:25574935
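
    The best-known member of the Class Energy Image family, the Gait Energy Image, is simply the pixel-wise mean of aligned binary silhouettes over a gait cycle, as the numpy sketch below shows; the silhouette extraction, alignment, and cycle segmentation it presumes are done beforehand.

        import numpy as np

        def gait_energy_image(silhouettes: np.ndarray) -> np.ndarray:
            """GEI: mean of (n_frames, H, W) size-normalized, centred binary silhouettes."""
            return silhouettes.astype(float).mean(axis=0)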

  20. Method for radiometric calibration of an endoscope's camera and light source

    NASA Astrophysics Data System (ADS)

    Rai, Lav; Higgins, William E.

    2008-03-01

    An endoscope is a commonly used instrument for performing minimally invasive visual examination of the tissues inside the body. A physician uses the endoscopic video images to identify tissue abnormalities. The images, however, are highly dependent on the optical properties of the endoscope and its orientation and location with respect to the tissue structure. The analysis of endoscopic video images is, therefore, purely subjective. Studies suggest that the fusion of endoscopic video images (providing color and texture information) with virtual endoscopic views (providing structural information) can be useful for assessing various pathologies for several applications: (1) surgical simulation, training, and pedagogy; (2) the creation of a database for pathologies; and (3) the building of patient-specific models. Such fusion requires both geometric and radiometric alignment of endoscopic video images in the texture space. Inconsistent estimates of texture/color of the tissue surface result in seams when multiple endoscopic video images are combined together. This paper (1) identifies the endoscope-dependent variables to be calibrated for objective and consistent estimation of surface texture/color and (2) presents an integrated set of methods to measure them. Results show that the calibration method can be successfully used to estimate objective color/texture values for simple planar scenes, whereas uncalibrated endoscopes performed very poorly for the same tests.

  1. Two-dimensional thermal video analysis of offshore bird and bat flight

    DOE PAGES

    Matzner, Shari; Cullinan, Valerie I.; Duberstein, Corey A.

    2015-09-11

    Thermal infrared video can provide essential information about bird and bat presence and activity for risk assessment studies, but the analysis of recorded video can be time-consuming and may not extract all of the available information. Automated processing makes continuous monitoring over extended periods of time feasible, and maximizes the information provided by video. This is especially important for collecting data in remote locations that are difficult for human observers to access, such as proposed offshore wind turbine sites. We present guidelines for selecting an appropriate thermal camera based on environmental conditions and the physical characteristics of the target animals. We developed new video image processing algorithms that automate the extraction of bird and bat flight tracks from thermal video, and that characterize the extracted tracks to support animal identification and behavior inference. The algorithms use a video peak store process followed by background masking and perceptual grouping to extract flight tracks. The extracted tracks are automatically quantified in terms that could then be used to infer animal type and possibly behavior. The developed automated processing generates results that are reproducible and verifiable, and reduces the total amount of video data that must be retained and reviewed by human experts. Finally, we suggest models for interpreting thermal imaging information.
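
    The peak-store stage can be pictured as a per-pixel maximum over the thermal sequence, so a warm animal leaves its whole flight track in one composite frame; subtracting a per-pixel median is one simple way to mask the static background. The numpy sketch below illustrates that reading of the pipeline, not the authors' exact implementation.

        import numpy as np

        def flight_track_image(frames: np.ndarray) -> np.ndarray:
            """Composite of warm moving targets from (n_frames, H, W) thermal video."""
            peak = frames.max(axis=0)               # video peak store
            background = np.median(frames, axis=0)  # static-scene estimate
            return peak - background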

  2. Two-dimensional thermal video analysis of offshore bird and bat flight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzner, Shari; Cullinan, Valerie I.; Duberstein, Corey A.

    Thermal infrared video can provide essential information about bird and bat presence and activity for risk assessment studies, but the analysis of recorded video can be time-consuming and may not extract all of the available information. Automated processing makes continuous monitoring over extended periods of time feasible, and maximizes the information provided by video. This is especially important for collecting data in remote locations that are difficult for human observers to access, such as proposed offshore wind turbine sites. We present guidelines for selecting an appropriate thermal camera based on environmental conditions and the physical characteristics of the target animals. We developed new video image processing algorithms that automate the extraction of bird and bat flight tracks from thermal video, and that characterize the extracted tracks to support animal identification and behavior inference. The algorithms use a video peak store process followed by background masking and perceptual grouping to extract flight tracks. The extracted tracks are automatically quantified in terms that could then be used to infer animal type and possibly behavior. The developed automated processing generates results that are reproducible and verifiable, and reduces the total amount of video data that must be retained and reviewed by human experts. Finally, we suggest models for interpreting thermal imaging information.

  3. Application of an electronic image analyzer to dimensional measurements from neutron radiographs

    NASA Technical Reports Server (NTRS)

    Vary, A.; Bowles, K. J.

    1973-01-01

    Means of obtaining improved dimensional measurements from neutron radiographs of nuclear fuel elements are discussed. The use of video-electronic image analysis relative to edge definition in radiographic images is described. Based on this study, an edge definition criterion is proposed for overcoming image unsharpness effects in taking accurate diametral measurements from radiographs. An electronic density slicing method for automatic edge definition is described. Results of measurements made with video micrometry are compared with scanning microdensitometer and micrometric physical measurements. An image quality indicator for estimating photographic and geometric unsharpness is described.

  4. Observation of wave celerity evolution in the nearshore using digital video imagery

    NASA Astrophysics Data System (ADS)

    Yoo, J.; Fritz, H. M.; Haas, K. A.; Work, P. A.; Barnes, C. F.; Cho, Y.

    2008-12-01

    The celerity of incident waves in the nearshore is observed from oblique video imagery collected at Myrtle Beach, S.C. The video camera covers a field of view with length scales of O(100) m. The celerity of waves propagating in shallow water, including the surf zone, is estimated by applying advanced image processing and analysis methods to individual video images sampled at 3 Hz. The original image sequences are processed through video image frame differencing and directional low-pass image filtering to reduce the noise arising from foam in the surf zone. The breaking wave celerity is computed along a cross-shore transect from the wave crest tracks extracted by a Radon transform-based line detection method. The celerity observed from the nearshore video imagery is larger than the linear wave celerity computed from the measured water depths over the entire surf zone. Compared to the celerity based on the nonlinear shallow water wave equations (NSWE), computed using the measured depths and wave heights, the video-based celerity generally shows good agreement over the surf zone, except in the regions around the incipient wave breaking locations. In the regions around the breaker points, the observed wave celerity is even larger than the NSWE-based celerity due to the transition of wave crest shapes. The celerity observed using video imagery can be used to monitor nearshore geometry through depth inversion based on nonlinear wave celerity theories. For this purpose, the excess celerity around the breaker points needs to be corrected relative to the nonlinear wave celerity theory applied.
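
    The depth inversion mentioned at the end follows from shallow-water theory: linear theory gives c = sqrt(g h), hence h = c^2/g, and a simple nonlinear correction for a wave of height H uses c = sqrt(g (h + H)). The Python sketch below encodes both; which correction the study applies near the breaker points is beyond this illustration.

        import numpy as np

        G = 9.81  # gravitational acceleration, m/s^2

        def depth_linear(celerity):
            """h = c^2 / g from linear shallow-water wave celerity."""
            return np.asarray(celerity) ** 2 / G

        def depth_nonlinear(celerity, wave_height):
            """h = c^2 / g - H, from the amplitude-corrected celerity c = sqrt(g (h + H))."""
            return np.asarray(celerity) ** 2 / G - wave_height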

  5. Video image stabilization and registration--plus

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor)

    2009-01-01

    A method of stabilizing a video image displayed in multiple video fields of a video sequence includes the steps of: subdividing a selected area of a first video field into nested pixel blocks; determining horizontal and vertical translation of each of the pixel blocks in each of the pixel block subdivision levels from the first video field to a second video field; and determining translation of the image from the first video field to the second video field by determining a change in magnification of the image from the first video field to the second video field in each of horizontal and vertical directions, and determining shear of the image from the first video field to the second video field in each of the horizontal and vertical directions.

  6. An Imaging And Graphics Workstation For Image Sequence Analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of the modern graphic-oriented workstations with the digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missile, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.

  7. Fat stigmatization on YouTube: a content analysis.

    PubMed

    Hussin, Mallory; Frazier, Savannah; Thompson, J Kevin

    2011-01-01

    YouTube.com is an internet website that is viewed by two billion individuals daily, and thus may serve as a source of images and messages regarding weight acceptance or weight bias. In the current study, a targeted sample of YouTube videos that displayed fat stigmatization was content rated on a variety of video characteristics. The findings revealed that men were the target of fat stigmatization (62.1%) almost twice as often as women (36.4%). When an antagonist was present in the video, the aggressor was male (88.5%) rather than female (7.7%) the great majority of the time. Thus, men were the antagonist at 11.5 times the rate of women, but were only 1.7 times more often stigmatized. Future research avenues, including an experimental analysis of the effects of viewing stigmatizing videos on body image, are recommended. Copyright © 2010 Elsevier Ltd. All rights reserved.

  8. Video Traffic Analysis for Abnormal Event Detection

    DOT National Transportation Integrated Search

    2010-01-01

    We propose the use of video imaging sensors for the detection and classification of abnormal events to be used primarily for mitigation of traffic congestion. Successful detection of such events will allow for new road guidelines; for rapid deploymen...

  9. Video traffic analysis for abnormal event detection.

    DOT National Transportation Integrated Search

    2010-01-01

    We propose the use of video imaging sensors for the detection and classification of abnormal events to be used primarily for mitigation of traffic congestion. Successful detection of such events will allow for new road guidelines; for rapid deplo...

  10. Inferring consistent functional interaction patterns from natural stimulus FMRI data

    PubMed Central

    Sun, Jiehuan; Hu, Xintao; Huang, Xiu; Liu, Yang; Li, Kaiming; Li, Xiang; Han, Junwei; Guo, Lei

    2014-01-01

    There has been increasing interest in how the human brain responds to natural stimuli such as video watching in the neuroimaging field. Along this direction, this paper presents our effort in inferring consistent and reproducible functional interaction patterns, under the natural stimulus of video watching, among known functional brain regions identified by task-based fMRI. We applied and compared four statistical approaches to infer such patterns among these brain regions: Bayesian network modeling with the search algorithms greedy equivalence search (GES), Peter and Clark (PC) analysis, and independent multiple greedy equivalence search (IMaGES), as well as the commonly used Granger causality analysis (GCA). Interestingly, a number of reliable and consistent functional interaction patterns were identified by the GES, PC and IMaGES algorithms in different participating subjects when they watched multiple video shots of the same semantic category. These interaction patterns are meaningful given current neuroscience knowledge and are reasonably reproducible across different brains and video shots. In particular, these consistent functional interaction patterns are supported by structural connections derived from diffusion tensor imaging (DTI) data, suggesting the structural underpinnings of consistent functional interactions. Our work demonstrates that specific consistent patterns of functional interactions among relevant brain regions might reflect the brain's fundamental mechanisms of online processing and comprehension of video messages. PMID:22440644
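
    As a small illustration of the last of those four approaches, statsmodels ships a pairwise Granger test; the two series below are hypothetical stand-ins for region-averaged fMRI signals, with region_a built as a lagged, noisy copy of region_b so the test has something to find.

        import numpy as np
        from statsmodels.tsa.stattools import grangercausalitytests

        rng = np.random.default_rng(0)
        region_b = rng.standard_normal(200)
        region_a = np.roll(region_b, 2) + 0.5 * rng.standard_normal(200)

        # Tests whether the second column Granger-causes the first.
        data = np.column_stack([region_a, region_b])
        results = grangercausalitytests(data, maxlag=3)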

  11. Tsunami Research driven by Survivor Observations: Sumatra 2004, Tohoku 2011 and the Lituya Bay Landslide (Plinius Medal Lecture)

    NASA Astrophysics Data System (ADS)

    Fritz, Hermann M.

    2014-05-01

    The 10th anniversary of the 2004 Indian Ocean tsunami recalls the advent of tsunami video recordings by eyewitnesses. The tsunami of December 26, 2004 severely affected Banda Aceh, at the northern tip of Sumatra (Indonesia), at a distance of 250 km from the epicenter of the magnitude 9.0 earthquake. The tsunami flow velocity analysis focused on two survivor videos recorded within Banda Aceh, more than 3 km from the open ocean. The exact locations of the tsunami eyewitness video recordings were revisited to record camera calibration ground control points. The motion of the camera during the recordings was determined. The individual video images were rectified with a direct linear transformation (DLT). Finally, a cross-correlation based particle image velocimetry (PIV) analysis was applied to the rectified video images to determine instantaneous tsunami flow velocity fields. The measured overland tsunami flow velocities were within the range of 2 to 5 m/s in downtown Banda Aceh, Indonesia. The March 11, 2011, magnitude Mw 9.0 earthquake off the coast of Japan caused catastrophic damage and loss of life. Fortunately, many survivors at evacuation sites recorded countless tsunami videos with unprecedented spatial and temporal coverage. Numerous tsunami reconnaissance trips were conducted in Japan. This report focuses on the surveys at selected tsunami eyewitness video recording locations along Japan's Sanriku coast and the subsequent tsunami video image analysis. Locations with high quality survivor videos were visited, eyewitnesses interviewed and detailed site topography scanned with a terrestrial laser scanner (TLS). The analysis of the tsunami videos followed the four-step procedure developed for the analysis of the 2004 Indian Ocean tsunami videos at Banda Aceh. Tsunami currents up to 11 m/s were measured in Kesennuma Bay, making navigation impossible. Further, tsunami height and runup hydrographs are derived from the videos to discuss the complex effects of coastal structures on inundation and outflow velocities. Tsunamis generated by landslides and volcanic island collapses account for some of the most catastrophic events. On July 10, 1958, a magnitude Mw 8.3 earthquake along the Fairweather fault triggered a major subaerial landslide into Gilbert Inlet at the head of Lituya Bay on the south coast of Alaska. The landslide impacted the water at high speed, generating a giant tsunami and the highest wave runup in recorded history. This event was observed by eyewitnesses on board the sole surviving fishing boat, which managed to ride the tsunami. The mega-tsunami runup to an elevation of 524 m caused total forest destruction and erosion down to bedrock on a spur ridge in direct prolongation of the slide axis. A cross-section of Gilbert Inlet was rebuilt in a two-dimensional physical laboratory model. Particle image velocimetry (PIV) provided instantaneous velocity vector fields of the decisive initial phase, covering landslide impact and wave generation as well as the runup on the headland. Three-dimensional source and runup scenarios based on real-world events are physically modeled in the NEES tsunami wave basin (TWB) at Oregon State University (OSU). The measured landslide and tsunami data serve to validate and advance numerical landslide tsunami models. This lecture encompasses multi-hazard aspects and implications of recent tsunami and cyclonic events around the world, such as the November 2013 Typhoon Haiyan (Yolanda) in the Philippines.
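
    The rectification step, for a single ground plane, reduces to fitting a homography from surveyed control points, which OpenCV can do directly; the coordinates, file name and output scale below are placeholders, and the actual studies used a full DLT with camera-motion compensation rather than this planar shortcut.

        import cv2
        import numpy as np

        # Hypothetical ground control points: pixel coordinates in a survivor
        # video frame and the corresponding surveyed ground coordinates (m).
        pixels = np.float32([[102, 480], [874, 455], [910, 120], [60, 140]])
        ground = np.float32([[0, 0], [40, 0], [40, 60], [0, 60]])

        H, _ = cv2.findHomography(pixels, ground)

        # Rectify a frame into metric coordinates at 10 px per metre.
        S = np.diag([10.0, 10.0, 1.0])
        frame = cv2.imread("frame_0001.png")
        rectified = cv2.warpPerspective(frame, S @ H, (400, 600))

    Consecutive rectified frames can then be fed to a cross-correlation PIV analysis to obtain the instantaneous flow velocity fields described above.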

  12. Tiny videos: a large data set for nonparametric video retrieval and frame classification.

    PubMed

    Karpenko, Alexandre; Aarabi, Parham

    2011-03-01

    In this paper, we present a large database of over 50,000 user-labeled videos collected from YouTube. We develop a compact representation called "tiny videos" that achieves high video compression rates while retaining the overall visual appearance of the video as it varies over time. We show that frame sampling using affinity propagation, an exemplar-based clustering algorithm, achieves the best trade-off between compression and video recall. We use this large collection of user-labeled videos in conjunction with simple data mining techniques to perform related video retrieval, as well as classification of images and video frames. The classification results achieved by tiny videos are compared with the tiny images framework [24] for a variety of recognition tasks. The tiny images data set consists of 80 million images collected from the Internet. These are the largest labeled research data sets of videos and images available to date. We show that tiny videos are better suited for classifying scenery and sports activities, while tiny images perform better at recognizing objects. Furthermore, we demonstrate that combining the tiny images and tiny videos data sets improves classification precision in a wider range of categories.
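
    A compact sketch of the exemplar-based key-frame selection with scikit-learn's affinity propagation; the downsampled-pixel features are a stand-in for whatever frame descriptors the paper actually used.

        import numpy as np
        from sklearn.cluster import AffinityPropagation

        def select_key_frames(frames):
            """Pick exemplar frames with affinity propagation. `frames` is a
            (n_frames, h, w) array; returns indices of the exemplars."""
            feats = np.array([f[::8, ::8].ravel() for f in frames], dtype=float)
            ap = AffinityPropagation(random_state=0).fit(feats)
            return ap.cluster_centers_indices_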

  13. Efficient Use of Video for 3d Modelling of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Alsadik, B.; Gerke, M.; Vosselman, G.

    2015-03-01

    Currently, there is rapid development in the techniques of automated image-based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architecture and monuments. Practically, video imaging is much easier to apply than still image shooting in IBM techniques, because the latter needs thorough planning and proficiency. However, one is faced with mainly three problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects. These problems are: the low resolution of video images, the need to process a large number of short-baseline video images, and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images is convenient to decrease the processing time and to create a reliable textured 3D model compared with models produced by still imaging. Two experiments, modelling a building and a monument, are tested using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find out the final predicted accuracy and the model level of detail. Related to the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums and low-detail documentation.
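
    One common way to thin a sequence while avoiding camera-shake blur, in the spirit of the reduction step described above, is to keep the sharpest frame from each short window, scored by variance of the Laplacian; the window size and the focus measure are illustrative assumptions, not the paper's criterion.

        import cv2

        def least_blurred(frame_paths, keep_every=10):
            """From each group of `keep_every` consecutive frames, keep the
            sharpest one (highest variance of the Laplacian)."""
            selected = []
            for start in range(0, len(frame_paths), keep_every):
                group = frame_paths[start:start + keep_every]
                scores = []
                for p in group:
                    gray = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
                    scores.append(cv2.Laplacian(gray, cv2.CV_64F).var())
                selected.append(group[scores.index(max(scores))])
            return selected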

  14. Deep visual-semantic for crowded video understanding

    NASA Astrophysics Data System (ADS)

    Deng, Chunhua; Zhang, Junwen

    2018-03-01

    Visual-semantic features play a vital role in crowded video understanding. Convolutional Neural Networks (CNNs) have enabled a significant breakthrough in learning representations from images. However, the learning of visual-semantic features, and how they can be effectively extracted for video analysis, still remains a challenging task. In this study, we propose a novel visual-semantic method to capture both appearance and dynamic representations. In particular, we propose a spatial context method based on fractional Fisher vector (FV) encoding of CNN features, which can be regarded as our main contribution. In addition, to capture temporal context information, we also apply the fractional encoding method to dynamic images. Experimental results on the WWW crowded video dataset demonstrate that the proposed method outperforms the state of the art.

  15. Data simulation for the Lightning Imaging Sensor (LIS)

    NASA Technical Reports Server (NTRS)

    Boeck, William L.

    1991-01-01

    This project aims to build a data analysis system that will utilize existing video tape scenes of lightning as viewed from space. The resultant data will be used for the design and development of the Lightning Imaging Sensor (LIS) software and algorithm analysis. The desire for statistically significant metrics implies that a large data set needs to be analyzed. Before 1990, the quality and quantity of video were insufficient to build a usable data set. At this point in time, there is usable data from missions STS-34, STS-32, STS-31, STS-41, STS-37, and STS-39. During the summer of 1990, a manual analysis system was developed to demonstrate that the video analysis is feasible and to identify techniques to deduce information that was not directly available. Because the closed circuit television system used on the space shuttle was intended for documentary TV, the current values of the camera focal length and pointing orientation, which are needed for photoanalysis, are not included in the system data. A large effort was needed to discover ancillary data sources as well as to develop indirect methods to estimate the necessary parameters. Any data system coping with full motion video faces an enormous bottleneck produced by the large data production rate and the need to move and store the digitized images. The manual system bypassed the video digitizing bottleneck by using a genlock to superimpose pixel coordinates on full motion video. Because the data set had to be obtained point by point by a human operating a computer mouse, the data output rate was small. The loan and subsequent acquisition of an Abekas digital frame store with a real-time digitizer moved the bottleneck from data acquisition to a problem of data transfer and storage. The semi-automated analysis procedure was developed using existing equipment and is described. A fully automated system is described in the hope that the components may come on the market at reasonable prices in the next few years.

  16. Extraction of composite visual objects from audiovisual materials

    NASA Astrophysics Data System (ADS)

    Durand, Gwenael; Thienot, Cedric; Faudemay, Pascal

    1999-08-01

    An effective analysis of Visual Objects appearing in still images and video frames is required in order to offer fine-grain access to multimedia and audiovisual contents. In previous papers, we showed how our method for segmenting still images into visual objects could improve content-based image retrieval and video analysis methods. Visual Objects are used in particular for extracting semantic knowledge about the contents. However, low-level segmentation methods for still images are not likely to extract a complex object as a whole but instead as a set of several sub-objects. For example, a person would be segmented into three visual objects: a face, hair, and a body. In this paper, we introduce the concept of Composite Visual Object. Such an object is hierarchically composed of sub-objects called Component Objects.

  17. Energy Efficient Image/Video Data Transmission on Commercial Multi-Core Processors

    PubMed Central

    Lee, Sungju; Kim, Heegon; Chung, Yongwha; Park, Daihee

    2012-01-01

    In transmitting image/video data over Video Sensor Networks (VSNs), energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining the image/video quality. We improve the compression efficiency at the algorithmic level or derive the optimal parameters for the combination of a machine and compression based on the tradeoff between the energy consumption and the image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2∼5 without compromising image/video quality. PMID:23202181

  18. Full-frame video stabilization with motion inpainting.

    PubMed

    Matsushita, Yasuyuki; Ofek, Eyal; Ge, Weina; Tang, Xiaoou; Shum, Heung-Yeung

    2006-07-01

    Video stabilization is an important video enhancement technology which aims at removing annoying shaky motion from videos. We propose a practical and robust approach of video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up with producing smaller size stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels of neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which can naturally keep the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.
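
    The completion idea can be roughed out with standard tools: align a neighbouring frame to the current one and copy its pixels into the empty border left by stabilization. Feature matching plus an affine fit stands in for the paper's local alignment, and true motion inpainting of dynamic areas is beyond this sketch.

        import cv2
        import numpy as np

        def fill_from_neighbor(stabilized, neighbor, hole_mask):
            """Fill masked (empty) pixels of a stabilized greyscale frame
            from an aligned neighbouring frame. `hole_mask` is a boolean
            array marking pixels left undefined by the stabilizing warp."""
            orb = cv2.ORB_create(500)
            k1, d1 = orb.detectAndCompute(stabilized, None)
            k2, d2 = orb.detectAndCompute(neighbor, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(d1, d2)
            src = np.float32([k2[m.trainIdx].pt for m in matches])
            dst = np.float32([k1[m.queryIdx].pt for m in matches])
            A, _ = cv2.estimateAffine2D(src, dst)
            warped = cv2.warpAffine(neighbor, A,
                                    (stabilized.shape[1], stabilized.shape[0]))
            out = stabilized.copy()
            out[hole_mask] = warped[hole_mask]   # copy aligned neighbour pixels
            return out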

  19. A video method to study Drosophila sleep.

    PubMed

    Zimmerman, John E; Raizen, David M; Maycock, Matthew H; Maislin, Greg; Pack, Allan I

    2008-11-01

    To use video to determine the accuracy of the infrared beam-splitting method for measuring sleep in Drosophila, and to determine the effect of time of day, sex, genotype, and age on sleep measurements. A digital image analysis method based on the frame subtraction principle was developed to distinguish a quiescent from a moving fly. Data obtained using this method were compared with data obtained using the Drosophila Activity Monitoring System (DAMS). The location of the fly was identified based on its centroid location in the subtracted images. The error associated with the identification of total sleep using DAMS ranged from 7% to 95% and depended on genotype, sex, age, and time of day. The degree of the total sleep error was dependent on genotype during the daytime (P < 0.001) and was dependent on age during both the daytime and the nighttime (P < 0.001 for both). The DAMS method overestimated sleep bout duration during both the day and night, and the degree of these errors was genotype dependent (P < 0.001). Brief movements that occur during sleep bouts can be accurately identified using video. Both video and DAMS detected a homeostatic response to sleep deprivation. Video digital analysis is more accurate than DAMS in fly sleep measurements. In particular, conclusions drawn from DAMS measurements regarding daytime sleep and sleep architecture should be made with caution. Video analysis also permits the assessment of fly position and brief movements during sleep.
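
    A minimal version of the frame-subtraction step: threshold the absolute inter-frame difference, call the fly moving if enough pixels changed, and locate it by the centroid of the changed pixels. The thresholds are illustrative, not the published values.

        import numpy as np

        def fly_state(prev_frame, frame, noise_thresh=15, min_pixels=5):
            """Classify a fly as moving or quiescent between two greyscale
            frames and return the centroid of the changed pixels."""
            diff = np.abs(frame.astype(int) - prev_frame.astype(int))
            changed = diff > noise_thresh
            moving = changed.sum() >= min_pixels
            if not moving:
                return False, None            # no movement between frames
            ys, xs = np.nonzero(changed)
            return True, (xs.mean(), ys.mean())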

  20. Sensor Management for Tactical Surveillance Operations

    DTIC Science & Technology

    2007-11-01

    Active and passive sonar for submarine and torpedo detection, and mine avoidance [range, bearing]; range 1.8 km to 55 km; active or passive. AN/SLQ-501 ... direction finding (DF) unit [bearing, classification]; maximum range 1100 km; passive. Cameras (daylight/night-vision, video and still): record optical and infrared still images or motion video of events for near-real-time assessment or long-term analysis and archiving; range is limited by the image resolution.

  1. Registration of multiple video images to preoperative CT for image-guided surgery

    NASA Astrophysics Data System (ADS)

    Clarkson, Matthew J.; Rueckert, Daniel; Hill, Derek L.; Hawkes, David J.

    1999-05-01

    In this paper we propose a method which uses multiple video images to establish the pose of a CT volume with respect to video camera coordinates for use in image-guided surgery. The majority of neurosurgical procedures require the neurosurgeon to relate the pre-operative MR/CT data to the intra-operative scene. Registration of 2D video images to the pre-operative 3D image enables a perspective projection of the pre-operative data to be overlaid onto the video image. Our registration method is based on image intensity and uses a simple iterative optimization scheme to maximize the mutual information between a video image and a rendering from the pre-operative data. Video images are obtained from a stereo operating microscope, with a field of view of approximately 110 × 80 mm. We have extended an existing information-theoretical framework for 2D-3D registration, so that multiple video images can be registered simultaneously to the pre-operative data. Experiments were performed on video and CT images of a skull phantom. We took three video images, and our algorithm registered these individually to the 3D image. The mean projection error varied between 4.33 and 9.81 millimeters (mm), and the mean 3D error varied between 4.47 and 11.92 mm. Using our novel techniques we then registered five video views simultaneously to the 3D model. This produced an accurate and robust registration with a mean projection error of 0.68 mm and a mean 3D error of 1.05 mm.
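
    The similarity measure at the core of the method can be sketched directly from its definition: mutual information estimated from the joint grey-level histogram of a video image and a rendering of the pre-operative data. The iterative pose optimiser wrapped around it is omitted here.

        import numpy as np

        def mutual_information(video_img, rendering, bins=32):
            """Mutual information from the joint grey-level histogram of two
            equally sized images."""
            joint, _, _ = np.histogram2d(video_img.ravel(),
                                         rendering.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0                      # avoid log(0)
            return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())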

  2. Video image processing to create a speed sensor

    DOT National Transportation Integrated Search

    1999-11-01

    Image processing has been applied to traffic analysis in recent years, with different goals. In this report, a new approach is presented for extracting vehicular speed information, given a sequence of real-time traffic images. We extract moving edges ...

  3. Repurposing video recordings for structure motion estimations

    NASA Astrophysics Data System (ADS)

    Khaloo, Ali; Lattanzi, David

    2016-04-01

    Video monitoring of public spaces is becoming increasingly ubiquitous, particularly near essential structures and facilities. During any hazard event that dynamically excites a structure, such as an earthquake or hurricane, proximal video cameras may inadvertently capture the motion time-history of the structure during the event. If this dynamic time-history could be extracted from the repurposed video recording it would become a valuable forensic analysis tool for engineers performing post-disaster structural evaluations. The difficulty is that almost all potential video cameras are not installed to monitor structure motions, leading to camera perspective distortions and other associated challenges. This paper presents a method for extracting structure motions from videos using a combination of computer vision techniques. Images from a video recording are first reprojected into synthetic images that eliminate perspective distortion, using as-built knowledge of a structure for calibration. The motion of the camera itself during an event is also considered. Optical flow, a technique for tracking per-pixel motion, is then applied to these synthetic images to estimate the building motion. The developed method was validated using the experimental records of the NEESHub earthquake database. The results indicate that the technique is capable of estimating structural motions, particularly the frequency content of the response. Further work will evaluate variants and alternatives to the optical flow algorithm, as well as study the impact of video encoding artifacts on motion estimates.
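
    The per-pixel tracking stage can be illustrated with OpenCV's dense Farneback optical flow; the file names and the averaging window on the structure are placeholders, and the authors may have used a different flow variant.

        import cv2

        prev = cv2.imread("synthetic_0000.png", cv2.IMREAD_GRAYSCALE)
        curr = cv2.imread("synthetic_0001.png", cv2.IMREAD_GRAYSCALE)

        # Dense flow between consecutive perspective-corrected frames.
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            pyr_scale=0.5, levels=3,
                                            winsize=15, iterations=3,
                                            poly_n=5, poly_sigma=1.2, flags=0)

        # Mean horizontal displacement (pixels/frame) over a window placed
        # on the structure; repeated per frame pair, this gives the motion
        # time history whose frequency content is analysed.
        roi = flow[100:200, 150:250, 0]
        print("mean horizontal motion:", roi.mean())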

  4. Utilizing ISS Camera Systems for Scientific Analysis of Lightning Characteristics and comparison with ISS-LIS and GLM

    NASA Astrophysics Data System (ADS)

    Schultz, C. J.; Lang, T. J.; Leake, S.; Runco, M.; Blakeslee, R. J.

    2017-12-01

    Video and still frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked capabilities for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how georeferenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration. Camera images from the crew cameras and high definition video from the Chiba University Meteor Camera were combined with lightning data from the National Lightning Detection Network (NLDN), the ISS Lightning Imaging Sensor (ISS-LIS), the Geostationary Lightning Mapper (GLM) and lightning mapping arrays. These cameras provide significant spatial resolution advantages (10 times or better) over ISS-LIS and GLM, but with lower temporal resolution. Therefore, they can serve as a complementary analysis tool for studying lightning and thunderstorm processes from space. Lightning sensor data, Visible Infrared Imaging Radiometer Suite (VIIRS) derived city light maps, and other geographic databases were combined with the ISS attitude and position data to reverse geolocate each image or frame. An open-source Python toolkit has been developed to assist with this effort. Next, the locations and sizes of all flashes in each frame or image were computed and compared with flash characteristics from all available lightning datasets. This allowed for characterization of cloud features that are below the 4-km and 8-km resolution of ISS-LIS and GLM and that may reduce the light that reaches the ISS-LIS or GLM sensor. In the case of video, consecutive frames were overlaid to determine the rate of change of the light escaping cloud top. Characterization of the rate of change in geometry, more generally the radius, of light escaping cloud top was integrated with the NLDN, ISS-LIS and GLM data to understand how the peak rate of change and the peak area of each flash aligned with each lightning system in time. Flash features like leaders could be inferred from the video frames as well. Testing is being done to see if leader speeds may be accurately calculated under certain circumstances.

  5. Speech Recognition for A Digital Video Library.

    ERIC Educational Resources Information Center

    Witbrock, Michael J.; Hauptmann, Alexander G.

    1998-01-01

    Production of the meta-data supporting the Informedia Digital Video Library interface is automated using techniques derived from artificial intelligence research. Speech recognition and natural-language processing, information retrieval, and image analysis are applied to produce an interface that helps users locate information and navigate more…

  6. Performance characterization of image and video analysis systems at Siemens Corporate Research

    NASA Astrophysics Data System (ADS)

    Ramesh, Visvanathan; Jolly, Marie-Pierre; Greiffenhagen, Michael

    2000-06-01

    There has been a significant increase in commercial products using image analysis techniques to solve real-world problems in diverse fields such as manufacturing, medical imaging, document analysis, transportation and public security. This has been accelerated by various factors: more advanced algorithms, the availability of cheaper sensors, and faster processors. While algorithms continue to improve in performance, a major stumbling block in translating improvements in algorithms into faster deployment of image analysis systems is the lack of characterization of the limits of algorithms and how they affect total system performance. The research community has realized the need for performance analysis and there have been significant efforts in the last few years to remedy the situation. Our efforts at SCR have been on statistical modeling and characterization of modules and systems. The emphasis is on both white-box and black-box methodologies to evaluate and optimize vision systems. In the first part of this paper we review the literature on performance characterization and then provide an overview of the status of research in performance characterization of image and video understanding systems. The second part of the paper is on performance evaluation of medical image segmentation algorithms. Finally, we highlight some research issues in performance analysis in medical imaging systems.

  7. An automated assay for the assessment of cardiac arrest in fish embryo.

    PubMed

    Puybareau, Elodie; Genest, Diane; Barbeau, Emilie; Léonard, Marc; Talbot, Hugues

    2017-02-01

    Studies on fish embryo models are widely developed in research. They are used in several research fields including drug discovery or environmental toxicology. In this article, we propose an entirely automated assay to detect cardiac arrest in Medaka (Oryzias latipes) based on image analysis. We propose a multi-scale pipeline based on mathematical morphology. Starting from video sequences of entire wells in 24-well plates, we focus on the embryo, detect its heart, and ascertain whether or not the heart is beating based on intensity variation analysis. Our image analysis pipeline only uses commonly available operators. It has a low computational cost, allowing analysis at the same rate as acquisition. From an initial dataset of 3192 videos, 660 were discarded as unusable (20.7%), 655 of them correctly so (99.25%) and only 5 incorrectly so (0.75%). The 2532 remaining videos were used for our test. On these, 45 errors were made, leading to a success rate of 98.23%. Copyright © 2016 Elsevier Ltd. All rights reserved.
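
    The final decision stage reduces to a periodicity test on the heart region's mean intensity; below is a sketch under an assumed threshold and a generic heart-rate band, with the morphological detection pipeline omitted.

        import numpy as np

        def heart_is_beating(video, heart_roi, fps=30.0, ratio_thresh=0.1):
            """`video` is (n_frames, h, w); `heart_roi` is a boolean mask from
            the (omitted) detection stage. Returns True if a band-limited
            spectral peak dominates the intensity trace."""
            trace = video[:, heart_roi].mean(axis=1)   # mean ROI intensity
            trace = trace - trace.mean()
            power = np.abs(np.fft.rfft(trace)) ** 2
            freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
            band = (freqs > 0.5) & (freqs < 5.0)       # plausible rates, Hz
            ratio = power[band].max() / power[1:].sum()
            return ratio > ratio_thresh                # illustrative threshold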

  8. Normalized Temperature Contrast Processing in Flash Infrared Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2016-01-01

    The paper presents further development of the normalized contrast processing for the flash infrared thermography method given by the author in US 8,577,120 B1. Methods of computing normalized image or pixel intensity contrast and normalized temperature contrast are provided, including converting one from the other. Methods of assessing the emissivity of the object, afterglow heat flux, reflection temperature change and temperature video imaging during flash thermography are also provided. Temperature imaging and normalized temperature contrast imaging provide certain advantages over pixel intensity normalized contrast processing by reducing the effect of reflected energy in images and measurements, providing better quantitative data. The subject matter for this paper mostly comes from US 9,066,028 B1 by the author. Examples of normalized image processing video images and normalized temperature processing video images are provided. Examples of surface temperature video images, surface temperature rise video images and simple contrast video images are also provided. Temperature video imaging in flash infrared thermography allows better comparison with flash thermography simulation using commercial software, which provides temperature video as the output. Temperature imaging also allows easy comparison of the surface temperature change to the camera temperature sensitivity, or noise equivalent temperature difference (NETD), to assess the probability of detection (POD) of anomalies.
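
    For orientation, one common baseline form of normalized pixel-intensity contrast divides a pixel's post-flash rise by that of a defect-free reference region; the cited patents define their own, more elaborate variants, so this is only an illustrative sketch.

        import numpy as np

        def normalized_contrast(pixel_trace, ref_trace, pre_flash):
            """Normalized contrast of a pixel's intensity-vs-time trace
            against a reference region; `pre_flash` is the number of frames
            before the flash, used to estimate baseline levels."""
            i0 = pixel_trace[:pre_flash].mean()   # pre-flash pixel level
            r0 = ref_trace[:pre_flash].mean()     # pre-flash reference level
            return (pixel_trace - i0) / (ref_trace - r0 + 1e-12)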

  9. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system, together with the adaptive priority controller, are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the data base.
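
    The reconstructed image described above can be approximated with off-the-shelf operators: an edge map of the background with full-resolution target windows pasted in. Canny stands in for the original edge extractor, and the window list would come from the auto-cueing stage.

        import cv2

        def compose_compressed_view(frame, windows):
            """Edge map of the scene with grey-level target windows embedded;
            `windows` is a list of (x, y, w, h) boxes."""
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            view = cv2.Canny(gray, 100, 200)
            for (x, y, w, h) in windows:
                view[y:y + h, x:x + w] = gray[y:y + h, x:x + w]
            return view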

  10. A content analysis of smoking fetish videos on YouTube: regulatory implications for tobacco control.

    PubMed

    Kim, Kyongseok; Paek, Hye-Jin; Lynn, Jordan

    2010-03-01

    This study examined the prevalence, accessibility, and characteristics of eroticized smoking portrayals, also referred to as smoking fetish, on YouTube. The analysis of 200 smoking fetish videos revealed that smoking fetish videos are prevalent and accessible to adolescents on the website. They featured explicit smoking behavior by sexy, young, and healthy females, with content corresponding to PG-13 and R movie ratings. We discuss the potential impact of such pro-smoking imagery on youth according to social cognitive theory, and implications for tobacco control.

  11. Effect of clinical information and previous exam execution on observer agreement and reliability in the analysis of hysteroscopic video-recordings.

    PubMed

    Martinho, Margarida Suzel Lopes; da Costa Santos, Cristina Maria Nogueira; Silva Carvalho, João Luís Mendonça; Bernardes, João Francisco Montenegro Andrade Lima

    2018-02-01

    Inter-observer agreement and reliability in hysteroscopic image assessment remain uncertain, and the factors that may influence them have only been studied in relation to the experience of hysteroscopists. We aim to assess the effect of clinical information and previous exam execution on observer agreement and reliability in the analysis of hysteroscopic video-recordings. Ninety hysteroscopies were video-recorded and randomized into groups without (Group 1) and with (Group 2) clinical information. The videos were independently analyzed by three hysteroscopists with regard to lesion location, dimension, and type, as well as the decision to perform a biopsy. One of the hysteroscopists had executed all the exams before. Proportions of agreement (PA) and kappa statistics (κ) with 95% confidence intervals (95% CI) were used. In Group 2, there was a higher proportion of a normal diagnosis (p < 0.001) and a lower proportion of biopsies recommended (p = 0.027). Observer agreement and reliability were better in Group 2, with the PA and κ ranging, respectively, from 0.73 (95% CI 0.62, 0.83) and 0.44 (95% CI 0.26, 0.63), for image quality, to 0.94 (95% CI 0.88, 0.99) and 0.85 (95% CI 0.65, 0.95), for the decision to perform a biopsy. Execution of the exams before the analysis of the video-recordings did not significantly affect the results. With clinical information, agreement and reliability in the overall analysis of hysteroscopic video-recordings may reach almost perfect results, and this was not significantly affected by the execution of the exams before the analysis. However, there is still uncertainty in the analysis of specific endometrial cavity abnormalities.
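
    The two agreement statistics are straightforward to reproduce; with hypothetical biopsy decisions from two observers, PA is the raw fraction of matching ratings and kappa corrects that fraction for chance agreement.

        from sklearn.metrics import cohen_kappa_score

        # Hypothetical biopsy decisions (1 = biopsy) by two observers.
        obs1 = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
        obs2 = [1, 0, 1, 1, 1, 0, 1, 0, 0, 0]

        pa = sum(a == b for a, b in zip(obs1, obs2)) / len(obs1)
        kappa = cohen_kappa_score(obs1, obs2)
        print(f"PA = {pa:.2f}, kappa = {kappa:.2f}")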

  12. Calibration of Action Cameras for Photogrammetric Purposes

    PubMed Central

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, which can provide still images up to 12 Mp and video up to 8 Mp resolution. PMID:25237898

  13. Calibration of action cameras for photogrammetric purposes.

    PubMed

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, which can provide still images up to 12 Mp and video up to 8 Mp resolution.
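
    A minimal version of such a self-calibration with OpenCV's standard chessboard pipeline, which the special software mentioned above builds on; the board geometry and image paths are assumptions, and a wide-angle action camera may call for the fisheye model instead of the default distortion model.

        import glob
        import cv2
        import numpy as np

        pattern = (9, 6)                       # inner corners of the board
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

        obj_pts, img_pts = [], []
        for path in glob.glob("gopro_calib/*.jpg"):   # hypothetical images
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                obj_pts.append(objp)
                img_pts.append(corners)

        rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts,
                                                 gray.shape[::-1], None, None)
        # Undistort the last image as a quick visual check.
        undistorted = cv2.undistort(cv2.imread(path), K, dist)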

  14. Using video playbacks to study visual communication in a marine fish, Salaria pavo.

    PubMed

    Gonçalves; Oliveira; Körner; Poschadel; Schlupp

    2000-09-01

    Video playbacks have been successfully applied to the study of visual communication in several groups of animals. However, this technique is controversial as video monitors are designed with the human visual system in mind. Differences between the visual capabilities of humans and other animals will lead to perceptually different interpretations of video images. We simultaneously presented males and females of the peacock blenny, Salaria pavo, with a live conspecific male and an online video image of the same individual. Video images failed to elicit appropriate responses. Males were aggressive towards the live male but not towards video images of the same male. Similarly, females courted only the live male and spent more time near this stimulus. In contrast, females of the gynogenetic poeciliid Poecilia formosa showed an equal preference for a live and a video image of a P. mexicana male, suggesting a response to video images as strong as to live animals. We discuss differences between the species that may explain their opposite reactions to video images. Copyright 2000 The Association for the Study of Animal Behaviour.

  15. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.

    PubMed

    Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki

    2017-12-09

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.

  16. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor

    PubMed Central

    Park, Jinho; Park, Hasil

    2017-01-01

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system. PMID:29232826
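
    The final reconstruction step follows the standard hazy-image model I = J·t + A·(1 − t); given the transmission map and atmospheric light estimated upstream, the scene radiance J is recovered by inverting that model, as sketched below with a clamp that avoids amplifying noise where transmission is near zero.

        import numpy as np

        def defog(image, transmission, atmosphere, t_min=0.1):
            """Invert I = J*t + A*(1-t). `image` is (h, w, 3), `transmission`
            is (h, w), `atmosphere` is a length-3 vector."""
            t = np.clip(transmission, t_min, 1.0)[..., None]  # broadcast RGB
            return (image.astype(float) - atmosphere) / t + atmosphere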

  17. Quantitative analysis of tympanic membrane perforation: a simple and reliable method.

    PubMed

    Ibekwe, T S; Adeosun, A A; Nwaorgu, O G

    2009-01-01

    Accurate assessment of the features of tympanic membrane perforation, especially size, site, duration and aetiology, is important, as it enables optimum management. To describe a simple, cheap and effective method of quantitatively analysing tympanic membrane perforations. The system described comprises a video-otoscope (capable of generating still and video images of the tympanic membrane), adapted via a universal serial bus box to a computer screen, with images analysed using the Image J geometrical analysis software package. The reproducibility of results and their correlation with conventional otoscopic methods of estimation were tested statistically with the paired t-test and correlational tests, using the Statistical Package for the Social Sciences version 11 software. The following equation was generated: (P/T) × 100% = percentage perforation, where P is the area (in pixels²) of the tympanic membrane perforation and T is the total area (in pixels²) of the entire tympanic membrane (including the perforation). Illustrations are shown. Comparison of blinded data on tympanic membrane perforation area, obtained independently from assessments by two trained otologists of comparable years of experience using the video-otoscopy system described, showed similar findings, with strong correlations devoid of inter-observer error (p = 0.000, r = 1). Comparison with conventional otoscopic assessment also indicated significant correlation, comparing results for two trained otologists, but some inter-observer variation was present (p = 0.000, r = 0.896). Correlation between the two methods for each of the otologists was also highly significant (p = 0.000). A computer-adapted video-otoscope, with images analysed by Image J software, represents a cheap, reliable, technology-driven, clinical method of quantitative analysis of tympanic membrane perforations and injuries.

  18. High-grade video compression of echocardiographic studies: a multicenter validation study of selected motion pictures expert groups (MPEG)-4 algorithms.

    PubMed

    Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Celeste, Fabrizio; Gentile, Francesco; Mantero, Antonio; Montericcio, Vincenzo; Muratori, Manuela

    2007-05-01

    Large files produced by standard compression algorithms slow down the spread of digital and tele-echocardiography. We validated high-grade compression of echocardiographic video with the new Motion Pictures Expert Group (MPEG)-4 algorithms in a multicenter study. Seven expert cardiologists blindly scored (5-point scale) 165 uncompressed and compressed 2-dimensional and color Doppler video clips, based on combined diagnostic content and image quality (uncompressed files as references). One digital video and 3 MPEG-4 algorithms (WM9, MV2, and DivX) were used, the latter at 3 compression levels (0%, 35%, and 60%). Compressed file sizes decreased from 12-83 MB to 0.03-2.3 MB (1:1051 to 1:26 reduction ratios). The mean SD of differences was 0.81 for intraobserver variability (uncompressed and digital video files). Compared with uncompressed files, only the DivX mean score at 35% (P = .04) and 60% (P = .001) compression was significantly reduced. At subcategory analysis, these differences were still significant for gray-scale and fundamental imaging but not for color or second harmonic tissue imaging. Original image quality, session sequence, compression grade, and bitrate were all independent determinants of the mean score. Our study supports the use of MPEG-4 algorithms to greatly reduce echocardiographic file sizes, thus facilitating archiving and transmission. Quality evaluation studies should account for the many independent variables that affect image quality grading.

  19. SarcOptiM for ImageJ: high-frequency online sarcomere length computing on stimulated cardiomyocytes.

    PubMed

    Pasqualin, Côme; Gannier, François; Yu, Angèle; Malécot, Claire O; Bredeloux, Pierre; Maupoil, Véronique

    2016-08-01

    Accurate measurement of cardiomyocyte contraction is a critical issue for scientists working on cardiac physiology and physiopathology of diseases implying contraction impairment. Cardiomyocytes contraction can be quantified by measuring sarcomere length, but few tools are available for this, and none is freely distributed. We developed a plug-in (SarcOptiM) for the ImageJ/Fiji image analysis platform developed by the National Institutes of Health. SarcOptiM computes sarcomere length via fast Fourier transform analysis of video frames captured or displayed in ImageJ and thus is not tied to a dedicated video camera. It can work in real time or offline, the latter overcoming rotating motion or displacement-related artifacts. SarcOptiM includes a simulator and video generator of cardiomyocyte contraction. Acquisition parameters, such as pixel size and camera frame rate, were tested with both experimental recordings of rat ventricular cardiomyocytes and synthetic videos. It is freely distributed, and its source code is available. It works under Windows, Mac, or Linux operating systems. The camera speed is the limiting factor, since the algorithm can compute online sarcomere shortening at frame rates >10 kHz. In conclusion, SarcOptiM is a free and validated user-friendly tool for studying cardiomyocyte contraction in all species, including human. Copyright © 2016 the American Physiological Society.
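
    The underlying principle is compact enough to sketch: sample an intensity profile along the striation axis and read the sarcomere length off the dominant FFT peak. SarcOptiM's actual implementation (windowing, sub-bin interpolation, online operation) is considerably more refined.

        import numpy as np

        def sarcomere_length(profile, pixel_size_um):
            """Sarcomere length (um) from an intensity profile sampled along
            the striation axis, via the dominant spatial-frequency peak."""
            profile = profile - profile.mean()
            spectrum = np.abs(np.fft.rfft(profile))
            freqs = np.fft.rfftfreq(len(profile), d=pixel_size_um)  # cyc/um
            peak = freqs[1:][np.argmax(spectrum[1:])]   # skip the DC bin
            return 1.0 / peak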

  20. Global-constrained hidden Markov model applied on wireless capsule endoscopy video segmentation

    NASA Astrophysics Data System (ADS)

    Wan, Yiwen; Duraisamy, Prakash; Alam, Mohammad S.; Buckles, Bill

    2012-06-01

    Accurate analysis of wireless capsule endoscopy (WCE) videos is vital but tedious. Automatic image analysis can expedite this task. Video segmentation of WCE into the four parts of the gastrointestinal tract is one way to assist a physician. The segmentation approach described in this paper integrates pattern recognition with statistical analysis. Initially, a support vector machine is applied to classify video frames into four classes, using a combination of multiple color and texture features as the feature vector. A Poisson cumulative distribution, whose parameter depends on the length of segments, models the prior knowledge. This prior knowledge, together with inter-frame differences, serves as the global constraint driven by the underlying observations of each WCE video, which are fitted by a Gaussian distribution to constrain the transition probabilities of the hidden Markov model. Experimental results demonstrated the effectiveness of the approach.
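
    The anatomical ordering of the four segments suggests a forward-only decoding constraint; the sketch below uses a fixed stay probability inside a Viterbi pass as a simplified stand-in for the paper's Poisson segment-length prior and Gaussian-fitted observation constraints.

        import numpy as np

        def forward_only_viterbi(log_emissions, stay=0.999):
            """Decode a segment label per frame with transitions restricted
            to 'stay' or 'advance'. `log_emissions` is (n_frames, 4), e.g.
            log-probabilities from the SVM stage."""
            n, k = log_emissions.shape
            log_stay, log_adv = np.log(stay), np.log1p(-stay)
            score = np.full(k, -np.inf)
            score[0] = log_emissions[0, 0]    # recordings start in segment 0
            back = np.zeros((n, k), dtype=int)
            for t in range(1, n):
                new = np.empty(k)
                for s in range(k):
                    cands = [(score[s] + log_stay, s)]
                    if s > 0:
                        cands.append((score[s - 1] + log_adv, s - 1))
                    new[s], back[t, s] = max(cands)
                score = new + log_emissions[t]
            path = [int(np.argmax(score))]
            for t in range(n - 1, 0, -1):
                path.append(back[t, path[-1]])
            return path[::-1]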

  1. Multi-frame super-resolution with quality self-assessment for retinal fundus videos.

    PubMed

    Köhler, Thomas; Brost, Alexander; Mogalle, Katja; Zhang, Qianyi; Köhler, Christiane; Michelson, Georg; Hornegger, Joachim; Tornow, Ralf P

    2014-01-01

    This paper proposes a novel super-resolution framework to reconstruct high-resolution fundus images from multiple low-resolution video frames in retinal fundus imaging. Natural eye movements during an examination are used as a cue for super-resolution in a robust maximum a-posteriori scheme. In order to compensate for heterogeneous illumination on the fundus, we integrate retrospective illumination correction for photometric registration into the underlying imaging model. Our method utilizes quality self-assessment to provide objective quality scores for reconstructed images as well as to select regularization parameters automatically. In our evaluation on real data acquired from six human subjects with a low-cost video camera, the proposed method achieved considerable enhancements of low-resolution frames and improved noise and sharpness characteristics by 74%. In terms of image analysis, we demonstrate the importance of our method for the improvement of automatic blood vessel segmentation as an example application, where the sensitivity was increased by 13% using super-resolution reconstruction.

  2. Quantitative assessment of human motion using video motion analysis

    NASA Technical Reports Server (NTRS)

    Probe, John D.

    1990-01-01

    In the study of the dynamics and kinematics of the human body, a wide variety of technologies was developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development coupled with recent advances in video technology have allowed video based motion analysis systems to emerge as a cost effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video based Ariel Performance Analysis System to develop data on shirt-sleeved and space-suited human performance in order to plan efficient on orbit intravehicular and extravehicular activities. The system is described.

  3. Toward brain correlates of natural behavior: fMRI during violent video games.

    PubMed

    Mathiak, Klaus; Weber, René

    2006-12-01

    Modern video games represent highly advanced virtual reality simulations and often contain virtual violence. For a significant number of young males, playing video games is a quotidian activity, making it an almost natural behavior. Recordings of brain activation with functional magnetic resonance imaging (fMRI) during gameplay may reflect neuronal correlates of real-life behavior. We recorded 13 experienced gamers (18-26 years; average 14 hrs/week playing) while they played a violent first-person shooter game (a violent computer game played in self-perspective) by means of distortion- and dephasing-reduced fMRI (3 T; single-shot triple-echo echo-planar imaging [EPI]). Content analysis of the video and sound with 100 ms time resolution yielded relevant behavioral variables. These variables explained significant signal variance across large distributed networks. The occurrence of violent scenes revealed significant neuronal correlates in an event-related design. Activation of dorsal and deactivation of rostral anterior cingulate and amygdala characterized the mid-frontal pattern related to virtual violence. Statistics and effect sizes can be considered large at these areas. Optimized imaging strategies allowed for single-subject and single-trial analysis with good image quality at basal brain structures. We propose that virtual environments can be used to study neuronal processes involved in semi-naturalistic behavior as determined by content analysis. Importantly, the activation pattern reflects brain-environment interactions rather than stimulus responses as observed in classical experimental designs. We relate our findings to the general discussion on social effects of playing first-person shooter games. (c) 2006 Wiley-Liss, Inc.

  4. How to Determine the Centre of Mass of Bodies from Image Modelling

    ERIC Educational Resources Information Center

    Dias, Marco Adriano; Carvalho, Paulo Simeão; Rodrigues, Marcelo

    2016-01-01

    Image modelling is a recent technique in physics education that includes digital tools for image treatment and analysis, such as digital stroboscopic photography (DSP) and video analysis software. It is commonly used to analyse the motion of objects. In this work we show how to determine the position of the centre of mass (CM) of objects with…

  5. Automatic target recognition apparatus and method

    DOEpatents

    Baumgart, Chris W.; Ciarcia, Christopher A.

    2000-01-01

    An automatic target recognition apparatus (10) is provided, having a video camera/digitizer (12) for producing a digitized image signal (20) representing an image containing therein objects which objects are to be recognized if they meet predefined criteria. The digitized image signal (20) is processed within a video analysis subroutine (22) residing in a computer (14) in a plurality of parallel analysis chains such that the objects are presumed to be lighter in shading than the background in the image in three of the chains and further such that the objects are presumed to be darker than the background in the other three chains. In two of the chains the objects are defined by surface texture analysis using texture filter operations. In another two of the chains the objects are defined by background subtraction operations. In yet another two of the chains the objects are defined by edge enhancement processes. In each of the analysis chains a calculation operation independently determines an error factor relating to the probability that the objects are of the type which should be recognized, and a probability calculation operation combines the results of the analysis chains.
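
    One of the parallel chains can be illustrated with OpenCV's stock background subtractor standing in for the patent's subtraction operation; the video name and size criterion are placeholders, and the patent's per-chain error-factor calculation is only gestured at in the comment.

        import cv2

        cap = cv2.VideoCapture("camera.avi")      # hypothetical input
        subtractor = cv2.createBackgroundSubtractorMOG2()

        detections = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = subtractor.apply(frame)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            for c in contours:
                if 200 < cv2.contourArea(c) < 5000:   # illustrative size gate
                    # an error factor could be scored here from how far the
                    # blob's area/aspect ratio deviates from the target model
                    detections.append(cv2.boundingRect(c))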

  6. Quantitative assessment of human motion using video motion analysis

    NASA Technical Reports Server (NTRS)

    Probe, John D.

    1993-01-01

    In the study of the dynamics and kinematics of the human body a wide variety of technologies has been developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video based motion analysis systems to emerge as a cost effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video based Ariel Performance Analysis System (APAS) to develop data on shirtsleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities. APAS is a fully integrated system of hardware and software for biomechanics and the analysis of human performance and generalized motion measurement. Major components of the complete system include the video system, the AT compatible computer, and the proprietary software.

  7. A spatiotemporal decomposition strategy for personal home video management

    NASA Astrophysics Data System (ADS)

    Yi, Haoran; Kozintsev, Igor; Polito, Marzia; Wu, Yi; Bouguet, Jean-Yves; Nefian, Ara; Dulong, Carole

    2007-01-01

    With the advent and proliferation of low-cost and high-performance digital video recorder devices, an increasing number of personal home video clips are recorded and stored by consumers. Compared to image data, video data is larger in size and richer in multimedia content. Efficient access to video content is expected to be more challenging than image mining. Previously, we developed a content-based image retrieval system and a benchmarking framework for personal images. In this paper, we extend our personal image retrieval system to include personal home video clips. A possible initial solution to video mining is to represent video clips by a set of key frames extracted from them, thus converting the problem into an image search one. Here we report that a careful selection of key frames may improve the retrieval accuracy. However, because video also has a temporal dimension, its key frame representation is inherently limited. The use of temporal information can give us a better representation of video content at the semantic object and concept levels than an image-only based representation. In this paper we propose a bottom-up framework combining interest point tracking, image segmentation and motion-shape factorization to decompose the video into spatiotemporal regions. We show an example application of activity concept detection using the trajectories extracted from the spatiotemporal regions. The proposed approach shows good potential for concise representation and indexing of objects and their motion in real-life consumer video.
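
    The first stage of the proposed decomposition, interest point tracking, can be sketched with OpenCV's corner detector and pyramidal Lucas-Kanade flow; the file names are placeholders.

        import cv2

        prev = cv2.imread("clip_f000.png", cv2.IMREAD_GRAYSCALE)
        curr = cv2.imread("clip_f001.png", cv2.IMREAD_GRAYSCALE)

        pts = cv2.goodFeaturesToTrack(prev, maxCorners=300,
                                      qualityLevel=0.01, minDistance=7)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)

        # Successfully tracked (start, end) point pairs; chained over frames,
        # these become the trajectories used for concept detection.
        tracks = [(p.ravel(), q.ravel()) for p, q, ok
                  in zip(pts, nxt, status.ravel()) if ok]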

  8. Video Toroid Cavity Imager

    DOEpatents

    Gerald, II, Rex E.; Sanchez, Jairo; Rathke, Jerome W.

    2004-08-10

    A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

  9. Image processing for improved eye-tracking accuracy

    NASA Technical Reports Server (NTRS)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)

    1997-01-01

    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
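
    For the pupil images discussed above, a minimal off-line estimator might look like the following sketch, which thresholds the dark pupil and returns the sub-pixel centroid of the largest connected blob. The fixed threshold and the lack of artifact rejection (eyelashes, corneal reflections) are simplifying assumptions, not the authors' pipeline.

    ```python
    import cv2

    def pupil_center(eye_gray, thresh=40):
        """Estimate the pupil center in an infrared eye image.
        Thresholds the dark pupil, keeps the largest blob, and returns
        its centroid with sub-pixel precision via image moments."""
        _, mask = cv2.threshold(eye_gray, thresh, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        pupil = max(contours, key=cv2.contourArea)  # largest dark blob
        m = cv2.moments(pupil)
        if m["m00"] == 0:
            return None
        return (m["m10"] / m["m00"], m["m01"] / m["m00"])
    ```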

  10. Automated frame selection process for high-resolution microendoscopy

    NASA Astrophysics Data System (ADS)

    Ishijima, Ayumu; Schwarz, Richard A.; Shin, Dongsuk; Mondrik, Sharon; Vigneswaran, Nadarajah; Gillenwater, Ann M.; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca

    2015-04-01

    We developed an automated frame selection algorithm for high-resolution microendoscopy video sequences. The algorithm rapidly selects a representative frame with minimal motion artifact from a short video sequence, enabling fully automated image analysis at the point-of-care. The algorithm was evaluated by quantitative comparison of diagnostically relevant image features and diagnostic classification results obtained using automated frame selection versus manual frame selection. A data set consisting of video sequences collected in vivo from 100 oral sites and 167 esophageal sites was used in the analysis. The area under the receiver operating characteristic curve was 0.78 (automated selection) versus 0.82 (manual selection) for oral sites, and 0.93 (automated selection) versus 0.92 (manual selection) for esophageal sites. The implementation of fully automated high-resolution microendoscopy at the point-of-care has the potential to reduce the number of biopsies needed for accurate diagnosis of precancer and cancer in low-resource settings where there may be limited infrastructure and personnel for standard histologic analysis.
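
    A simplified stand-in for such a frame selector is sketched below: each frame is scored by its mean absolute difference from the previous frame, and the frame with the least inter-frame motion wins. The published algorithm uses more elaborate criteria; this only illustrates the selection loop.

    ```python
    import cv2
    import numpy as np

    def select_representative_frame(path):
        """Return the frame of a short clip with minimal motion artifact,
        scored here as mean absolute difference to the preceding frame."""
        cap = cv2.VideoCapture(path)
        best, best_motion, prev = None, np.inf, None
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
            if prev is not None:
                motion = float(np.abs(gray - prev).mean())
                if motion < best_motion:
                    best, best_motion = frame, motion
            prev = gray
        cap.release()
        return best
    ```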

  11. Software for Real-Time Analysis of Subsonic Test Shot Accuracy

    DTIC Science & Technology

    2014-03-01

    used the C++ programming language, the Open Source Computer Vision (OpenCV®) software library, and Microsoft Windows® Application Programming...video for comparison through OpenCV image analysis tools. Based on the comparison, the software then computed the coordinates of each shot relative to...DWB researchers wanted to use the Open Source Computer Vision (OpenCV) software library for capturing and analyzing frames of video. OpenCV contains

  12. TRECVID: the utility of a content-based video retrieval evaluation

    NASA Astrophysics Data System (ADS)

    Hauptmann, Alexander G.

    2006-01-01

    TRECVID, an annual retrieval evaluation benchmark organized by NIST, encourages research in information retrieval from digital video. TRECVID benchmarking covers both interactive and manual searching by end users, as well as the benchmarking of some supporting technologies including shot boundary detection, extraction of semantic features, and the automatic segmentation of TV news broadcasts. Evaluations done in the context of the TRECVID benchmarks show that, generally, speech transcripts and annotations provide the single most important clue for successful retrieval. However, automatically finding the individual images is still a tremendous and unsolved challenge. The evaluations repeatedly found that none of the multimedia analysis and retrieval techniques provide a significant benefit over retrieval using only textual information such as automatic speech recognition transcripts or closed captions. In interactive systems, we do find significant differences among the top systems, indicating that interfaces can make a huge difference for effective video/image search. For interactive tasks, efficient interfaces require few key clicks but display large numbers of images for visual inspection by the user. Text search generally finds the right context region in the video, but selecting specific relevant images requires good interfaces for easily browsing the storyboard pictures. In general, TRECVID has motivated the video retrieval community to be honest about what we don't know how to do well (sometimes through painful failures), and has focused us on the actual task of video retrieval, as opposed to flashy demos based on technological capabilities.

  13. Review of passive-blind detection in digital video forgery based on sensing and imaging techniques

    NASA Astrophysics Data System (ADS)

    Tao, Junjie; Jia, Lili; You, Ying

    2016-01-01

    Advances in digital video compression and IP communication technologies have raised new issues and challenges concerning the integrity and authenticity of surveillance videos. It is essential that the system ensure that, once recorded, the video cannot be altered, so that the audit trail is intact for evidential purposes. This paper gives an overview of passive techniques of digital video forensics, which are based on intrinsic fingerprints inherent in digital surveillance videos. We performed a thorough survey of the literature on video manipulation detection methods that accomplish blind authentication without referring to any auxiliary information. We present a review of the various existing methods; much more work remains to be done in this field of video forensics based on video data analysis and observation of surveillance systems.

  14. Video content parsing based on combined audio and visual information

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1999-08-01

    While previous research on audiovisual data segmentation and indexing primarily focuses on the pictorial part, significant clues contained in the accompanying audio flow are often ignored. A fully functional system for video content parsing can be achieved more successfully through a proper combination of audio and visual information. By investigating the data structure of different video types, we present tools for both audio and visual content analysis and a scheme for video segmentation and annotation in this research. In the proposed system, video data are segmented into audio scenes and visual shots by detecting abrupt changes in audio and visual features, respectively. Then, each audio scene is categorized and indexed as one of the basic audio types, while a visual shot is represented by keyframes and associated image features. An index table is then generated automatically for each video clip based on the integration of outputs from audio and visual analysis. It is shown that the proposed system provides satisfactory video indexing results.
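
    The visual half of such a scheme is typically built on abrupt-change detection between consecutive frames. The sketch below uses the Bhattacharyya distance between color histograms as a cut detector; the threshold is arbitrary, and the audio path is omitted.

    ```python
    import cv2

    def visual_shot_boundaries(path, threshold=0.5):
        """Return frame indices where an abrupt visual change occurs,
        measured as Bhattacharyya distance between RGB histograms."""
        cap = cv2.VideoCapture(path)
        cuts, prev_hist, index = [], None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                                [0, 256, 0, 256, 0, 256])
            hist = cv2.normalize(hist, hist).flatten()
            if prev_hist is not None and cv2.compareHist(
                    prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
                cuts.append(index)
            prev_hist, index = hist, index + 1
        cap.release()
        return cuts
    ```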

  15. EVA: laparoscopic instrument tracking based on Endoscopic Video Analysis for psychomotor skills assessment.

    PubMed

    Oropesa, Ignacio; Sánchez-González, Patricia; Chmarra, Magdalena K; Lamata, Pablo; Fernández, Alvaro; Sánchez-Margallo, Juan A; Jansen, Frank Willem; Dankelman, Jenny; Sánchez-Margallo, Francisco M; Gómez, Enrique J

    2013-03-01

    The EVA (Endoscopic Video Analysis) tracking system is a new system for extracting motions of laparoscopic instruments based on nonobtrusive video tracking. The feasibility of using EVA in laparoscopic settings has been tested in a box trainer setup. EVA makes use of an algorithm that employs information of the laparoscopic instrument's shaft edges in the image, the instrument's insertion point, and the camera's optical center to track the three-dimensional position of the instrument tip. A validation study of EVA comprised a comparison of the measurements achieved with EVA and the TrEndo tracking system. To this end, 42 participants (16 novices, 22 residents, and 4 experts) were asked to perform a peg transfer task in a box trainer. Ten motion-based metrics were used to assess their performance. Construct validation of the EVA has been obtained for seven motion-based metrics. Concurrent validation revealed that there is a strong correlation between the results obtained by EVA and the TrEndo for metrics, such as path length (ρ = 0.97), average speed (ρ = 0.94), or economy of volume (ρ = 0.85), proving the viability of EVA. EVA has been successfully validated in a box trainer setup, showing the potential of endoscopic video analysis to assess laparoscopic psychomotor skills. The results encourage further implementation of video tracking in training setups and image-guided surgery.
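
    Two of the reported motion-based metrics, path length and average speed, follow directly from a tracked tip trajectory, as in the illustrative sketch below (not EVA's implementation).

    ```python
    import numpy as np

    def motion_metrics(tip_xyz, dt):
        """Path length and average speed from an (N, 3) array of
        instrument-tip positions sampled every dt seconds."""
        steps = np.diff(tip_xyz, axis=0)             # per-sample displacements
        seg_lengths = np.linalg.norm(steps, axis=1)  # per-sample distances
        path_length = float(seg_lengths.sum())
        average_speed = path_length / (dt * len(seg_lengths))
        return path_length, average_speed
    ```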

  16. Keyhole imaging method for dynamic objects behind the occlusion area

    NASA Astrophysics Data System (ADS)

    Hao, Conghui; Chen, Xi; Dong, Liquan; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Hui, Mei; Liu, Xiaohua; Wu, Hong

    2018-01-01

    A method of keyhole imaging based on a camera array is realized to obtain video images from behind a keyhole in a shielded space at a relatively long distance. We obtain multi-angle video images by using a 2×2 CCD camera array to capture the scene behind the keyhole from four directions, and the images are saved as frame sequences. This paper presents a method of video frame alignment. In order to remove the non-target area outside the aperture, we use the Canny operator and morphological operations to detect the image edges and fill the images. The stitching of the four images is built on an image stitching algorithm for two images: the SIFT method is adopted to accomplish the initial matching of the images, and the RANSAC algorithm is then applied to eliminate wrong matching points and obtain a homography matrix. A method of optimizing the transformation matrix is also proposed. Finally, a video image with a larger field of view behind the keyhole is synthesized from the frame sequences, with every single frame stitched. The results show that the resulting video is clear and natural and the brightness transition is smooth. There are no obvious stitching artifacts in the video, and the method can be applied in different engineering environments.
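
    The pairwise stitching step described here maps onto standard OpenCV calls. The sketch below performs SIFT matching with a ratio test, RANSAC homography estimation, and perspective warping; seam blending and the authors' transformation-matrix optimization are omitted.

    ```python
    import cv2
    import numpy as np

    def stitch_pair(img1, img2):
        """Warp img2 into img1's frame via SIFT + RANSAC homography."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d2, d1, k=2)
        # Lowe's ratio test discards ambiguous matches.
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        # RANSAC rejects wrong matches while fitting the homography.
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = img1.shape[:2]
        canvas = cv2.warpPerspective(img2, H, (w * 2, h))
        canvas[0:h, 0:w] = img1  # keep the reference image over the overlap
        return canvas
    ```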

  17. Video Extrapolation Method Based on Time-Varying Energy Optimization and CIP.

    PubMed

    Sakaino, Hidetomo

    2016-09-01

    Video extrapolation/prediction methods are often used to synthesize new videos from images. For fluid-like images and dynamic textures as well as moving rigid objects, most state-of-the-art video extrapolation methods use non-physics-based models that learn orthogonal bases from a number of images, but at high computation cost. Unfortunately, data truncation can cause image degradation, i.e., blur, artifacts, and insufficient motion changes. To extrapolate videos that more strictly follow physical rules, this paper proposes a physics-based method that needs only a few images and is truncation-free. We utilize physics-based equations with image intensity and velocity: optical flow, Navier-Stokes, continuity, and advection equations. These allow us to use partial difference equations to deal with local image feature changes. Image degradation during extrapolation is minimized by updating model parameters with a novel time-varying energy balancer that uses energy-based image features, i.e., texture, velocity, and edge. Moreover, the advection equation is discretized by a high-order constrained interpolation profile (CIP) for lower quantization error than can be achieved by the previous finite difference method in long-term videos. Experiments show that the proposed energy-based video extrapolation method outperforms state-of-the-art video extrapolation methods in terms of image quality and computation cost.
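
    To see why the discretization matters, compare CIP with the first-order upwind scheme it improves on: upwind is stable but numerically diffusive, so sharp image features smear over long extrapolations. A minimal 1-D sketch of the baseline scheme (our illustration, not the paper's solver):

    ```python
    import numpy as np

    def advect_upwind(q, u, dx, dt, steps):
        """March dq/dt + u * dq/dx = 0 with first-order upwind
        differencing and periodic boundaries. The accumulating numerical
        diffusion is the error a high-order CIP scheme reduces."""
        c = u * dt / dx  # Courant number; need |c| <= 1 for stability
        assert abs(c) <= 1.0, "unstable: reduce dt or refine dx"
        for _ in range(steps):
            if u >= 0:
                q = q - c * (q - np.roll(q, 1))    # backward difference
            else:
                q = q - c * (np.roll(q, -1) - q)   # forward difference
        return q
    ```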

  18. Text Detection, Tracking and Recognition in Video: A Comprehensive Survey.

    PubMed

    Yin, Xu-Cheng; Zuo, Ze-Yu; Tian, Shu; Liu, Cheng-Lin

    2016-04-14

    Intelligent analysis of video data is currently in wide demand because video is a major source of sensory data in our lives. Text is a prominent and direct source of information in video, yet recent surveys of text detection and recognition in imagery [1], [2] focus mainly on text extraction from scene images. This paper presents a comprehensive survey of text detection, tracking, and recognition in video, with three major contributions. First, a generic framework is proposed for video text extraction that uniformly describes detection, tracking, recognition, and their relations and interactions. Second, within this framework, a variety of methods, systems, and evaluation protocols of video text extraction are summarized, compared, and analyzed; existing text tracking techniques and tracking-based detection and recognition techniques are specifically highlighted. Third, related applications, prominent challenges, and future directions for video text extraction (especially from scene videos and web videos) are thoroughly discussed.

  19. Video-based face recognition via convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming

    2017-06-01

    Face recognition has been widely studied recently, while video-based face recognition remains a challenging task because of the low quality and large intra-class variation of video-captured face images. In this paper, we focus on two scenarios of video-based face recognition: 1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; and 2) Video-to-Still (V2S) face recognition, the reverse of the S2V scenario. A novel method is proposed to transfer still and video face images to a Euclidean space by a carefully designed convolutional neural network; Euclidean metrics are then used to measure the distance between still and video images. Identities of still and video images that group as pairs are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and expanding vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed by the large margin of still images. Transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset. Experimental results show that our method achieves reliable performance compared with other state-of-the-art methods.

  20. Wireless live streaming video of laparoscopic surgery: a bandwidth analysis for handheld computers.

    PubMed

    Gandsas, Alex; McIntire, Katherine; George, Ivan M; Witzke, Wayne; Hoskins, James D; Park, Adrian

    2002-01-01

    Over the last six years, streaming media has emerged as a powerful tool for delivering multimedia content over networks. Concurrently, wireless technology has evolved, freeing users from desktop boundaries and wired infrastructures. At the University of Kentucky Medical Center, we have integrated these technologies to develop a system that can wirelessly transmit live surgery from the operating room to a handheld computer. This study establishes the feasibility of using our system to view surgeries and describes the effect of bandwidth on image quality. A live laparoscopic ventral hernia repair was transmitted to a single handheld computer using five encoding speeds at a constant frame rate, and the quality of the resulting streaming images was evaluated. No video images were rendered when video data were encoded at 28.8 kilobits per second (Kbps), the slowest encoding bitrate studied. The highest quality images were rendered at encoding speeds greater than or equal to 150 Kbps. Of note, a 15-second transmission delay was experienced using all four encoding schemes that rendered video images. We believe that the wireless transmission of streaming video to handheld computers has tremendous potential to enhance surgical education. For medical students and residents, the ability to view live surgeries, lectures, courses, and seminars on handheld computers means a larger number of learning opportunities. In addition, we envision that wireless-enabled devices may be used to telemonitor surgical procedures. However, bandwidth availability and streaming delay are major issues that must be addressed before wireless telementoring becomes a reality.

  1. VASIR: An Open-Source Research Platform for Advanced Iris Recognition Technologies.

    PubMed

    Lee, Yooyoung; Micheals, Ross J; Filliben, James J; Phillips, P Jonathon

    2013-01-01

    The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions due to illuminations, environments, and subject characteristics (e.g., distance, movement, face/body visibility, blinking, etc.). VASIR (Video-based Automatic System for Iris Recognition) is a state-of-the-art NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new emerging iris recognition paradigm) NIST's measurement mission. VASIR is designed to accommodate both ideal (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: 1) Image Acquisition, 2) Video Processing, and 3) Iris Recognition. Each module consists of several sub-components that have been optimized by use of rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR performance using the MBGC (Multiple Biometric Grand Challenge) NIR (Near-Infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-based dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates for the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate their algorithm performance, and thus serves as a valuable research platform.

  2. VASIR: An Open-Source Research Platform for Advanced Iris Recognition Technologies

    PubMed Central

    Lee, Yooyoung; Micheals, Ross J; Filliben, James J; Phillips, P Jonathon

    2013-01-01

    The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions due to illuminations, environments, and subject characteristics (e.g., distance, movement, face/body visibility, blinking, etc.). VASIR (Video-based Automatic System for Iris Recognition) is a state-of-the-art NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new emerging iris recognition paradigm) NIST’s measurement mission. VASIR is designed to accommodate both ideal (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: 1) Image Acquisition, 2) Video Processing, and 3) Iris Recognition. Each module consists of several sub-components that have been optimized by use of rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR performance using the MBGC (Multiple Biometric Grand Challenge) NIR (Near-Infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-based dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates for the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate their algorithm performance, and thus serves as a valuable research platform. PMID:26401431

  3. Localizing Target Structures in Ultrasound Video

    PubMed Central

    Kwitt, R.; Vasconcelos, N.; Razzaque, S.; Aylward, S.

    2013-01-01

    The problem of localizing specific anatomic structures using ultrasound (US) video is considered. This involves automatically determining when an US probe is acquiring images of a previously defined object of interest, during the course of an US examination. Localization using US is motivated by the increased availability of portable, low-cost US probes, which inspire applications where inexperienced personnel and even first-time users acquire US data that is then sent to experts for further assessment. This process is of particular interest for routine examinations in underserved populations as well as for patient triage after natural disasters and large-scale accidents, where experts may be in short supply. The proposed localization approach is motivated by research in the area of dynamic texture analysis and leverages several recent advances in the field of activity recognition. For evaluation, we introduce an annotated and publicly available database of US video, acquired on three phantoms. Several experiments reveal the challenges of applying video analysis approaches to US images and demonstrate that good localization performance is possible with the proposed solution. PMID:23746488

  4. Automatic treatment of flight test images using modern tools: SAAB and Aeritalia joint approach

    NASA Astrophysics Data System (ADS)

    Kaelldahl, A.; Duranti, P.

    The use of onboard cine cameras, as well as that of on-ground cinetheodolites, is very popular in flight tests. The high resolution of film and the high frame rate of cine cameras are still not exceeded by video technology. Video technology can successfully enter the flight test scenario once the availability of solid-state optical sensors dramatically reduces the dimensions and weight of TV cameras, thus allowing them to be located in positions compatible with space or operational limitations (e.g., HUD cameras). A proper combination of cine and video cameras is the typical solution for a complex flight test program. The output of such devices is very helpful in many flight areas. Several successful applications of this technology are summarized. Analysis of the large amount of data produced (frames of images) requires a very long time, and the analysis is normally carried out manually. In order to improve the situation, several flight test centers have in recent years devoted their attention to techniques that allow quicker and more effective image treatment.

  5. Enhanced Video-Oculography System

    NASA Technical Reports Server (NTRS)

    Moore, Steven T.; MacDougall, Hamish G.

    2009-01-01

    A previously developed video-oculography system has been enhanced for use in measuring vestibulo-ocular reflexes of a human subject in a centrifuge, motor vehicle, or other setting. The system as previously developed included a lightweight digital video camera mounted on goggles. The left eye was illuminated by an infrared light-emitting diode via a dichroic mirror, and the camera captured images of the left eye in infrared light. To extract eye-movement data, the digitized video images were processed by software running in a laptop computer. Eye movements were calibrated by having the subject view a target pattern, fixed with respect to the subject's head, generated by a goggle-mounted laser with a diffraction grating. The system as enhanced includes a second camera for imaging the scene from the subject's perspective, and two inertial measurement units (IMUs) for measuring linear accelerations and rates of rotation for computing head movements. One IMU is mounted on the goggles, the other on the centrifuge or vehicle frame. All eye-movement and head-motion data are time-stamped. In addition, the subject's point of regard is superimposed on each scene image to enable analysis of patterns of gaze in real time.

  6. Informative-frame filtering in endoscopy videos

    NASA Astrophysics Data System (ADS)

    An, Yong Hwan; Hwang, Sae; Oh, JungHwan; Lee, JeongKyu; Tavanapong, Wallapak; de Groen, Piet C.; Wong, Johnny

    2005-04-01

    Advances in video technology are being incorporated into today's healthcare practice. For example, colonoscopy is an important screening tool for colorectal cancer. Colonoscopy allows for the inspection of the entire colon and provides the ability to perform a number of therapeutic operations during a single procedure. During a colonoscopic procedure, a tiny video camera at the tip of the endoscope generates a video signal of the internal mucosa of the colon. The video data are displayed on a monitor for real-time analysis by the endoscopist. Other endoscopic procedures include upper gastrointestinal endoscopy, enteroscopy, bronchoscopy, cystoscopy, and laparoscopy. However, a significant number of out-of-focus frames are included in this type of video, since current endoscopes are equipped with a single, wide-angle lens that cannot be focused. The out-of-focus frames do not hold any useful information. To reduce the burden of further processes such as computer-aided image processing or human experts' examinations, these frames need to be removed. We call an out-of-focus frame a non-informative frame and an in-focus frame an informative frame. We propose a new technique to classify the video frames into two classes, informative and non-informative, using a combination of Discrete Fourier Transform (DFT), texture analysis, and k-means clustering. The proposed technique can evaluate the frames without any reference image and does not need any predefined threshold value. Our experimental studies indicate that it achieves over 96% on four different performance metrics (i.e., precision, sensitivity, specificity, and accuracy).
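
    A toy version of this classifier can score each frame by the fraction of spectral energy outside a low-frequency core (out-of-focus frames score low) and let k-means split the scores into two classes, as sketched below; the published method combines DFT features with texture analysis.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def hf_energy(gray):
        """Fraction of DFT power outside the central low-frequency
        block -- a simple focus proxy for one grayscale frame."""
        f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float32)))
        power = np.abs(f) ** 2
        h, w = power.shape
        ch, cw = h // 2, w // 2
        core = power[ch - h // 8:ch + h // 8, cw - w // 8:cw + w // 8].sum()
        return float(1.0 - core / power.sum())

    def informative_mask(frames):
        """Cluster per-frame scores into two groups; the cluster with
        the higher mean score is taken to be the informative one."""
        feats = np.array([[hf_energy(f)] for f in frames])
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)
        hi = int(feats[labels == 1].mean() > feats[labels == 0].mean())
        return labels == hi
    ```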

  7. Video analysis of the biomechanics of a bicycle accident resulting in significant facial fractures.

    PubMed

    Syed, Shameer H; Willing, Ryan; Jenkyn, Thomas R; Yazdani, Arjang

    2013-11-01

    This study aimed to use video analysis techniques to determine the velocity, impact force, angle of impact, and impulse to fracture involved in a video-recorded bicycle accident resulting in facial fractures. Computed tomographic images of the resulting facial injury are presented for correlation with the data and calculations. To our knowledge, such an analysis of an actual recorded trauma has not been reported in the literature. A video recording of the accident was split into frames and analyzed using an image editing program. Measurements of velocity and angle of impact were obtained from this analysis, and the force of impact and impulse were calculated using the inverse dynamic method with connected rigid body segments. These results were then correlated with the actual fracture pattern found on computed tomographic imaging of the subject's face. There was an impact velocity of 6.25 m/s, impact angles of 14 and 6.3 degrees of neck extension and axial rotation, respectively, an impact force of 1910.4 N, and an impulse to fracture of 47.8 Ns. These physical parameters resulted in clinically significant bilateral mid-facial Le Fort II and III pattern fractures. These data further the understanding of the biomechanics of bicycle-related accidents by correlating an actual clinical outcome with the kinematic and dynamic parameters involved in the accident itself, yielding concrete evidence of the velocity, force, and impulse necessary to cause clinically significant facial trauma. These findings can aid in the design of protective equipment for bicycle riders to help avoid this type of injury.
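
    The reported figures can be cross-checked with elementary impulse-momentum relations. The back-of-envelope calculation below is our illustration, not part of the study:

    ```python
    # Reported measurements from the video and inverse-dynamics analysis.
    impact_velocity = 6.25   # m/s
    impact_force = 1910.4    # N
    impulse = 47.8           # N*s (impulse to fracture)

    # Impulse = force x effective contact time, so the implied duration:
    contact_time = impulse / impact_force         # ~0.025 s (25 ms)
    # Impulse = effective mass x velocity change (impact arrested at the face):
    effective_mass = impulse / impact_velocity    # ~7.6 kg

    print(f"implied contact time: {contact_time * 1000:.1f} ms")
    print(f"implied effective mass: {effective_mass:.1f} kg")
    ```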

  8. Mass-storage management for distributed image/video archives

    NASA Astrophysics Data System (ADS)

    Franchi, Santina; Guarda, Roberto; Prampolini, Franco

    1993-04-01

    The realization of an image/video database requires a specific design for both database structures and mass storage management. This issue was addressed in the project of the digital image/video database system designed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog image/video coding techniques with the related parameters and the description of image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server: because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the description of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management: they allow devices to be cataloged and device status and network location to be modified. The medium level manages image/video files on a physical basis and handles file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move, and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to fit delivery/visualization requirements and to reduce archiving costs.

  9. Accuracy of complete-arch model using an intraoral video scanner: An in vitro study.

    PubMed

    Jeong, Il-Do; Lee, Jae-Jun; Jeon, Jin-Hun; Kim, Ji-Hwan; Kim, Hae-Young; Kim, Woong-Chul

    2016-06-01

    Information on the accuracy of intraoral video scanners for long-span areas is limited. The purpose of this in vitro study was to evaluate and compare the trueness and precision of an intraoral video scanner, an intraoral still image scanner, and a blue-light scanner for the production of digital impressions. Reference scan data were obtained by scanning a complete-arch model. An identical model was scanned 8 times using an intraoral video scanner (CEREC Omnicam; Sirona) and an intraoral still image scanner (CEREC Bluecam; Sirona), and stone casts made from conventional impressions of the same model were scanned 8 times with a blue-light scanner as a control (Identica Blue; Medit). Accuracy consists of trueness (the extent to which the scan data differ from the reference scan) and precision (the similarity of the data from multiple scans). To evaluate precision, 8 scans were superimposed using 3-dimensional analysis software; the reference scan data were then superimposed to determine the trueness. Differences were analyzed using 1-way ANOVA and post hoc Tukey HSD tests (α=.05). Trueness in the video scanner group was not significantly different from that in the control group. However, the video scanner group showed significantly lower values than those of the still image scanner group for all variables (P<.05), except in tolerance range. The root mean square, standard deviations, and mean negative precision values for the video scanner group were significantly higher than those for the other groups (P<.05). Digital impressions obtained by the intraoral video scanner showed better accuracy for long-span areas than those captured by the still image scanner. However, the video scanner was less accurate than the laboratory scanner.

  10. Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis, and yaw axis of the video camera, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.

  11. A Video Method to Study Drosophila Sleep

    PubMed Central

    Zimmerman, John E.; Raizen, David M.; Maycock, Matthew H.; Maislin, Greg; Pack, Allan I.

    2008-01-01

    Study Objectives: To use video to determine the accuracy of the infrared beam-splitting method for measuring sleep in Drosophila and to determine the effect of time of day, sex, genotype, and age on sleep measurements. Design: A digital image analysis method based on frame subtraction principle was developed to distinguish a quiescent from a moving fly. Data obtained using this method were compared with data obtained using the Drosophila Activity Monitoring System (DAMS). The location of the fly was identified based on its centroid location in the subtracted images. Measurements and Results: The error associated with the identification of total sleep using DAMS ranged from 7% to 95% and depended on genotype, sex, age, and time of day. The degree of the total sleep error was dependent on genotype during the daytime (P < 0.001) and was dependent on age during both the daytime and the nighttime (P < 0.001 for both). The DAMS method overestimated sleep bout duration during both the day and night, and the degree of these errors was genotype dependent (P < 0.001). Brief movements that occur during sleep bouts can be accurately identified using video. Both video and DAMS detected a homeostatic response to sleep deprivation. Conclusions: Video digital analysis is more accurate than DAMS in fly sleep measurements. In particular, conclusions drawn from DAMS measurements regarding daytime sleep and sleep architecture should be made with caution. Video analysis also permits the assessment of fly position and brief movements during sleep. Citation: Zimmerman JE; Raizen DM; Maycock MH; Maislin G; Pack AI. A video method to study drosophila sleep. SLEEP 2008;31(11):1587–1598. PMID:19014079
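
    The frame-subtraction principle at the core of the method is compact in code. The sketch below flags a fly as quiescent when too few pixels change between consecutive frames; both thresholds are placeholders that would need calibration against real recordings.

    ```python
    import numpy as np

    def is_quiescent(frame_a, frame_b, pixel_thresh=25, count_thresh=10):
        """True if fewer than count_thresh pixels changed by more than
        pixel_thresh gray levels between two grayscale frames."""
        diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
        return int((diff > pixel_thresh).sum()) < count_thresh

    def fly_location(diff_mask):
        """Centroid of changed pixels -- a crude position estimate."""
        ys, xs = np.nonzero(diff_mask)
        return (xs.mean(), ys.mean()) if len(xs) else None
    ```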

  12. LBP based detection of intestinal motility in WCE images

    NASA Astrophysics Data System (ADS)

    Gallo, Giovanni; Granata, Eliana

    2011-03-01

    In this research study, a system to support medical analysis of intestinal contractions by processing WCE images is presented. Small intestine contractions are among the motility patterns which reveal many gastrointestinal disorders, such as functional dyspepsia, paralytic ileus, irritable bowel syndrome, and bacterial overgrowth. The images have been obtained using the Wireless Capsule Endoscopy (WCE) technique, a patented disposable video color-imaging capsule. Manual annotation of contractions is a laborious task, since the recording device of the capsule stores about 50,000 images and contractions might represent only 1% of the whole video. In this paper we propose the use of Local Binary Patterns (LBP) combined with powerful texton statistics to find the frames of the video related to contractions. We achieve a sensitivity of about 80% and a specificity of about 99%. The high detection accuracy of the proposed system indicates that such intelligent schemes could be used as a supplementary diagnostic tool in endoscopy.
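
    The per-frame LBP descriptor in such a system can be sketched with scikit-image, as below; the texton statistics and the contraction classifier built on top of it are not shown.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_histogram(gray, P=8, R=1):
        """Normalized uniform-LBP histogram for one WCE frame: P
        neighbors on a circle of radius R give P + 2 uniform codes."""
        codes = local_binary_pattern(gray, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
        return hist  # per-frame feature vector for the classifier
    ```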

  13. Mode extraction on wind turbine blades via phase-based video motion estimation

    NASA Astrophysics Data System (ADS)

    Sarrafi, Aral; Poozesh, Peyman; Niezrecki, Christopher; Mao, Zhu

    2017-04-01

    In recent years, image processing techniques have been applied more often for structural dynamics identification, characterization, and structural health monitoring. Although it is a non-contact and full-field measurement method, image processing still has a long way to go to outperform other conventional sensing instruments (e.g., accelerometers, strain gauges, laser vibrometers). However, the technologies associated with image processing are developing rapidly and gaining more attention in a variety of engineering applications, including structural dynamics identification and modal analysis. Among numerous motion estimation and image-processing methods, phase-based video motion estimation is considered one of the most efficient in terms of computational cost and noise robustness. In this paper, phase-based video motion estimation is adopted for structural dynamics characterization on a 2.3-meter-long Skystream wind turbine blade, and the modal parameters (natural frequencies, operating deflection shapes) are extracted. The phase-based video processing adopted in this paper provides reliable full-field 2-D motion information, which is beneficial for manufacturing certification and model updating at the design stage. The approach is demonstrated by processing data on a full-scale commercial structure (i.e., a wind turbine blade) with complex geometry and properties, and the results obtained correlate well with the modal parameters extracted from accelerometer measurements, especially for the first four bending modes, which have significant importance in blade characterization.

  14. Abnormal Image Detection in Endoscopy Videos Using a Filter Bank and Local Binary Patterns

    PubMed Central

    Nawarathna, Ruwan; Oh, JungHwan; Muthukudage, Jayantha; Tavanapong, Wallapak; Wong, Johnny; de Groen, Piet C.; Tang, Shou Jiang

    2014-01-01

    Finding mucosal abnormalities (e.g., erythema, blood, ulcer, erosion, and polyp) is one of the most essential tasks during endoscopy video review. Since these abnormalities typically appear in a small number of frames (around 5% of the total frame number), automated detection of frames with an abnormality can save physician’s time significantly. In this paper, we propose a new multi-texture analysis method that effectively discerns images showing mucosal abnormalities from the ones without any abnormality since most abnormalities in endoscopy images have textures that are clearly distinguishable from normal textures using an advanced image texture analysis method. The method uses a “texton histogram” of an image block as features. The histogram captures the distribution of different “textons” representing various textures in an endoscopy image. The textons are representative response vectors of an application of a combination of Leung and Malik (LM) filter bank (i.e., a set of image filters) and a set of Local Binary Patterns on the image. Our experimental results indicate that the proposed method achieves 92% recall and 91.8% specificity on wireless capsule endoscopy (WCE) images and 91% recall and 90.8% specificity on colonoscopy images. PMID:25132723

  15. Initial clinical experience with a video-based patient positioning system.

    PubMed

    Johnson, L S; Milliken, B D; Hadley, S W; Pelizzari, C A; Haraf, D J; Chen, G T

    1999-08-01

    To report initial clinical experience with an interactive, video-based patient positioning system that is inexpensive, quick, accurate, and easy to use. System hardware includes two black-and-white CCD cameras, zoom lenses, and a PC equipped with a frame grabber. Custom software is used to acquire and archive video images, as well as to display real-time subtraction images revealing patient misalignment in multiple views. Two studies are described. In the first study, video is used to document the daily setup histories of 5 head and neck patients. Time-lapse cine loops are generated for each patient and used to diagnose and correct common setup errors. In the second study, 6 twice-daily (BID) head and neck patients are positioned according to the following protocol: at AM setups conventional treatment room lasers are used; at PM setups lasers are used initially and then video is used for 1-2 minutes to fine-tune the patient position. Lateral video images and lateral verification films are registered off-line to compare the distribution of setup errors per patient, with and without video assistance. In the first study, video images were used to determine the accuracy of our conventional head and neck setup technique, i.e., alignment of lightcast marks and surface anatomy to treatment room lasers and the light field. For this initial cohort of patients, errors ranged from sigma = 5 to 7 mm and were patient-specific. Time-lapse cine loops of the images revealed sources of the error, and as a result, our localization techniques and immobilization device were modified to improve setup accuracy. After the improvements, conventional setup errors were reduced to sigma = 3 to 5 mm. In the second study, when a stereo pair of live subtraction images were introduced to perform daily "on-line" setup correction, errors were reduced to sigma = 1 to 3 mm. Results depended on patient health and cooperation and the length of time spent fine-tuning the position. An interactive, video-based patient positioning system was shown to reduce setup errors to within 1 to 3 mm in head and neck patients, without a significant increase in overall treatment time or labor-intensive procedures. Unlike retrospective portal image analysis, use of two live-video images provides the therapists with immediate feedback and allows for true 3-D positioning and correction of out-of-plane rotation before radiation is delivered. With significant improvement in head and neck alignment and the elimination of setup errors greater than 3 to 5 mm, margins associated with treatment volumes potentially can be reduced, thereby decreasing normal tissue irradiation.

  16. What do we do with all this video? Better understanding public engagement for image and video annotation

    NASA Astrophysics Data System (ADS)

    Wiener, C.; Miller, A.; Zykov, V.

    2016-12-01

    Advanced robotic vehicles are increasingly being used by oceanographic research vessels to enable more efficient and widespread exploration of the ocean, particularly the deep ocean. With cutting-edge capabilities mounted onto robotic vehicles, data at high resolutions is being generated more than ever before, enabling enhanced data collection and the potential for broader participation. For example, high resolution camera technology not only improves visualization of the ocean environment, but also expands the capacity to engage participants remotely through increased use of telepresence and virtual reality techniques. Schmidt Ocean Institute is a private, non-profit operating foundation established to advance the understanding of the world's oceans through technological advancement, intelligent observation and analysis, and open sharing of information. Telepresence-enabled research is an important component of Schmidt Ocean Institute's science research cruises, which this presentation will highlight. Schmidt Ocean Institute is one of the only research programs that make their entire underwater vehicle dive series available online, creating a collection of video that enables anyone to follow deep sea research in real time. We encourage students, educators and the general public to take advantage of freely available dive videos. Additionally, other SOI-supported internet platforms, have engaged the public in image and video annotation activities. Examples of these new online platforms, which utilize citizen scientists to annotate scientific image and video data will be provided. This presentation will include an introduction to SOI-supported video and image tagging citizen science projects, real-time robot tracking, live ship-to-shore communications, and an array of outreach activities that enable scientists to interact with the public and explore the ocean in fascinating detail.

  17. Using Deep Learning Algorithm to Enhance Image-review Software for Surveillance Cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Yonggang; Thomas, Maikael A.

    We propose the development of proven deep learning algorithms to flag objects and events of interest in Next Generation Surveillance System (NGSS) surveillance to make IAEA image review more efficient. Video surveillance is one of the core monitoring technologies used by the IAEA Department of Safeguards when implementing safeguards at nuclear facilities worldwide. The current image review software GARS has limited automated functions, such as scene-change detection, black image detection, and missing scene analysis, but struggles with highly cluttered backgrounds. A cutting-edge algorithm to be developed in this project will enable efficient and effective searches in images and video streams by identifying and tracking safeguards-relevant objects and detecting anomalies in their vicinity. In this project, we will develop the algorithm, test it with the IAEA surveillance cameras and data sets collected at simulated nuclear facilities at BNL and SNL, and implement it in a software program for potential integration into the IAEA's IRAP (Integrated Review and Analysis Program).

  18. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...

  19. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...

  20. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...

  1. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...

  2. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...

  3. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  4. Comparison of form in potential functions while maintaining upright posture during exposure to stereoscopic video clips.

    PubMed

    Kutsuna, Kenichiro; Matsuura, Yasuyuki; Fujikake, Kazuhiro; Miyao, Masaru; Takada, Hiroki

    2013-01-01

    Visually induced motion sickness (VIMS) is caused by sensory conflict, the disagreement between vergence and visual accommodation while observing stereoscopic images. VIMS can be measured by psychological and physiological methods. We propose a mathematical methodology to measure the effect of three-dimensional (3D) images on the equilibrium function. In this study, body sway in the resting state is compared with that during exposure to 3D video clips on a liquid crystal display (LCD) and on a head-mounted display (HMD). In addition, the Simulator Sickness Questionnaire (SSQ) was completed immediately afterward. Based on the statistical analysis of the SSQ subscores and each index for stabilograms, we succeeded in quantifying the VIMS during exposure to the stereoscopic images. Moreover, we discuss the change in form of the potential functions that control standing posture during exposure to stereoscopic video clips.

  5. Encrypting Digital Camera with Automatic Encryption Key Deletion

    NASA Technical Reports Server (NTRS)

    Oakley, Ernest C. (Inventor)

    2007-01-01

    A digital video camera includes an image sensor capable of producing a frame of video data representing an image viewed by the sensor, an image memory for storing video data such as previously recorded frame data in a video frame location of the image memory, a read circuit for fetching the previously recorded frame data, an encryption circuit having an encryption key input connected to receive the previously recorded frame data from the read circuit as an encryption key, an un-encrypted data input connected to receive the frame of video data from the image sensor and an encrypted data output port, and a write circuit for writing a frame of encrypted video data received from the encrypted data output port of the encryption circuit to the memory and overwriting the video frame location storing the previously recorded frame data.
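
    A software analogue of the key-from-previous-frame idea is sketched below, deriving an AES key from a hash of the prior frame (the `cryptography` package provides the cipher). The patent describes a hardware circuit and does not specify a cipher, so this construction is purely illustrative.

    ```python
    import os
    import hashlib
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt_frame(frame_bytes, prev_frame_bytes):
        """Encrypt the current frame under a key derived from the
        previously recorded frame; the caller then overwrites the
        previous frame's storage, destroying the key material."""
        key = hashlib.sha256(prev_frame_bytes).digest()  # 256-bit frame key
        nonce = os.urandom(16)
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        return nonce + enc.update(frame_bytes) + enc.finalize()
    ```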

  6. Real-time lens distortion correction: speed, accuracy and efficiency

    NASA Astrophysics Data System (ADS)

    Bax, Michael R.; Shahidi, Ramin

    2014-11-01

    Optical lens systems suffer from nonlinear geometrical distortion. Optical imaging applications such as image-enhanced endoscopy and image-based bronchoscope tracking require correction of this distortion for accurate localization, tracking, registration, and measurement of image features. Real-time capability is desirable for interactive systems and live video. The use of a texture-mapping graphics accelerator, which is standard hardware on current motherboard chipsets and add-in video graphics cards, to perform distortion correction is proposed. Mesh generation for image tessellation, an error analysis, and performance results are presented. It is shown that distortion correction using commodity graphics hardware is substantially faster than using the main processor and can be performed at video frame rates (faster than 30 frames per second), and that the polar-based method of mesh generation proposed here is more accurate than a conventional grid-based approach. Using graphics hardware to perform distortion correction is not only fast and accurate but also efficient as it frees the main processor for other tasks, which is an important issue in some real-time applications.
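
    On the CPU side, the precomputed-mesh idea corresponds to building an undistortion lookup table once and remapping every frame; a GPU texture unit accelerates exactly this bilinear resampling. In the OpenCV sketch below, the calibration values are hypothetical stand-ins for a real camera calibration.

    ```python
    import cv2
    import numpy as np

    h, w = 480, 640
    camera_matrix = np.array([[600.0, 0, 320],   # hypothetical intrinsics
                              [0, 600.0, 240],
                              [0, 0, 1]])
    dist_coeffs = np.array([-0.35, 0.12, 0, 0, 0])  # hypothetical distortion

    # Build the per-pixel lookup table once (the "mesh").
    map_x, map_y = cv2.initUndistortRectifyMap(
        camera_matrix, dist_coeffs, None, camera_matrix, (w, h), cv2.CV_32FC1)

    def correct(frame):
        """Apply the cached undistortion map with bilinear interpolation."""
        return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
    ```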

  7. Video-based teleradiology for intraosseous lesions. A receiver operating characteristic analysis.

    PubMed

    Tyndall, D A; Boyd, K S; Matteson, S R; Dove, S B

    1995-11-01

    Private dental practitioners may lack immediate access to off-site expert diagnostic consultants regarding unusual radiographic findings or radiographic quality assurance issues. Teleradiology, a system for transmitting radiographic images, offers a potential solution to this problem. Although much research has been done to evaluate the feasibility and utilization of teleradiology systems in medical imaging, little research on dental applications has been performed. In this investigation, 47 panoramic films with an equal distribution of images with intraosseous jaw lesions and no disease were viewed by a panel of observers using teleradiology and conventional viewing methods. The teleradiology system consisted of an analog video-based system simulating remote radiographic consultation between a general dentist and a dental imaging specialist. Conventional viewing consisted of traditional viewbox methods. Observers were asked to identify the presence or absence of 24 intraosseous lesions and to determine their locations. No statistically significant differences between modalities or among observers were identified at the 0.05 level. The results indicate that viewing intraosseous lesions on video-based panoramic images is comparable to conventional light-box viewing.

  8. "F*ck It! Let's Get to Drinking-Poison our Livers!": a Thematic Analysis of Alcohol Content in Contemporary YouTube MusicVideos.

    PubMed

    Cranwell, Jo; Britton, John; Bains, Manpreet

    2017-02-01

    The purpose of the present study is to describe the portrayal of alcohol content in popular YouTube music videos. We used inductive thematic analysis to explore the lyrics and visual imagery in 49 UK Top 40 songs and music videos previously found to contain alcohol content and watched by many British adolescents aged between 11 and 18 years, and to examine whether branded content contravened alcohol industry advertising codes of practice. The analysis generated three themes. First, alcohol content was associated with sexualised imagery or lyrics and the objectification of women. Second, alcohol was associated with image, lifestyle and sociability. Finally, some videos, including those containing branding, overtly encouraged excessive drinking and drunkenness, with no negative consequences to the drinker. Our results suggest that YouTube music videos promote positive associations with alcohol use. Further, several alcohol companies adopt marketing strategies in the video medium that are entirely inconsistent with their own or others' agreed advertising codes of practice. We conclude that, as a harm reduction measure, policies should change to prevent adolescent exposure to the positive promotion of alcohol and alcohol branding in music videos.

  9. Getting the Bigger Picture With Digital Surveillance

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Through a Space Act Agreement, Diebold, Inc., acquired the exclusive rights to Glenn Research Center's patented video observation technology, originally designed to accelerate video image analysis for various ongoing and future space applications. Diebold implemented the technology into its AccuTrack digital, color video recorder, a state-of-the-art surveillance product that uses motion detection for around-the-clock monitoring. AccuTrack captures digitally signed images and transaction data in real-time. This process replaces the onerous tasks involved in operating a VCR-based surveillance system, and subsequently eliminates the need for central viewing and tape archiving locations altogether. AccuTrack can monitor an entire bank facility, including four automated teller machines, multiple teller lines, and new account areas, all from one central location.

  10. Current and future trends in marine image annotation software

    NASA Astrophysics Data System (ADS)

    Gomes-Pereira, Jose Nuno; Auger, Vincent; Beisiegel, Kolja; Benjamin, Robert; Bergmann, Melanie; Bowden, David; Buhl-Mortensen, Pal; De Leo, Fabio C.; Dionísio, Gisela; Durden, Jennifer M.; Edwards, Luke; Friedman, Ariell; Greinert, Jens; Jacobsen-Stout, Nancy; Lerner, Steve; Leslie, Murray; Nattkemper, Tim W.; Sameoto, Jessica A.; Schoening, Timm; Schouten, Ronald; Seager, James; Singh, Hanumant; Soubigou, Olivier; Tojeira, Inês; van den Beld, Inge; Dias, Frederico; Tempera, Fernando; Santos, Ricardo S.

    2016-12-01

    Given the need to describe, analyze and index large quantities of marine imagery data for exploration and monitoring activities, a range of specialized image annotation tools have been developed worldwide. Image annotation - the process of transposing objects or events represented in a video or still image to the semantic level - may involve human interactions and computer-assisted solutions. Marine image annotation software (MIAS) has enabled over 500 publications to date. We review functioning, application trends and developments by comparing general and advanced features of 23 different tools utilized in underwater image analysis. MIAS requiring human input are essentially graphical user interfaces, with a video player or image browser that recognizes a specific time code or image code, allowing users to log events in a time-stamped (and/or geo-referenced) manner. MIAS differ from similar software in their capability to integrate data associated with video collection, the simplest being the position coordinates of the video recording platform. MIAS have three main characteristics: annotating events in real time, annotating after acquisition, and interacting with a database. These range from simple annotation interfaces to full onboard data management systems with a variety of toolboxes. Advanced packages allow input and display of data from multiple sensors or multiple annotators via intranet or internet. Post hoc human-mediated annotation often includes tools for data display and image analysis, e.g. length, area, image segmentation and point count, and in a few cases the possibility of browsing and editing previous dive logs or analyzing the annotations. Interaction with a database allows the automatic integration of annotations from different surveys, repeated and collaborative annotation of shared datasets, and browsing and querying of data. Progress in the field of automated annotation lies mostly in post-processing, for stable platforms or still images. Integration into available MIAS is currently limited to semi-automated processes of pixel recognition through computer-vision modules that compile expert-based knowledge. Important topics aiding the choice of a specific software are outlined, the ideal software is discussed, and future trends are presented.
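
    A minimal sketch of the core logging function these tools share - time-stamped, optionally geo-referenced annotation events - with illustrative names not taken from any specific MIAS:

    ```python
    # Minimal sketch of the core MIAS function: logging annotation events against
    # a video time code, optionally geo-referenced by the platform's navigation feed.
    # The class and field names are illustrative, not taken from any specific tool.
    import csv
    from dataclasses import dataclass

    @dataclass
    class Annotation:
        timecode_s: float         # position in the video stream
        label: str                # e.g. species or substrate category
        lat: float | None = None  # platform position, if navigation data is linked
        lon: float | None = None

    log: list[Annotation] = []
    log.append(Annotation(timecode_s=1532.4, label="sea pen", lat=38.53, lon=-28.64))

    with open("dive_annotations.csv", "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["timecode_s", "label", "lat", "lon"])
        for a in log:
            w.writerow([a.timecode_s, a.label, a.lat, a.lon])
    ```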

  11. An objective method for a video quality evaluation in a 3DTV service

    NASA Astrophysics Data System (ADS)

    Wilczewski, Grzegorz

    2015-09-01

    This article describes a proposed objective method for 3DTV video quality evaluation, the Compressed Average Image Intensity (CAII) method. Identifying the nodes of the 3DTV service's content chain enables the design of a versatile, objective video quality metric based on an advanced approach to stereoscopic videostream analysis. Insights into the designed metric's mechanisms, as well as an evaluation of its performance under simulated environmental conditions, are discussed. The resulting CAII metric may be used effectively in a variety of service quality assessment applications.
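
    The abstract gives only the metric's name, so the following is a hedged sketch of one plausible reading - tracking the mean intensity of each view decoded from a compressed side-by-side stereoscopic stream - rather than the paper's exact definition:

    ```python
    # Hedged sketch of a "Compressed Average Image Intensity"-style measurement:
    # decode frames from the compressed stereoscopic stream and track the mean
    # intensity of each view. This is one plausible reading of the metric's name;
    # the paper's exact definition may differ.
    import cv2

    def average_intensities(path: str) -> list[tuple[float, float]]:
        cap = cv2.VideoCapture(path)   # e.g. a side-by-side packed 3D stream
        out = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            h, w = gray.shape
            left, right = gray[:, : w // 2], gray[:, w // 2 :]
            out.append((float(left.mean()), float(right.mean())))
        cap.release()
        return out
    ```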

  12. Robotic Vehicle Communications Interoperability

    DTIC Science & Technology

    1988-08-01

    [Abstract consists of fragments of a comparison table of robotic vehicle controls (e.g., cold starter, fire suppression, fording control, fuel control, fuel tank selector, garage toggle, gear selector, hazard warning) and fiber-optic sensor options (sensor switch; video, radar, IR thermal imaging system, image intensifier, laser ranger; forward, stereo and rear video camera selectors; sensor control).]

  13. Mesoscale and severe storms (Mass) data management and analysis system

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.; Karitani, S.; Dickerson, M.

    1984-01-01

    Progress on the Mesoscale and Severe Storms (MASS) data management and analysis system is described. An interactive atmospheric data base management software package to convert four types of data (Sounding, Single Level, Grid, Image) into standard random access formats is implemented and integrated with the MASS AVE80 Series general purpose plotting and graphics display data analysis software package. An interactive analysis and display graphics software package (AVE80) to analyze large volumes of conventional and satellite derived meteorological data is enhanced to provide imaging/color graphics display utilizing color video hardware integrated into the MASS computer system. Local and remote smart-terminal capability is provided by installing APPLE III computer systems within individual scientists' offices and integrating them with the MASS system, thus providing color video, graphics, and character display of the four data types.

  14. Multilocation Video Conference By Optical Fiber

    NASA Astrophysics Data System (ADS)

    Gray, Donald J.

    1982-10-01

    An experimental system that permits interconnection of many offices in a single video conference is described. Video images transmitted to conference participants are selected by the conference chairman and switched by a microprocessor-controlled video switch. Speakers can, at their choice, transmit their own images or images of graphics they wish to display. Users are connected to the Switching Center by optical fiber subscriber loops that carry analog video, digitized telephone, data and signaling. The same system also provides user-selectable distribution of video program and video library material. Experience in the operation of the conference system is discussed.

  15. Texture-adaptive hyperspectral video acquisition system with a spatial light modulator

    NASA Astrophysics Data System (ADS)

    Fang, Xiaojing; Feng, Jiao; Wang, Yongjin

    2014-10-01

    We present a new hybrid camera system based on a spatial light modulator (SLM) to capture texture-adaptive high-resolution hyperspectral video. The hybrid camera system records a hyperspectral video with low spatial resolution using a gray camera and a high-spatial-resolution video using an RGB camera. The hyperspectral video is subsampled by the SLM. The subsampled points can be adaptively selected according to the texture characteristics of the scene by combining digital image analysis and computational processing. In this paper, we propose an adaptive sampling method utilizing texture segmentation and the wavelet transform (WT). We also demonstrate the effectiveness of the sampling pattern produced on the SLM with the proposed method.
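
    A sketch of how wavelet detail energy might drive the adaptive sampling mask, assuming a one-level Haar DWT and an illustrative quantile threshold (the paper's segmentation step is not reproduced here):

    ```python
    # Sketch of texture-adaptive sample selection with a wavelet transform, in the
    # spirit of the method above: regions with strong detail coefficients receive
    # denser sampling. Thresholds and the wavelet choice are illustrative.
    import numpy as np
    import pywt

    def adaptive_mask(gray: np.ndarray, keep_fraction: float = 0.1) -> np.ndarray:
        # One-level 2-D DWT; detail subbands (LH, HL, HH) indicate local texture.
        _, (lh, hl, hh) = pywt.dwt2(gray.astype(float), "haar")
        energy = lh**2 + hl**2 + hh**2
        # Upsample the half-resolution energy map back to the image grid.
        energy = np.kron(energy, np.ones((2, 2)))[: gray.shape[0], : gray.shape[1]]
        thresh = np.quantile(energy, 1.0 - keep_fraction)
        return energy >= thresh    # True where the SLM should subsample densely
    ```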

  16. Automated benthic counting of living and non-living components in Ngedarrak Reef, Palau via subsurface underwater video.

    PubMed

    Marcos, Ma Shiela Angeli; David, Laura; Peñaflor, Eileen; Ticzon, Victor; Soriano, Maricor

    2008-10-01

    We introduce an automated benthic counting system for rapid reef assessment that applies computer vision to subsurface underwater reef video. Video acquisition was executed by lowering a submersible bullet-type camera from a motor boat while moving across the reef area. A GPS and echo sounder were linked to the video recorder to record bathymetry and location points. Analysis of living and non-living components was implemented through image color and texture feature extraction from the reef video frames and classification via Linear Discriminant Analysis. Compared to common rapid reef assessment protocols, our system can perform fine-scale data acquisition and processing in one day. Reef video was acquired in Ngedarrak Reef, Koror, Republic of Palau. Overall classification accuracy ranges from 60% to 77% at depths of 1 to 3 m. The development of an automated rapid reef classification system is most promising for reef studies that need fast and frequent data acquisition of percent cover of living and non-living components.
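
    A sketch of the feature-extraction and LDA classification stage, with generic color and texture features standing in for the paper's exact descriptors:

    ```python
    # Sketch of per-patch color/texture features classified with Linear
    # Discriminant Analysis, as in the system above. The features here are
    # generic stand-ins, not the paper's descriptors.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def patch_features(patch: np.ndarray) -> np.ndarray:
        """patch: H x W x 3 RGB block cut from a reef video frame."""
        color = patch.reshape(-1, 3).mean(axis=0)          # mean R, G, B
        gray = patch.mean(axis=2)
        texture = [gray.std(), float(np.abs(np.diff(gray, axis=1)).mean())]
        return np.concatenate([color, texture])

    # Synthetic stand-in data; in practice the patches come from reef video frames.
    rng = np.random.default_rng(0)
    patches = [rng.integers(0, 256, (16, 16, 3)).astype(float) for _ in range(20)]
    labels = ["living"] * 10 + ["non-living"] * 10

    X = np.array([patch_features(p) for p in patches])
    clf = LinearDiscriminantAnalysis().fit(X, labels)
    print(clf.predict([patch_features(patches[0])]))
    ```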

  17. Computer-aided analysis for the Mechanics of Granular Materials (MGM) experiment

    NASA Technical Reports Server (NTRS)

    Parker, Joey K.

    1986-01-01

    The Mechanics of Granular Materials (MGM) program is planned to provide experimental determinations of the mechanics of granular materials under very low gravity conditions. The initial experiments will use small glass beads as the granular material, and precise tracking of individual beads during the test is desired. Real-time video images of the experimental specimen were taken with a television camera and subsequently digitized by a frame grabber installed in a microcomputer. Easily identified red tracer beads were randomly scattered throughout the test specimen. A set of Pascal programs was written for processing and analyzing the digitized images. Filtering the image with Laplacian, dilation, and blurring filters, followed by a threshold function, produced a binary (black-on-white) image which clearly identified the red beads. The centroids and areas for each bead were then determined. Analyzing a series of the images determined individual red bead displacements throughout the experiment. The system can provide displacement accuracies on the order of 0.5 to 1 pixel if the image is taken directly from the video camera. Digitizing an image from a video cassette recorder introduces an additional repeatability error of 0.5 to 1 pixel. Other programs were written to provide hardcopy prints of the digitized images on a dot-matrix printer.
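
    A modern equivalent of that Pascal pipeline, sketched with OpenCV; filter sizes and the threshold value are illustrative:

    ```python
    # Sketch of the bead-detection pipeline described above: filter, threshold to
    # a binary image, then take centroids and areas of the connected components.
    import cv2
    import numpy as np

    def find_beads(gray: np.ndarray):
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)
        lap = cv2.Laplacian(blurred, cv2.CV_8U, ksize=3)
        dilated = cv2.dilate(lap, np.ones((3, 3), np.uint8))
        _, binary = cv2.threshold(dilated, 40, 255, cv2.THRESH_BINARY)
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
        # Skip label 0 (background); return (x, y) centroid and pixel area per bead.
        return [(tuple(centroids[i]), int(stats[i, cv2.CC_STAT_AREA]))
                for i in range(1, n)]
    ```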

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shoaf, S.; APS Engineering Support Division

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.

  19. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The Ames digital image velocimetry technology has been incorporated in a commercially available image processing software package that allows motion measurement of images on a PC alone. The software, manufactured by Werner Frei Associates, is IMAGELAB FFT. IMAGELAB FFT is a general purpose image processing system with a variety of other applications, among them image enhancement of fingerprints and use by banks and law enforcement agencies for analysis of video recorded during robberies.

  20. Analysis of physiological responses associated with emotional changes induced by viewing video images of dental treatments.

    PubMed

    Sekiya, Taki; Miwa, Zenzo; Tsuchihashi, Natsumi; Uehara, Naoko; Sugimoto, Kumiko

    2015-03-30

    Since understanding the emotional changes induced by dental treatments is important for dentists in providing safe and comfortable treatment, we analyzed physiological responses during the viewing of video images of dental treatments to search for appropriate objective indices reflecting emotional changes. Fifteen healthy young adult subjects voluntarily participated in the present study. Electrocardiogram (ECG), electroencephalogram (EEG) and corrugator muscle electromyogram (EMG) were recorded, and changes induced by viewing videos of dental treatments were analyzed. The subjective discomfort level was acquired by the Visual Analog Scale method. Analyses of autonomic nervous activities from the ECG and four emotional factors (anger/stress, joy/satisfaction, sadness/depression and relaxation) from the EEG demonstrated that increases in sympathetic nervous activity, reflecting increased stress, and decreases in relaxation level were induced by the videos of infiltration anesthesia and cavity excavation, but not intraoral examination. The corrugator muscle activity was increased by all three images regardless of video content. The subjective discomfort while watching infiltration anesthesia and cavity excavation was higher than during intraoral examination, showing that sympathetic activities and the relaxation factor of emotion changed in a manner consistent with subjective emotional changes. These results suggest that measurement of autonomic nervous activities estimated from the ECG and emotional factors analyzed from the EEG is useful for objective evaluation of subjective emotion.

  1. Computerized tomography using video recorded fluoroscopic images

    NASA Technical Reports Server (NTRS)

    Kak, A. C.; Jakowatz, C. V., Jr.; Baily, N. A.; Keller, R. A.

    1975-01-01

    A computerized tomographic imaging system is examined which employs video-recorded fluoroscopic images as input data. By hooking the video recorder to a digital computer through a suitable interface, such a system permits very rapid construction of tomograms.

  2. Psychophysical Comparison Of A Video Display System To Film By Using Bone Fracture Images

    NASA Astrophysics Data System (ADS)

    Seeley, George W.; Stempski, Mark; Roehrig, Hans; Nudelman, Sol; Capp, M. P.

    1982-11-01

    This study investigated the possibility of using a video display system instead of film for radiological diagnosis. Also investigated were the relationships between characteristics of the system and the observer's accuracy level. Radiologists were used as observers. Thirty-six clinical bone fractures were separated into two matched sets of equal difficulty. The difficulty parameters and ratings were defined by a panel of expert bone radiologists at the Arizona Health Sciences Center, Radiology Department. These two sets of fracture images were then matched with verifiably normal images using parameters such as film type, angle of view, size, portion of anatomy, the film's density range, and the patient's age and sex. The two sets of images were then displayed, using a counterbalanced design, to each of the participating radiologists for diagnosis. Whenever a response was given to a video image, the radiologist used enhancement controls to "window in" on the grey levels of interest. During the TV phase, the radiologist was required to record the settings of the calibrated controls of the image enhancer during interpretation. At no time did any single radiologist see the same film in both modes. The study was designed so that a standard analysis of variance would show the effects of viewing mode (film vs TV), the effects due to stimulus set, and any interactions with observers. A signal detection analysis of observer performance was also performed. Results indicate that the TV display system is almost as good as the view box display; an average of only two more errors were made on the TV display. The difference between the systems has been traced to four observers who had poor accuracy on a small number of films viewed on the TV display. This information is now being correlated with the video system's signal-to-noise ratio (SNR), signal transfer function (STF), and resolution measurements, to obtain information on the basic display and enhancement requirements for a video-based radiologic system. Due to time constraints the results are not included here. The complete results of this study will be reported at the conference.

  3. A semi-automated software tool to study treadmill locomotion in the rat: from experiment videos to statistical gait analysis.

    PubMed

    Gravel, P; Tremblay, M; Leblond, H; Rossignol, S; de Guise, J A

    2010-07-15

    A computer-aided method for the tracking of morphological markers in fluoroscopic images of a rat walking on a treadmill is presented and validated. The markers correspond to bone articulations in a hind leg and are used to define the hip, knee, ankle and metatarsophalangeal joints. The method allows a user to identify, using a computer mouse, about 20% of the marker positions in a video and interpolate their trajectories from frame-to-frame. This results in a seven-fold speed improvement in detecting markers. This also eliminates confusion problems due to legs crossing and blurred images. The video images are corrected for geometric distortions from the X-ray camera, wavelet denoised, to preserve the sharpness of minute bone structures, and contrast enhanced. From those images, the marker positions across video frames are extracted, corrected for rat "solid body" motions on the treadmill, and used to compute the positional and angular gait patterns. Robust Bootstrap estimates of those gait patterns and their prediction and confidence bands are finally generated. The gait patterns are invaluable tools to study the locomotion of healthy animals or the complex process of locomotion recovery in animals with injuries. The method could, in principle, be adapted to analyze the locomotion of other animals as long as a fluoroscopic imager and a treadmill are available. Copyright 2010 Elsevier B.V. All rights reserved.
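
    A sketch of the interpolation step; cubic splines are an assumption, since the abstract does not name the interpolant:

    ```python
    # Sketch of the interpolation step: a user clicks marker positions in ~20% of
    # frames, and the remaining positions are interpolated along the time axis.
    import numpy as np
    from scipy.interpolate import CubicSpline

    clicked_frames = np.array([0, 5, 10, 15, 20])          # frames with manual input
    clicked_xy = np.array([[12.0, 40.1], [14.2, 41.0],     # marker (x, y) per click
                           [17.9, 42.3], [21.5, 41.8], [24.0, 40.9]])

    spline = CubicSpline(clicked_frames, clicked_xy, axis=0)
    all_frames = np.arange(0, 21)
    trajectory = spline(all_frames)                        # (x, y) for every frame
    ```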

  4. A system for endobronchial video analysis

    NASA Astrophysics Data System (ADS)

    Byrnes, Patrick D.; Higgins, William E.

    2017-03-01

    Image-guided bronchoscopy is a critical component in the treatment of lung cancer and other pulmonary disorders. During bronchoscopy, a high-resolution endobronchial video stream facilitates guidance through the lungs and allows for visual inspection of a patient's airway mucosal surfaces. Despite the detailed information it contains, little effort has been made to incorporate recorded video into the clinical workflow. Follow-up procedures often required in cancer assessment or asthma treatment could significantly benefit from effectively parsed and summarized video. Tracking diagnostic regions of interest (ROIs) could potentially better equip physicians to detect early airway-wall cancer or improve asthma treatments, such as bronchial thermoplasty. To address this need, we have developed a system for the postoperative analysis of recorded endobronchial video. The system first parses an input video stream into endoscopic shots, derives motion information, and selects salient representative key frames. Next, a semi-automatic method for CT-video registration creates data linkages between a CT-derived airway-tree model and the input video. These data linkages then enable the construction of a CT-video chest model composed of a bronchoscopy path history (BPH) - defining all airway locations visited during a procedure - and texture-mapping information for rendering registered video frames onto the airway-tree model. A suite of analysis tools is included to visualize and manipulate the extracted data. Video browsing and retrieval is facilitated through a video table of contents (TOC) and a search query interface. The system provides a variety of operational modes and additional functionality, including the ability to define regions of interest. We demonstrate the potential of our system using two human case study examples.
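
    A hedged sketch of key-frame selection by frame-difference salience; the scoring rule is illustrative, not the system's actual criteria:

    ```python
    # Sketch of key-frame selection from an endoscopic shot: frames with low
    # inter-frame difference suggest a stable, representative view.
    import cv2
    import numpy as np

    def select_keyframes(path: str, max_keys: int = 10):
        cap = cv2.VideoCapture(path)
        scores, frames = [], []
        prev = None
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev is not None:
                scores.append(float(np.mean(cv2.absdiff(gray, prev))))
                frames.append(frame)
            prev = gray
        cap.release()
        order = np.argsort(scores)        # steadiest frames first
        return [frames[i] for i in order[:max_keys]]
    ```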

  5. Picturing Video

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Video Pics is a software program that generates high-quality photos from video. The software was developed under an SBIR contract with Marshall Space Flight Center by Redhawk Vision, Inc.--a subsidiary of Irvine Sensors Corporation. Video Pics takes information content from multiple frames of video and enhances the resolution of a selected frame. The resulting image has enhanced sharpness and clarity like that of a 35 mm photo. The images are generated as digital files and are compatible with image editing software.

  6. Analysis of Preoperative Airway Examination with the CMOS Video Rhino-laryngoscope.

    PubMed

    Tsukamoto, Masanori; Hitosugi, Takashi; Yokoyama, Takeshi

    2017-05-01

    Endoscopy is one of the most useful clinical techniques in difficult airway management. Compared with the fiberoptic endoscope, this compact device is easy to operate and can provide a clear image. In this study, we investigated its usefulness in preoperative endoscopic airway examination. Patients undergoing oral maxillofacial surgery were enrolled. We performed preoperative airway examination with an electronic endoscope (the CMOS video rhino-laryngoscope, KARL STORZ Endoscopy Japan, Tokyo). The system is composed of a videoendoscope, a compact video processor and a video recorder. The endoscope has a small color complementary metal-oxide-semiconductor (CMOS) image sensor chip built into its tip, whose outer diameter is 3.7 mm. In this study, the electronic endoscope was used for preoperative airway examination in 7 patients. The examination was performed successfully in all patients except one, who experienced symptoms such as nausea and vomiting during the examination. We could perform preoperative airway examination with excellent visualization and convenient recording of video sequence images with the CMOS video rhino-laryngoscope. It might be an especially useful device for patients with difficult airways.

  7. Snapshot spectral and polarimetric imaging; target identification with multispectral video

    NASA Astrophysics Data System (ADS)

    Bartlett, Brent D.; Rodriguez, Mikel D.

    2013-05-01

    As the number of pixels continues to grow in consumer and scientific imaging devices, it has become feasible to collect the incident light field. In this paper, an imaging device built around light field imaging is used to collect multispectral and polarimetric imagery in a snapshot fashion. The sensor is described and a video data set is shown highlighting the advantage of snapshot spectral imaging. Several novel computer vision approaches are applied to the video cubes to perform scene characterization and target identification. It is shown how the addition of spectral and polarimetric data to the video stream allows for multi-target identification and tracking not possible with traditional RGB video collection.

  8. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies to catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical directions as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise such as snow. VISAR could also have applications in medical and meteorological imaging. It could steady ultrasound images, which are infamous for their grainy, blurred quality, and it would be especially useful for tornado footage, tracking whirling objects and helping to determine a tornado's wind speed. This image shows two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.

  9. From Video to Photo

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Ever wonder whether a still shot from a home video could serve as a "picture perfect" photograph worthy of being framed and proudly displayed on the mantle? Wonder no more. A critical imaging code used to enhance video footage taken from spaceborne imaging instruments is now available within a portable photography tool capable of producing an optimized, high-resolution image from multiple video frames.

  10. 13 point video tape quality guidelines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaunt, R.

    1997-05-01

    Until high-definition television (ATV) arrives, in the U.S. we must still contend with the National Television Systems Committee (NTSC) video standard (or PAL or SECAM, depending on your country). NTSC, a 40-year-old standard designed for transmission of color video camera images over a small bandwidth, is not well suited to the sharp, full-color images that today's computers are capable of producing. PAL and SECAM also suffer from many of NTSC's problems, to varying degrees. Video professionals, when working with computer graphic (CG) images, use two monitors: a computer monitor for producing CGs and an NTSC monitor to view how a CG will look on video. More often than not, the NTSC image will differ significantly from the CG image, and outputting it to NTSC as an artist works enables him or her to see the image as others will see it. Below are thirteen guidelines designed to increase the quality of computer graphics recorded onto video tape. Viewing your work in NTSC and attempting to follow these tips will enable you to create higher quality videos. No video is perfect, so don't expect to abide by every guideline every time.

  11. Vascularization of bioprosthetic valve material

    NASA Astrophysics Data System (ADS)

    Boughner, Derek R.; Dunmore-Buyze, Joy; Heenatigala, Dino; Lohmann, Tara; Ellis, Chris G.

    1999-04-01

    Cell membrane remnants represent a probable nucleation site for calcium deposition in bioprosthetic heart valves. Calcification is a primary failure mode of both bovine pericardial and porcine aortic heterograft bioprostheses, but the nonuniform pattern of calcium distribution within the tissue remains unexplained. Searching for a likely cellular source, we considered the possibility of a previously overlooked small blood vessel network. Using a videomicroscopy technique, we examined 5 matched pairs of porcine aortic and pulmonary valves and 14 samples from 6 bovine pericardia. Tissue was placed on a Leitz Metallux microscope and transilluminated with a 75 watt mercury lamp. Video images were obtained using a silicon intensified target camera equipped with a 431 nm interference filter to maximize the contrast of red cells trapped in a capillary microvasculature. Video images were recorded for analysis on a Silicon Graphics Image Analysis workstation equipped with a video frame grabber. For porcine valves, the technique demonstrated a vascular bed in the central spongiosa at cusp bases with vessel sizes from 6-80 micrometers. Bovine pericardium differed, with a more uniform distribution of 7-100 micrometer vessels residing centrally. Thus, small blood vessel endothelial cells provide a potential explanation for patterns of bioprosthetic calcification.

  12. High frequency mode shapes characterisation using Digital Image Correlation and phase-based motion magnification

    NASA Astrophysics Data System (ADS)

    Molina-Viedma, A. J.; Felipe-Sesé, L.; López-Alba, E.; Díaz, F.

    2018-03-01

    High-speed video cameras provide valuable information in dynamic events, and mechanical characterisation has been improved by interpreting behaviour in slow-motion visualisations. In modal analysis, videos contribute to the evaluation of mode shapes but, generally, the motion is too subtle to be interpreted. In recent years, image-processing algorithms have been developed to generate a magnified version of the motion that can be interpreted by the naked eye. Nevertheless, optical techniques such as Digital Image Correlation (DIC) are able to provide quantitative information about the motion with higher sensitivity than the naked eye. For vibration analysis, mode shape characterisation is one of the most valuable DIC capabilities: full-field measurements provide higher spatial density than classical instrumentation or Scanning Laser Doppler Vibrometry. However, the accuracy of DIC is reduced at high frequencies as a consequence of the low displacements, and hence it is habitually employed in the low-frequency spectrum. In the current work, the combination of DIC and motion magnification is explored in order to provide numerical information in magnified videos and perform DIC mode shape characterisation at unprecedentedly high frequencies by increasing the amplitude of displacements.
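
    For illustration, a sketch of the simpler intensity-based (linear Eulerian) variant of motion magnification; the work above uses the more robust phase-based method, so this only conveys the amplify-the-band idea:

    ```python
    # Illustrative sketch of Eulerian motion magnification: temporally band-pass
    # each pixel's intensity around the frequency of interest and add an
    # amplified copy back to the video. Parameters are illustrative.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def magnify(frames: np.ndarray, fps: float, f_lo: float, f_hi: float,
                alpha: float = 20.0) -> np.ndarray:
        """frames: T x H x W grayscale stack; returns the magnified stack."""
        b, a = butter(2, [f_lo, f_hi], btype="bandpass", fs=fps)
        band = filtfilt(b, a, frames.astype(float), axis=0)  # per-pixel filtering
        return np.clip(frames + alpha * band, 0, 255)

    # Usage: magnify(video_stack, fps=2000, f_lo=90, f_hi=110) to reveal a
    # mode near 100 Hz (hypothetical numbers).
    ```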

  13. A system for beach video-monitoring: Beachkeeper plus

    NASA Astrophysics Data System (ADS)

    Brignone, Massimo; Schiaffino, Chiara F.; Isla, Federico I.; Ferrari, Marco

    2012-12-01

    A suitable knowledge of coastal systems, of their morphodynamic characteristics and of their response to storm events and man-made structures is essential for littoral conservation and management. Nowadays, webcams represent a useful device for obtaining information from beaches. Video-monitoring techniques are generally site-specific, and software that works with any image acquisition system is rare. This work therefore presents the theory and applications of an experimental video-monitoring package: Beachkeeper plus, a freeware non-profit software that can be employed and redistributed without modification. A license file is provided inside the software package and in the user guide. Beachkeeper plus is based on Matlab® and can be used for the analysis of images and photos coming from any kind of acquisition system (webcams, digital cameras or images downloaded from the internet), without any a-priori information or laboratory study of the acquisition system itself. It could therefore become a useful tool for beach planning. Through a simple guided interface, images can be analyzed by performing georeferencing, rectification, averaging and variance computation. The software was first operated in Pietra Ligure (Italy), using images from a tourist webcam, and in Mar del Plata (Argentina), using images from a digital camera. In both cases its reliability under different geomorphologic and morphodynamic conditions was confirmed by the good quality of the images obtained after georeferencing, rectification and averaging.
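
    A sketch of the averaging and variance products such systems compute: a pixel-wise mean ("timex") image smooths out individual waves and reveals persistent features such as the shoreline, while the variance image highlights wave-breaking regions.

    ```python
    # Sketch of time-exposure (mean) and variance images from a beach video.
    import cv2
    import numpy as np

    def timex_and_variance(path: str):
        cap = cv2.VideoCapture(path)
        acc, acc2, n = None, None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            f = frame.astype(np.float64)
            acc = f if acc is None else acc + f
            acc2 = f**2 if acc2 is None else acc2 + f**2
            n += 1
        cap.release()
        mean = acc / n
        var = acc2 / n - mean**2          # per-pixel temporal variance
        return mean.astype(np.uint8), var
    ```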

  14. Otitis Media Diagnosis for Developing Countries Using Tympanic Membrane Image-Analysis.

    PubMed

    Myburgh, Hermanus C; van Zijl, Willemien H; Swanepoel, DeWet; Hellström, Sten; Laurent, Claude

    2016-03-01

    Otitis media is one of the most common childhood diseases worldwide, but because of a lack of doctors and health personnel in developing countries it is often misdiagnosed or not diagnosed at all. This may lead to serious and life-threatening complications. There is thus a need for an automated, computer-based image-analysis system that could assist in making accurate otitis media diagnoses anywhere. A method for automated diagnosis of otitis media is proposed. The method uses image-processing techniques to classify otitis media. The system is trained using high-quality pre-assessed images of tympanic membranes, captured by digital video-otoscopes, and classifies undiagnosed images into five otitis media categories based on predefined signs. Several verification tests analyzed the classification capability of the method. An accuracy of 80.6% was achieved for images taken with commercial video-otoscopes, while an accuracy of 78.7% was achieved for images captured on-site with a low-cost custom-made video-otoscope. The high accuracy of the proposed otitis media classification system compares well with the classification accuracy of general practitioners and pediatricians (~64% to 80%) using traditional otoscopes, and therefore holds promise for the future in making automated diagnosis of otitis media in medically underserved populations.

  15. Otitis Media Diagnosis for Developing Countries Using Tympanic Membrane Image-Analysis

    PubMed Central

    Myburgh, Hermanus C.; van Zijl, Willemien H.; Swanepoel, DeWet; Hellström, Sten; Laurent, Claude

    2016-01-01

    Background Otitis media is one of the most common childhood diseases worldwide, but because of a lack of doctors and health personnel in developing countries it is often misdiagnosed or not diagnosed at all. This may lead to serious and life-threatening complications. There is thus a need for an automated, computer-based image-analysis system that could assist in making accurate otitis media diagnoses anywhere. Methods A method for automated diagnosis of otitis media is proposed. The method uses image-processing techniques to classify otitis media. The system is trained using high-quality pre-assessed images of tympanic membranes, captured by digital video-otoscopes, and classifies undiagnosed images into five otitis media categories based on predefined signs. Several verification tests analyzed the classification capability of the method. Findings An accuracy of 80.6% was achieved for images taken with commercial video-otoscopes, while an accuracy of 78.7% was achieved for images captured on-site with a low-cost custom-made video-otoscope. Interpretation The high accuracy of the proposed otitis media classification system compares well with the classification accuracy of general practitioners and pediatricians (~64% to 80%) using traditional otoscopes, and therefore holds promise for the future in making automated diagnosis of otitis media in medically underserved populations. PMID:27077122

  16. Motmot, an open-source toolkit for realtime video acquisition and analysis.

    PubMed

    Straw, Andrew D; Dickinson, Michael H

    2009-07-22

    Video cameras sense passively from a distance, offer a rich information stream, and provide intuitively meaningful raw data. Camera-based imaging has thus proven critical for many advances in neuroscience and biology, with applications ranging from cellular imaging of fluorescent dyes to tracking of whole-animal behavior at ecologically relevant spatial scales. Here we present 'Motmot': an open-source software suite for acquiring, displaying, saving, and analyzing digital video in real-time. At the highest level, Motmot is written in the Python computer language. The large amounts of data produced by digital cameras are handled by low-level, optimized functions, usually written in C. This high-level/low-level partitioning and use of select external libraries allow Motmot, with only modest complexity, to perform well as a core technology for many high-performance imaging tasks. In its current form, Motmot allows for: (1) image acquisition from a variety of camera interfaces (package motmot.cam_iface), (2) the display of these images with minimal latency and computer resources using wxPython and OpenGL (package motmot.wxglvideo), (3) saving images with no compression in a single-pass, low-CPU-use format (package motmot.FlyMovieFormat), (4) a pluggable framework for custom analysis of images in realtime and (5) firmware for an inexpensive USB device to synchronize image acquisition across multiple cameras, with analog input, or with other hardware devices (package motmot.fview_ext_trig). These capabilities are brought together in a graphical user interface, called 'FView', allowing an end user to easily view and save digital video without writing any code. One plugin for FView, 'FlyTrax', which tracks the movement of fruit flies in real-time, is included with Motmot, and is described to illustrate the capabilities of FView. Motmot enables realtime image processing and display using the Python computer language. In addition to the provided complete applications, the architecture allows the user to write relatively simple plugins, which can accomplish a variety of computer vision tasks and be integrated within larger software systems. The software is available at http://code.astraw.com/projects/motmot.
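
    A generic sketch of the plugin pattern the architecture describes; this is not Motmot's actual API, and all names here are illustrative:

    ```python
    # Generic sketch of a realtime frame-processing plugin loop, in the spirit of
    # the architecture above. NOT Motmot's actual API; names are illustrative.
    import numpy as np

    class Plugin:
        def process_frame(self, frame: np.ndarray, timestamp: float) -> None:
            raise NotImplementedError

    class MeanBrightnessLogger(Plugin):
        def process_frame(self, frame, timestamp):
            print(f"{timestamp:.3f}s mean={frame.mean():.1f}")

    def run(frame_source, plugins):
        # frame_source yields (timestamp, frame), e.g. from a camera driver.
        for timestamp, frame in frame_source:
            for p in plugins:
                p.process_frame(frame, timestamp)

    # Usage with a synthetic 30 fps source:
    src = ((t / 30.0, np.random.randint(0, 256, (480, 640), dtype=np.uint8))
           for t in range(90))
    run(src, [MeanBrightnessLogger()])
    ```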

  17. Blinded evaluation of the effects of high definition and magnification on perceived image quality in laryngeal imaging.

    PubMed

    Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M

    2006-02-01

    Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems in perceived image quality when a small monitor is used. Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.

  18. A Kinect based sign language recognition system using spatio-temporal features

    NASA Astrophysics Data System (ADS)

    Memiş, Abbas; Albayrak, Songül

    2013-12-01

    This paper presents a sign language recognition system that uses spatio-temporal features on RGB video images and depth maps for dynamic gestures of Turkish Sign Language. The proposed system uses a motion difference and accumulation approach for temporal gesture analysis. The motion accumulation method, an effective method for temporal-domain analysis of gestures, produces an accumulated motion image by combining the differences of successive video frames. A 2D Discrete Cosine Transform (DCT) is then applied to the accumulated motion images, transforming the temporal-domain features into the spatial domain. These processes are performed on both RGB images and depth maps separately. The DCT coefficients that represent sign gestures are picked up via zigzag scanning and feature vectors are generated. To recognize sign gestures, a K-Nearest Neighbor classifier with Manhattan distance is employed. Performance of the proposed sign language recognition system is evaluated on a sign database that contains 1002 isolated dynamic signs belonging to 111 words of Turkish Sign Language (TSL) in three different categories. The proposed sign language recognition system achieves promising success rates.
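
    A sketch of this pipeline under stated simplifications (an anti-diagonal ordering approximates the zigzag scan, and the data here is synthetic):

    ```python
    # Sketch of the recognition pipeline: accumulate frame differences into a
    # motion image, take its 2-D DCT, keep low-frequency coefficients via a
    # zigzag-style scan, and classify with Manhattan-distance KNN.
    import numpy as np
    from scipy.fft import dctn
    from sklearn.neighbors import KNeighborsClassifier

    def motion_accumulation(frames: np.ndarray) -> np.ndarray:
        """frames: T x H x W grayscale stack -> accumulated motion image."""
        return np.abs(np.diff(frames.astype(float), axis=0)).sum(axis=0)

    def zigzag_features(img: np.ndarray, k: int = 64) -> np.ndarray:
        c = dctn(img, norm="ortho")
        # Order coefficients by anti-diagonal (a simple zigzag approximation).
        idx = np.add.outer(np.arange(c.shape[0]), np.arange(c.shape[1]))
        return c.flatten()[np.argsort(idx.flatten(), kind="stable")][:k]

    rng = np.random.default_rng(1)
    X = [zigzag_features(motion_accumulation(rng.random((20, 64, 64))))
         for _ in range(10)]
    y = ["sign_a"] * 5 + ["sign_b"] * 5
    knn = KNeighborsClassifier(n_neighbors=1, metric="manhattan").fit(X, y)
    print(knn.predict([X[0]]))
    ```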

  19. Efficient management and promotion of utilization of the video information acquired by observation

    NASA Astrophysics Data System (ADS)

    Kitayama, T.; Tanaka, K.; Shimabukuro, R.; Hase, H.; Ogido, M.; Nakamura, M.; Saito, H.; Hanafusa, Y.; Sonoda, A.

    2012-12-01

    In the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), deep-sea videos have been recorded during research dives by JAMSTEC submersibles since 1982, and the resulting archive - more than 4,000 dives (ca. 24,700 tapes) to date - has been open to the public via the Internet since 2002. The deep-sea videos are important because they document the temporal variation of deep-sea environments, which are difficult to investigate and sample, and the growth of organisms in extreme environments. Moreover, advances in video technology enable sophisticated analysis of survey imagery, so the utility of these images for understanding the deep-sea environment is especially high. JAMSTEC's Data Research Center for Marine-Earth Sciences (DrC) collects the videos obtained during JAMSTEC dive surveys and performs preservation, quality control, and public release. Managing this huge volume of video information efficiently and promoting its use are our major tasks, and the present measures toward them are introduced here. Videos recorded onboard on tape or various other media are collected, then backed up and encoded to prevent loss and degradation. Because the video files are large, we use the Linear Tape File System (LTFS), which has recently attracted attention in image management: it costs less than conventional disk backup, can preserve video data for many years, and offers file handling no different from a disk. Transcoded videos for distribution are archived on disk storage and can be offered according to use. To promote utilization, the video presentation system was completely renewed in November 2011 as the "JAMSTEC E-library of Deep Sea Images (http://www.godac.jamstec.go.jp/jedi/)". The new system provides various search modes (e.g., search by map, tree, icon, or keyword), and video annotation is possible through the same interface, improving usability for both users and managers. Moreover, in the "Biological Information System for Marine Life: BISMaL (http://www.godac.jamstec.go.jp/bismal/e/index.html)", a data system for biodiversity information, particularly biogeographic data of marine organisms, the deep-sea videos and their positional information are used for visualizing organism distributions and compiling species lists of deep-sea organisms, aiming to contribute to the understanding of biodiversity. In the future, we aim to improve the accuracy of the information attached to the videos, by supporting comment registration through automatic image recognition and by developing an onboard comment-registration tool, and thereby to offer higher-quality information.

  20. An approach to integrate the human vision psychology and perception knowledge into image enhancement

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Huang, Xifeng; Ping, Jiang

    2009-07-01

    Image enhancement is an important image preprocessing technology, especially when the image is captured under poor imaging conditions or when dealing with high-bit-depth images. The beneficiary of image enhancement may be either a human observer or a computer vision process performing some kind of higher-level image analysis, such as target detection or scene understanding. One of the main objectives of image enhancement is to obtain a high-dynamic-range, high-contrast image for human perception or interpretation. It is therefore natural to integrate empirical or statistical knowledge of human vision psychology and perception into image enhancement. This knowledge holds that humans' perception of, and response to, an intensity fluctuation δu of a visual signal are weighted by the background stimulus u, rather than being uniform. Three main laws describe this phenomenon in psychology and psychophysics: Weber's law (the just-noticeable difference is proportional to the background, i.e. δu/u is approximately constant), the Weber-Fechner law (perceived magnitude grows logarithmically with stimulus intensity), and Stevens's law (perceived magnitude follows a power function of stimulus intensity). This paper integrates these three laws of human vision psychology and perception into a popular image enhancement algorithm named Adaptive Plateau Equalization (APE). Experiments were performed on high-bit-depth star images captured in night scenes and on infrared images, both static images and video streams. For the jitter problem in video streams, the algorithm uses the difference between the current frame's plateau value and the previous frame's plateau value to correct the current frame's plateau value. To account for random noise, the pixel-value mapping process depends not only on the current pixel but also on the pixels in a window around it, usually 3×3. The results of the improved algorithms were evaluated by entropy analysis and visual perception analysis. The experiments showed that the improved APE algorithms raised image quality, made the target and the surrounding assistant targets easy to identify, and did not amplify noise much. For low-quality images, these improved algorithms increase the information entropy and improve the aesthetic quality of the image and the video stream, while for high-quality images they do not degrade image quality.
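
    A sketch of plateau (clipped-histogram) equalization, the core operation of the APE family; the plateau rule used here is an illustrative choice, not the paper's adaptive rule:

    ```python
    # Sketch of plateau histogram equalization: the histogram is clipped at a
    # plateau value before the cumulative mapping, which limits over-amplification
    # of dominant background bins (consistent with the Weber's-law intuition that
    # response scales with relative, not absolute, intensity changes).
    import numpy as np

    def plateau_equalize(img: np.ndarray, bins: int = 65536) -> np.ndarray:
        hist, _ = np.histogram(img, bins=bins, range=(0, bins))
        plateau = max(1.0, np.mean(hist[hist > 0]))     # illustrative plateau rule
        clipped = np.minimum(hist, plateau)
        cdf = np.cumsum(clipped)
        lut = (cdf / cdf[-1] * 255.0).astype(np.uint8)  # map high-bit input to 8 bits
        return lut[np.clip(img, 0, bins - 1)]

    # Usage on a synthetic 16-bit "star field":
    img16 = (np.random.default_rng(0).random((256, 256)) ** 8 * 65535).astype(np.uint16)
    out8 = plateau_equalize(img16)
    ```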

  1. Evaluation of privacy in high dynamic range video sequences

    NASA Astrophysics Data System (ADS)

    Řeřábek, Martin; Yuan, Lin; Krasula, Lukáš; Korshunov, Pavel; Fliegel, Karel; Ebrahimi, Touradj

    2014-09-01

    The ability of high dynamic range (HDR) imaging to capture details in environments with high contrast has a significant impact on privacy in video surveillance. However, the extent to which HDR imaging affects privacy, when compared to typical low dynamic range (LDR) imaging, is neither well studied nor well understood. To achieve such an objective, a suitable dataset of images and video sequences is needed. Therefore, we have created a publicly available dataset of HDR video for privacy evaluation, PEViD-HDR, which is an HDR extension of the existing Privacy Evaluation Video Dataset (PEViD). The PEViD-HDR video dataset can help in the evaluation of privacy protection tools, as well as in showing the importance of HDR imaging in video surveillance applications and its influence on the privacy-intelligibility trade-off. We conducted a preliminary subjective experiment demonstrating the usability of the created dataset for evaluation of privacy issues in video. The results confirm that a tone-mapped HDR video contains more privacy-sensitive information and details compared to a typical LDR video.

  2. Teleradiology Via The Naval Remote Medical Diagnosis System (RMDS)

    NASA Astrophysics Data System (ADS)

    Rasmussen, Will; Stevens, Ilya; Gerber, F. H.; Kuhlman, Jayne A.

    1982-01-01

    Testing was conducted to obtain qualitative and quantitative (statistical) data on radiology performance using the Remote Medical Diagnosis System (RMDS) Advanced Development Models (ADMs)1. Based upon data collected during testing with professional radiologists, this analysis addresses the clinical utility of radiographic images transferred through six possible RMDS transmission modes. These radiographs were also viewed under closed-circuit television (CCTV) and lightbox conditions to provide a basis for comparison. The analysis indicates that the RMDS ADM terminals (with a system video resolution of 525 x 256 x 6) would provide satisfactory radiographic images for radiology consultations in emergency cases with gross pathological disorders. However, in cases involving more subtle findings, a system video resolution of 525 x 512 x 8 would be preferable.

  3. Analysis of Video-Based Microscopic Particle Trajectories Using Kalman Filtering

    PubMed Central

    Wu, Pei-Hsun; Agarwal, Ashutosh; Hess, Henry; Khargonekar, Pramod P.; Tseng, Yiider

    2010-01-01

    The fidelity of the trajectories obtained from video-based particle tracking determines the success of a variety of biophysical techniques, including in situ single cell particle tracking and in vitro motility assays. However, the image acquisition process is complicated by system noise, which causes positioning error in the trajectories derived from image analysis. Here, we explore the possibility of reducing the positioning error by the application of a Kalman filter, a powerful algorithm to estimate the state of a linear dynamic system from noisy measurements. We show that the optimal Kalman filter parameters can be determined in an appropriate experimental setting, and that the Kalman filter can markedly reduce the positioning error while retaining the intrinsic fluctuations of the dynamic process. We believe the Kalman filter can potentially serve as a powerful tool to infer a trajectory of ultra-high fidelity from noisy images, revealing the details of dynamic cellular processes. PMID:20550894
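
    A minimal constant-velocity Kalman filter for a 2-D trajectory, the kind of smoothing described above; the noise covariances are the tunable parameters the authors determine experimentally:

    ```python
    # Constant-velocity Kalman filter for a noisy 2-D particle track.
    import numpy as np

    dt = 1.0
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1.0]])
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0.0]])
    Q = 1e-3 * np.eye(4)          # process noise (intrinsic particle dynamics)
    R = 1.0 * np.eye(2)           # measurement noise (positioning error)

    x = np.zeros(4)               # state: [px, py, vx, vy]
    P = np.eye(4)

    def kalman_step(z):
        """One predict/update cycle for measurement z = [px, py]."""
        global x, P
        x = F @ x                                   # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                         # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x[:2]                                # filtered position estimate

    noisy_track = np.cumsum(np.random.randn(100, 2) * 0.1, axis=0) + np.random.randn(100, 2)
    smoothed = np.array([kalman_step(z) for z in noisy_track])
    ```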

  4. On-line 3-dimensional confocal imaging in vivo.

    PubMed

    Li, J; Jester, J V; Cavanagh, H D; Black, T D; Petroll, W M

    2000-09-01

    In vivo confocal microscopy through focusing (CMTF) can provide a 3-D stack of high-resolution corneal images and allows objective measurements of corneal sublayer thickness and backscattering. However, current systems require time-consuming off-line image processing and analysis on multiple software platforms. Furthermore, there is a trade-off between CMTF speed and measurement precision. The purpose of this study was to develop a novel on-line system for in vivo corneal imaging and analysis that overcomes these limitations. A tandem scanning confocal microscope (TSCM) was used for corneal imaging. The TSCM video camera was interfaced directly to a PC image acquisition board to implement real-time digitization. Software was developed to allow in vivo 2-D imaging, CMTF image acquisition, interactive 3-D reconstruction, and analysis of CMTF data to be performed on line in a single user-friendly environment. A procedure was also incorporated to separate the odd/even video fields, thereby doubling the CMTF sampling rate and theoretically improving the precision of CMTF thickness measurements by a factor of two. In vivo corneal examinations of a normal human and a photorefractive keratectomy patient are presented to demonstrate the capabilities of the new system. Improvements in the convenience, speed, and functionality of in vivo CMTF image acquisition, display, and analysis are demonstrated. This is the first full-featured software package designed for in vivo TSCM imaging of the cornea, which performs both 2-D and 3-D image acquisition, display, and processing as well as CMTF analysis. The use of a PC platform and the incorporation of easy-to-use, on-line, and interactive features should help to improve the clinical utility of this technology.

  5. Efficient subtle motion detection from high-speed video for sound recovery and vibration analysis using singular value decomposition-based approach

    NASA Astrophysics Data System (ADS)

    Zhang, Dashan; Guo, Jie; Jin, Yi; Zhu, Chang'an

    2017-09-01

    High-speed cameras provide full-field measurement of structural motion and have been applied in nondestructive testing and noncontact structure monitoring. Recently, a phase-based method was proposed to extract sound-induced vibrations from phase variations in videos, providing insights into remote sound surveillance and material analysis. Here, an efficient singular value decomposition (SVD)-based approach is introduced to detect sound-induced subtle motions from pixel intensities in silent high-speed videos. A high-speed camera first captures a video of the objects vibrating under sound stimulation. Subimages collected from a small region of the captured video are then reshaped into vectors and assembled into a matrix. Orthonormal image bases (OIBs) are obtained from the SVD of this matrix, and the vibration signal is obtained by projecting subsequent subimages onto specific OIBs. A simulation test validates the effectiveness and efficiency of the proposed method, and two experiments demonstrate potential applications in sound recovery and material analysis. Results show that the proposed method efficiently detects subtle motions from video.
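
    A self-contained sketch of the approach on synthetic data; selecting the motion-carrying basis vector by energy order is an illustrative simplification:

    ```python
    # Sketch of the SVD approach: reshape a patch of each frame into a column
    # vector, stack the columns into a matrix, take its SVD, and recover the
    # vibration signal by projecting onto an orthonormal image basis (OIB).
    import numpy as np

    rng = np.random.default_rng(0)
    T, h, w = 2000, 16, 16
    t = np.arange(T) / 10000.0                   # 10 kHz high-speed camera (example)
    pattern = rng.random((h, w))
    # Synthetic video patch: static texture modulated by a 440 Hz tone.
    frames = pattern[None] + 0.01 * np.sin(2 * np.pi * 440 * t)[:, None, None] * pattern

    A = frames.reshape(T, h * w).T               # columns are vectorized subimages
    A = A - A.mean(axis=1, keepdims=True)        # remove the static background
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    signal = s[0] * Vt[0]                        # projection onto the dominant OIB
    # "signal" now oscillates at 440 Hz, recoverable with an FFT.
    ```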

  6. Colonoscopy video quality assessment using hidden Markov random fields

    NASA Astrophysics Data System (ADS)

    Park, Sun Young; Sargent, Dusty; Spofford, Inbar; Vosburgh, Kirby

    2011-03-01

    With colonoscopy becoming a common procedure for individuals aged 50 or more who are at risk of developing colorectal cancer (CRC), colon video data is being accumulated at an ever increasing rate. However, the clinically valuable information contained in these videos is not being maximally exploited to improve patient care and accelerate the development of new screening methods. One of the well-known difficulties in colonoscopy video analysis is the abundance of frames with no diagnostic information. Approximately 40%-50% of the frames in a colonoscopy video are contaminated by noise, acquisition errors, glare, blur, and uneven illumination. Therefore, filtering out low-quality frames containing no diagnostic information can significantly improve the efficiency of colonoscopy video analysis. To address this challenge, we present a quality assessment algorithm to detect and remove low-quality, uninformative frames. The goal of our algorithm is to discard low-quality frames while retaining all diagnostically relevant information. Our algorithm is based on a hidden Markov model (HMM) in combination with two measures of data quality to filter out uninformative frames. Furthermore, we present a two-level framework based on an embedded hidden Markov model (EHMM) to incorporate the proposed quality assessment algorithm into a complete, automated diagnostic image analysis system for colonoscopy video.
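
    A hedged sketch of the frame-filtering idea with a two-state HMM and Viterbi decoding; the transition and emission models here are illustrative, not the paper's:

    ```python
    # Two hidden states (informative / uninformative), one observed per-frame
    # quality score in [0, 1] (e.g. a sharpness measure), decoded with Viterbi.
    import numpy as np

    states = ["informative", "uninformative"]
    log_trans = np.log(np.array([[0.9, 0.1],    # frames tend to stay in-state
                                 [0.2, 0.8]]))

    def log_emit(score):
        p_inf = np.clip(score, 1e-6, 1 - 1e-6)  # higher score -> more informative
        return np.log([p_inf, 1 - p_inf])

    def viterbi(scores):
        n = len(scores)
        dp = np.full((n, 2), -np.inf)
        back = np.zeros((n, 2), dtype=int)
        dp[0] = np.log([0.5, 0.5]) + log_emit(scores[0])
        for i in range(1, n):
            cand = dp[i - 1][:, None] + log_trans   # 2x2 grid: prev -> cur
            back[i] = cand.argmax(axis=0)
            dp[i] = cand.max(axis=0) + log_emit(scores[i])
        path = [int(dp[-1].argmax())]
        for i in range(n - 1, 0, -1):
            path.append(back[i, path[-1]])
        return [states[s] for s in reversed(path)]

    print(viterbi([0.9, 0.8, 0.2, 0.1, 0.3, 0.85]))
    ```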

  7. Automated Video Based Facial Expression Analysis of Neuropsychiatric Disorders

    PubMed Central

    Wang, Peng; Barrett, Frederick; Martin, Elizabeth; Milanova, Marina; Gur, Raquel E.; Gur, Ruben C.; Kohler, Christian; Verma, Ragini

    2008-01-01

    Deficits in emotional expression are prominent in several neuropsychiatric disorders, including schizophrenia. Available clinical facial expression evaluations provide subjective and qualitative measurements, which are based on static 2D images that do not capture the temporal dynamics and subtleties of expression changes. Therefore, there is a need for automated, objective and quantitative measurements of facial expressions captured using videos. This paper presents a computational framework that creates probabilistic expression profiles for video data and can potentially help to automatically quantify emotional expression differences between patients with neuropsychiatric disorders and healthy controls. Our method automatically detects and tracks facial landmarks in videos, and then extracts geometric features to characterize facial expression changes. To analyze temporal facial expression changes, we employ probabilistic classifiers that analyze facial expressions in individual frames, and then propagate the probabilities throughout the video to capture the temporal characteristics of facial expressions. The applications of our method to healthy controls and case studies of patients with schizophrenia and Asperger’s syndrome demonstrate the capability of the video-based expression analysis method in capturing subtleties of facial expression. Such results can pave the way for a video based method for quantitative analysis of facial expressions in clinical research of disorders that cause affective deficits. PMID:18045693

  8. Packetized Video On MAGNET

    NASA Astrophysics Data System (ADS)

    Lazar, Aurel A.; White, John S.

    1987-07-01

    Theoretical analysis of an integrated local area network model of MAGNET, an integrated network testbed developed at Columbia University, shows that the bandwidth freed up during video and voice calls during periods of little movement in the images and periods of silence in the speech signals could be utilized efficiently for graphics and data transmission. Based on these investigations, an architecture supporting adaptive protocols that are dynamically controlled by the requirements of a fluctuating load and a changing user environment has been advanced. To further analyze the behavior of the network, a real-time packetized video system has been implemented. This system is embedded in the real-time multimedia workstation EDDY, which integrates video, voice, and data traffic flows. Protocols supporting variable-bandwidth, fixed-quality packetized video transport are described in detail.

  9. Packetized video on MAGNET

    NASA Astrophysics Data System (ADS)

    Lazar, Aurel A.; White, John S.

    1986-11-01

    Theoretical analysis of an ILAN model of MAGNET, an integrated network testbed developed at Columbia University, shows that the bandwidth freed up by video and voice calls during periods of little movement in the images and periods of silence in the speech signals could be utilized efficiently for graphics and data transmission. Based on these investigations, an architecture supporting adaptive protocols that are dynamically controlled by the requirements of a fluctuating load and a changing user environment has been advanced. To further analyze the behavior of the network, a real-time packetized video system has been implemented. This system is embedded in the real-time multimedia workstation EDDY, which integrates video, voice, and data traffic flows. Protocols supporting variable-bandwidth, constant-quality packetized video transport are described in detail.

  10. Video image position determination

    DOEpatents

    Christensen, Wynn; Anderson, Forrest L.; Kortegaard, Birchard L.

    1991-01-01

    An optical beam position controller in which a video camera captures an image of the beam in its video frames, and conveys those images to a processing board which calculates the centroid coordinates for the image. The image coordinates are used by motor controllers and stepper motors to position the beam in a predetermined alignment. In one embodiment, system noise, used in conjunction with Bernoulli trials, yields higher resolution centroid coordinates.
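
    As a rough illustration of the centroid step (not of the patented Bernoulli-trials refinement), the intensity-weighted centroid of a digitized frame can be computed as below; the thresholding and function name are our own assumptions.

    ```python
    import numpy as np

    def beam_centroid(frame, threshold=0):
        """Intensity-weighted centroid (x, y) of a grayscale video frame.
        Pixels at or below `threshold` are zeroed to suppress background."""
        img = frame.astype(float)
        img[img <= threshold] = 0.0
        total = img.sum()
        if total == 0:
            raise ValueError("no beam pixels above threshold")
        ys, xs = np.indices(img.shape)
        return (xs * img).sum() / total, (ys * img).sum() / total
    ```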

  11. Correction of projective distortion in long-image-sequence mosaics without prior information

    NASA Astrophysics Data System (ADS)

    Yang, Chenhui; Mao, Hongwei; Abousleman, Glen; Si, Jennie

    2010-04-01

    Image mosaicking is the process of piecing together multiple video frames or still images from a moving camera to form a wide-area or panoramic view of the scene being imaged. Mosaics have widespread applications in many areas such as security surveillance, remote sensing, geographical exploration, agricultural field surveillance, virtual reality, digital video, and medical image analysis, among others. When mosaicking a large number of still images or video frames, the quality of the resulting mosaic is compromised by projective distortion. That is, during the mosaicking process, the image frames that are transformed and pasted to the mosaic become significantly scaled down and appear out of proportion with respect to the mosaic. As more frames continue to be transformed, important target information in the frames can be lost since the transformed frames become too small, which eventually leads to the inability to continue further. Some projective distortion correction techniques make use of prior information such as GPS information embedded within the image, or camera internal and external parameters. Alternatively, this paper proposes a new algorithm to reduce the projective distortion without using any prior information whatsoever. Based on the analysis of the projective distortion, we approximate the projective matrix that describes the transformation between image frames using an affine model. Using singular value decomposition, we can deduce the affine model scaling factor that is usually very close to 1. By resetting the image scale of the affine model to 1, the transformed image size remains unchanged. Even though the proposed correction introduces some error in the image matching, this error is typically acceptable and more importantly, the final mosaic preserves the original image size after transformation. We demonstrate the effectiveness of this new correction algorithm on two real-world unmanned air vehicle (UAV) sequences. The proposed method is shown to be effective and suitable for real-time implementation.
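
    A minimal sketch of the scale-resetting idea as described: approximate the inter-frame transform with an affine model, estimate its global scale from the singular values of the 2x2 linear part, and divide that scale out so transformed frames keep their original size. Taking the geometric mean of the singular values as the scale is our assumption, not necessarily the paper's exact formula.

    ```python
    import numpy as np

    def normalize_affine_scale(A):
        """Reset the global scale of an affine transform [L | t] to 1.
        A: 2x3 (or 3x3) affine matrix; returns the corrected matrix and
        the removed scale factor (usually very close to 1, per the paper)."""
        A_fixed = np.array(A, dtype=float)
        L = A_fixed[:2, :2]
        s = np.linalg.svd(L, compute_uv=False)
        scale = np.sqrt(s[0] * s[1])      # geometric-mean scale estimate
        A_fixed[:2, :2] = L / scale
        return A_fixed, scale
    ```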

  12. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive, and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine both the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how best to relaunch tracking in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  13. A video event trigger for high frame rate, high resolution video technology

    NASA Astrophysics Data System (ADS)

    Williams, Glenn L.

    1991-12-01

    When video replaces film the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term or short term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.
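
    The pretrigger/post-trigger buffering idea is easy to mimic in software, although the paper describes a highly parallel hardware state machine with fuzzy logic devices. The sketch below is only a software analogue: the change metric, threshold, and window lengths are arbitrary illustrative choices.

    ```python
    from collections import deque
    import numpy as np

    def monitor(frames, pre=30, post=30, thresh=12.0):
        """Yield pretrigger + event + post-trigger frames around activity.
        A frame triggers when its mean absolute difference from the
        previous frame exceeds `thresh`; quiet frames are discarded."""
        buf = deque(maxlen=pre)           # rolling pretrigger storage
        prev, hold = None, 0
        for f in frames:
            if prev is not None:
                change = np.mean(np.abs(f.astype(int) - prev.astype(int)))
                if change > thresh:
                    hold = post           # (re)arm the post-trigger countdown
            if hold > 0:
                while buf:                # flush pretrigger history first
                    yield buf.popleft()
                yield f
                hold -= 1
            else:
                buf.append(f)             # quiet frame: kept only briefly
            prev = f
    ```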

  14. A video event trigger for high frame rate, high resolution video technology

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1991-01-01

    When video replaces film the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term or short term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.

  15. Portable Video/Digital Retinal Funduscope

    NASA Technical Reports Server (NTRS)

    Taylor, Gerald R.; Meehan, Richard; Hunter, Norwood; Caputo, Michael; Gibson, C. Robert

    1991-01-01

    Lightweight, inexpensive electronic and photographic instrument developed for detection, monitoring, and objective quantification of ocular/systemic disease or physiological alterations of retina, blood vessels, or other structures in anterior and posterior chambers of eye. Operated with little training. Functions with human or animal subject seated, recumbent, inverted, or in almost any other orientation; and in hospital, laboratory, field, or other environment. Produces video images viewed directly and/or digitized for simultaneous or subsequent analysis. Also equipped to produce photographs and/or fitted with adaptors to produce stereoscopic or magnified images of skin, nose, ear, throat, or mouth to detect lesions or diseases.

  16. Video library for video imaging detection at intersection stop lines.

    DOT National Transportation Integrated Search

    2010-04-01

    The objective of this activity was to record video that could be used for controlled : evaluation of video image vehicle detection system (VIVDS) products and software upgrades to : existing products based on a list of conditions that might be diffic...

  17. Does Instructor's Image Size in Video Lectures Affect Learning Outcomes?

    ERIC Educational Resources Information Center

    Pi, Z.; Hong, J.; Yang, J.

    2017-01-01

    One of the most commonly used forms of video lectures is a combination of an instructor's image and accompanying lecture slides as a picture-in-picture. As the image size of the instructor varies significantly across video lectures, and so do the learning outcomes associated with this technology, the influence of the instructor's image size should…

  18. Platform for intraoperative analysis of video streams

    NASA Astrophysics Data System (ADS)

    Clements, Logan; Galloway, Robert L., Jr.

    2004-05-01

    Interactive, image-guided surgery (IIGS) has proven to increase the specificity of a variety of surgical procedures. However, current IIGS systems do not compensate for changes that occur intraoperatively and are not reflected in preoperative tomograms. Endoscopes and intraoperative ultrasound, used in minimally invasive surgery, provide real-time (RT) information in a surgical setting. Combining the information from RT imaging modalities with traditional IIGS techniques will further increase surgical specificity by providing enhanced anatomical information. In order to merge these techniques and obtain quantitative data from RT imaging modalities, a platform was developed to allow both the display and processing of video streams in RT. Using a Bandit-II CV frame grabber board (Coreco Imaging, St. Laurent, Quebec) and the associated library API, a dynamic link library was created in Microsoft Visual C++ 6.0 such that the platform could be incorporated into the IIGS system developed at Vanderbilt University. Performance characterization, using two relatively inexpensive host computers, has shown the platform capable of performing simple image processing operations on frames captured from a CCD camera and displaying the processed video data at near RT rates both independent of and while running the IIGS system.

  19. Probabilistic image modeling with an extended chain graph for human activity recognition and image segmentation.

    PubMed

    Zhang, Lei; Zeng, Zhi; Ji, Qiang

    2011-09-01

    Chain graph (CG) is a hybrid probabilistic graphical model (PGM) capable of modeling heterogeneous relationships among random variables. So far, however, its application in image and video analysis has been very limited due to the lack of principled learning and inference methods for a CG of general topology. To overcome this limitation, we introduce methods to extend the conventional chain-like CG model to a CG model with more general topology, along with the associated methods for learning and inference in such a general CG model. Specifically, we propose techniques to systematically construct a generally structured CG, to parameterize this model, to derive its joint probability distribution, to perform joint parameter learning, and to perform probabilistic inference in this model. To demonstrate the utility of such an extended CG, we apply it to two challenging image and video analysis problems: human activity recognition and image segmentation. The experimental results show improved performance of the extended CG model over conventional directed or undirected PGMs. This study demonstrates the promise of the extended CG for effective modeling and inference of complex real-world problems.

  20. Development of Dynamic Spatial Video Camera (DSVC) for 4D observation, analysis and modeling of human body locomotion.

    PubMed

    Suzuki, Naoki; Hattori, Asaki; Hayashibe, Mitsuhiro; Suzuki, Shigeyuki; Otake, Yoshito

    2003-01-01

    We have developed an imaging system for the free and quantitative observation of human locomotion in a time-spatial domain by way of real-time imaging. The system is equipped with 60 computer-controlled video cameras to film human locomotion from all angles simultaneously. Images are transferred to the main graphics workstation and organized into a 2D image matrix. The subject can be observed from any chosen direction by selecting the viewpoint from the optimal image sequence in this image matrix. The system also possesses a function to reconstruct 4D models of the subject's moving body by using the 60 images taken from all directions at one particular time. In addition, the system has the capability to visualize inner structures, such as the skeletal or muscular systems of the subject, by compositing computer graphics reconstructed from an MRI data set. We are planning to apply this imaging system to clinical observation in the areas of orthopedics, rehabilitation, and sports science.

  1. Potential usefulness of a video printer for producing secondary images from digitized chest radiographs

    NASA Astrophysics Data System (ADS)

    Nishikawa, Robert M.; MacMahon, Heber; Doi, Kunio; Bosworth, Eric

    1991-05-01

    Communication between radiologists and clinicians could be improved if a secondary image (copy of the original image) accompanied the radiologic report. In addition, the number of lost original radiographs could be decreased, since clinicians would have less need to borrow films. The secondary image should be simple and inexpensive to produce, while providing sufficient image quality for verification of the diagnosis. We are investigating the potential usefulness of a video printer for producing copies of radiographs, i.e. images printed on thermal paper. The video printer we examined (Seikosha model VP-3500) can provide 64 shades of gray. It is capable of recording images up to 1,280 pixels by 1,240 lines and can accept any raster-type video signal. The video printer was characterized in terms of its linearity, contrast, latitude, resolution, and noise properties. The quality of video-printer images was also evaluated in an observer study using portable chest radiographs. We found that observers could confirm up to 90% of the reported findings in the thorax using video-printer images when the original radiographs were of high quality. The number of verified findings was diminished when high spatial resolution was required (e.g. detection of a subtle pneumothorax) or when a low-contrast finding was located in the mediastinal area or below the diaphragm (e.g. nasogastric tubes).

  2. Real-time UAV trajectory generation using feature points matching between video image sequences

    NASA Astrophysics Data System (ADS)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

    Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance, and surveillance missions. In this paper, we present a systematic approach for the generation of UAV trajectory using a video image matching system based on SURF (Speeded Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences, and removed mismatches by using Preemptive RANSAC, which divides all matching points into outliers and inliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
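
    A minimal OpenCV sketch of this pipeline, with two substitutions worth flagging: ORB stands in for SURF (SURF requires the opencv-contrib build), and the RANSAC outlier rejection happens inside findEssentialMat rather than via the paper's Preemptive RANSAC. K denotes the assumed 3x3 camera intrinsic matrix.

    ```python
    import cv2
    import numpy as np

    def relative_pose(img1, img2, K):
        """Relative pose (R, t) between two video frames from feature
        matching, RANSAC-filtered epipolar geometry, and pose recovery;
        t is known only up to scale."""
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(d1, d2)
        p1 = np.float32([k1[m.queryIdx].pt for m in matches])
        p2 = np.float32([k2[m.trainIdx].pt for m in matches])
        E, inliers = cv2.findEssentialMat(p1, p2, K, cv2.RANSAC, 0.999, 1.0)
        _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
        return R, t
    ```

    Chaining the per-pair poses then yields the trajectory, with the translation scale fixed from navigation data or known scene geometry.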

  3. Researching on the process of remote sensing video imagery

    NASA Astrophysics Data System (ADS)

    Wang, He-rao; Zheng, Xin-qi; Sun, Yi-bo; Jia, Zong-ren; Wang, He-zhan

    Unmanned air vehicle remotely-sensed imagery from low altitude has the advantages of high resolution, easy acquisition, real-time access, etc. It has been widely used in mapping, target identification, and other fields in recent years. However, because of limitations in the acquisition conditions, the video images are unstable, the targets move fast, the shooting background is complex, etc., so it is difficult to process the video images in this situation. In other fields, especially in the field of computer vision, research on video images is more extensive, which is very helpful for processing remotely-sensed imagery taken at low altitude. Based on this, this paper analyzes and summarizes a large body of video image processing work in different fields, including research purposes, data sources, and the pros and cons of the technology. Meanwhile, this paper explores the technical methods more suitable for low-altitude video image processing in remote sensing.

  4. Extended image differencing for change detection in UAV video mosaics

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang; Schumann, Arne

    2014-03-01

    Change detection is one of the most important tasks when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. We address changes of short time scale, i.e. the observations are taken in time distances from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames to a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input of the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples for non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking such as geometric distortions and artifacts at moving objects have to be considered, too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. Therefore, we extended the image registration by estimating an elastic transformation using a thin plate spline approach. The results for mosaics are comparable to that of single video frames and are useful for interactive image exploitation due to a larger scene coverage.
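
    The core of the change mask is a linear combination of difference images followed by an adaptive threshold; below is a sketch under stated assumptions (equal weights and an Otsu threshold, neither of which is claimed to match the paper's choices).

    ```python
    import cv2
    import numpy as np

    def change_mask(img1, img2, w_int=0.5, w_grad=0.5):
        """Binary change mask from two registered grayscale images: a
        weighted sum of the intensity difference and the gradient-
        magnitude difference, binarized with Otsu's adaptive threshold."""
        def grad_mag(img):
            gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
            gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
            return cv2.magnitude(gx, gy)
        d_int = cv2.absdiff(img1, img2).astype(np.float32)
        d_grad = cv2.absdiff(grad_mag(img1), grad_mag(img2))
        combined = w_int * d_int + w_grad * d_grad
        combined = cv2.normalize(combined, None, 0, 255,
                                 cv2.NORM_MINMAX).astype(np.uint8)
        _, mask = cv2.threshold(combined, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return mask
    ```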

  5. High bit depth infrared image compression via low bit depth codecs

    NASA Astrophysics Data System (ADS)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles, etc., will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT, and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8-bit H.264/AVC codecs can achieve results similar to a 16-bit HEVC codec.
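
    The byte-plane mapping itself is straightforward; here is a sketch of the split into MSB/LSB images and the inverse (function names are ours).

    ```python
    import numpy as np

    def split_16bit(img16):
        """Map a 16-bit image (np.uint16) to two 8-bit images: the most
        significant bytes (MSB) and least significant bytes (LSB), each
        of which can be fed to an ordinary 8-bit image or video codec."""
        msb = (img16 >> 8).astype(np.uint8)
        lsb = (img16 & 0xFF).astype(np.uint8)
        return msb, lsb

    def merge_16bit(msb, lsb):
        """Reassemble after decoding. Exact only if both codecs are
        lossless; otherwise MSB errors are amplified by a factor of 256,
        which is why the two compression parameters must be chosen
        jointly, as the paper analyzes."""
        return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)
    ```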

  6. Feasibility of dynamic cardiac ultrasound transmission via mobile phone for basic emergency teleconsultation.

    PubMed

    Lim, Tae Ho; Choi, Hyuk Joong; Kang, Bo Seung

    2010-01-01

    We assessed the feasibility of using a camcorder mobile phone for teleconsulting about cardiac echocardiography. The diagnostic performance of evaluating left ventricle (LV) systolic function was measured by three emergency medicine physicians. A total of 138 short echocardiography video sequences (from 70 subjects) was selected from previous emergency room ultrasound examinations. The measurement of LV ejection fraction based on the transmitted video displayed on a mobile phone was compared with the original video displayed on the LCD monitor of the ultrasound machine. The image quality was evaluated using the double stimulation impairment scale (DSIS). All observers showed high sensitivity. There was an improvement in specificity with the observer's increasing experience of cardiac ultrasound. Although the image quality of video on the mobile phone was lower than that of the original, a receiver operating characteristic (ROC) analysis indicated that there was no significant difference in diagnostic performance. Immediate basic teleconsulting of echocardiography movies is possible using current commercially-available mobile phone systems.

  7. Individual differences in the processing of smoking-cessation video messages: An imaging genetics study.

    PubMed

    Shi, Zhenhao; Wang, An-Li; Aronowitz, Catherine A; Romer, Daniel; Langleben, Daniel D

    2017-09-01

    Studies testing the benefits of enriching smoking-cessation video ads with attention-grabbing sensory features have yielded variable results. The dopamine transporter gene (DAT1) has been implicated in attention deficits. We hypothesized that DAT1 polymorphism is partially responsible for this variability. Using functional magnetic resonance imaging, we examined brain responses to videos high or low in attention-grabbing features, indexed by "message sensation value" (MSV), in 53 smokers genotyped for DAT1. Compared to other smokers, 10/10 homozygotes showed greater neural response to High- vs. Low-MSV smoking-cessation videos in two a priori regions of interest: the right temporoparietal junction and the right ventrolateral prefrontal cortex. These regions are known to underlie stimulus-driven attentional processing. Exploratory analysis showed that the right temporoparietal response positively predicted follow-up smoking behavior indexed by urine cotinine. Our findings suggest that responses to attention-grabbing features in smoking-cessation messages are affected by the DAT1 genotype. Copyright © 2017. Published by Elsevier B.V.

  8. Study of atmospheric discharge characteristics using a standard video camera

    NASA Astrophysics Data System (ADS)

    Ferraz, E. C.; Saba, M. M. F.

    This study presents some preliminary statistics on lightning characteristics such as flash multiplicity, the number of ground contact points, the formation of new and altered channels, and the presence of continuing current in the strokes that form the flash. The analysis is based on the images from a standard video camera (30 frames/s). The results obtained for some flashes will be compared to the images of a high-speed CCD camera (1000 frames/s). The camera observing site is located in São José dos Campos (23°S, 46°W) at an altitude of 630 m. This observational site has a nearly 360° field of view at a height of 25 m. It is possible to visualize distant thunderstorms occurring within a radius of 25 km from the site. The room, situated over a metal structure, has water and power supplies, a telephone line, and a small crane on the roof. KEY WORDS: Video images, Lightning, Multiplicity, Stroke.

  9. Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-02-01

    Experimental or operational modal analysis traditionally requires physically-attached wired or wireless sensors for vibration measurement of structures. This instrumentation can result in mass-loading on lightweight structures, and is costly and time-consuming to install and maintain on large civil structures, especially for long-term applications (e.g., structural health monitoring) that require significant maintenance for cabling (wired sensors) or periodic replacement of the energy supply (wireless sensors). Moreover, these sensors are typically placed at a limited number of discrete locations, providing low spatial sensing resolution that is hardly sufficient for modal-based damage localization, or model correlation and updating for larger-scale structures. Non-contact measurement methods such as scanning laser vibrometers provide high-resolution sensing capacity without the mass-loading effect; however, they make sequential measurements that require considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high-spatial-resolution, simultaneous measurements. Combined with vision-based algorithms (e.g., image correlation, optical flow), video camera based measurements have been successfully used for vibration measurements and subsequent modal analysis, based on techniques such as digital image correlation (DIC) and point-tracking. However, they typically require a speckle pattern or high-contrast markers to be placed on the surface of structures, which poses challenges when the measurement area is large or inaccessible. This work explores advanced computer vision and video processing algorithms to develop a novel video measurement and vision-based operational (output-only) modal analysis method that alleviates the need for the structural surface preparation associated with existing vision-based methods and can be implemented in a relatively efficient and autonomous manner with little user supervision and calibration. First, a multi-scale image processing method is applied to the frames of the video of a vibrating structure to extract the local pixel phases that encode local structural vibration, establishing a full-field spatiotemporal motion matrix. Then a high-spatial-dimensional, yet low-modal-dimensional, over-complete model is used to represent the extracted full-field motion matrix using modal superposition, which is physically connected to, and manipulated by, a family of unsupervised learning models and techniques. Thus, the proposed method is able to blindly extract modal frequencies, damping ratios, and full-field (as many points as the pixel number of the video frame) mode shapes from line-of-sight video measurements of the structure. The method is validated by laboratory experiments on a bench-scale building structure and a cantilever beam. Its ability for output-only (video measurements) identification and visualization of the weakly-excited mode is demonstrated, and several issues with its implementation are discussed.
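
    The full method couples multi-scale phase extraction with unsupervised learning; the sketch below shows only the final identification stage under strong simplifying assumptions, using a plain SVD in place of the paper's blind source separation and assuming well-separated, lightly damped modes. Here `motion` stands for a (pixels x time) matrix such as the phase-based motion matrix the abstract describes.

    ```python
    import numpy as np

    def modal_identify(motion, fs, n_modes=3):
        """Crude output-only modal identification: the SVD splits the
        motion matrix into spatial bases (candidate full-field mode
        shapes) and temporal coefficients whose spectral peaks give the
        modal frequencies. fs is the video frame rate in Hz."""
        centered = motion - motion.mean(axis=1, keepdims=True)
        U, S, Vt = np.linalg.svd(centered, full_matrices=False)
        freqs = np.fft.rfftfreq(Vt.shape[1], d=1.0 / fs)
        modes = []
        for i in range(n_modes):
            spectrum = np.abs(np.fft.rfft(Vt[i]))
            modes.append((freqs[spectrum.argmax()], U[:, i]))
        return modes  # list of (frequency in Hz, full-field mode shape)
    ```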

  10. Flight State Information Inference with Application to Helicopter Cockpit Video Data Analysis Using Data Mining Techniques

    NASA Astrophysics Data System (ADS)

    Shin, Sanghyun

    The National Transportation Safety Board (NTSB) has recently emphasized the importance of analyzing flight data as one of the most effective methods to improve the efficiency and safety of helicopter operations. By analyzing flight data with Flight Data Monitoring (FDM) programs, the safety and performance of helicopter operations can be evaluated and improved. In spite of the NTSB's effort, the safety of helicopter operations has not improved at the same rate as the safety of worldwide airlines, and the accident rate of helicopters continues to be much higher than that of fixed-wing aircraft. One of the main reasons is that the participation rates of the rotorcraft industry in the FDM programs are low due to the high costs of the Flight Data Recorder (FDR), the need for a special readout device to decode the FDR, anxiety about punitive action, etc. Since a video camera is easily installed, accessible, and inexpensively maintained, cockpit video data could complement the FDR where one is present, or possibly replace its role where one is absent. Cockpit video data is composed of image and audio data: image data contains outside views through cockpit windows and activities on the flight instrument panels, whereas audio data contains the sounds of the alarms within the cockpit. The goal of this research is to develop, test, and demonstrate a cockpit video data analysis algorithm based on data mining and signal processing techniques that can help better understand situations in the cockpit and the state of a helicopter by efficiently and accurately inferring useful flight information from cockpit video data. Image processing algorithms based on data mining techniques are proposed to estimate a helicopter's attitude, such as the bank and pitch angles, identify indicators from a flight instrument panel, and read the gauges and the numbers in the analogue gauge indicators and digital displays from cockpit image data. In addition, an audio processing algorithm based on signal processing and abrupt change detection techniques is proposed to identify types of warning alarms and to detect the occurrence times of individual alarms from cockpit audio data. The proposed algorithms are then successfully applied to simulated and real helicopter cockpit video data to demonstrate and validate their performance.
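
    The abstract leaves the attitude-estimation details to the thesis body, so the following is only an illustrative stand-in: one common way to approximate the bank angle from cockpit video is to find the horizon line through the windscreen with edge detection and a Hough transform. Nothing here should be read as the author's actual algorithm.

    ```python
    import cv2
    import numpy as np

    def bank_angle_from_horizon(frame):
        """Rough bank-angle estimate (degrees) from the dominant long
        line in the frame, taken to be the horizon; the sign convention
        depends on the image axes. Returns None if no line is found."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                                minLineLength=gray.shape[1] // 3,
                                maxLineGap=20)
        if lines is None:
            return None
        longest = max(lines[:, 0, :],
                      key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
        x1, y1, x2, y2 = longest
        return float(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
    ```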

  11. An adaptive enhancement algorithm for infrared video based on modified k-means clustering

    NASA Astrophysics Data System (ADS)

    Zhang, Linze; Wang, Jingqi; Wu, Wen

    2016-09-01

    In this paper, we propose a video enhancement algorithm to improve the output video of an infrared camera. The video obtained by an infrared camera is sometimes very dark when there is no clear target. In this case, the infrared video is divided into frame images by frame extraction so that image enhancement can be carried out. The first frame image is divided into k sub-images by K-means clustering according to the gray intervals they occupy, and each sub-image is histogram-equalized according to the amount of information it contains; a method is also used to solve the problem that the final cluster centers can end up close to each other in some cases. For the other frame images, the initial cluster centers are determined by the final cluster centers of the previous frame, and the histogram equalization of each sub-image is carried out after image segmentation based on K-means clustering. The histogram equalization stretches the gray values of the image over the whole gray-level range, and the gray-level range of each sub-image is determined by its share of the pixels in the frame image. Experimental results show that this algorithm can improve the contrast of infrared video where a night target is not obvious and the scene is dim, and can adaptively reduce, within a certain range, the negative effect of overexposed pixels.
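
    A compact sketch of the per-frame step, under our own assumptions: one-dimensional k-means on the gray levels with evenly spread initial centers (one simple way to keep the final centers apart), followed by histogram equalization of each cluster within the gray interval it occupies. This illustrates the idea, not the authors' exact procedure.

    ```python
    import numpy as np

    def kmeans_equalize(img, k=3, iters=20, centers=None):
        """Segment gray levels with 1-D k-means, then equalize each
        cluster inside its own gray interval. `centers` lets the
        previous frame's final centers seed the next frame, as the
        abstract describes; returns the enhanced image and the centers."""
        levels = img.ravel().astype(float)
        if centers is None:           # evenly spread, well-separated init
            centers = np.linspace(levels.min(), levels.max(), k)
        for _ in range(iters):
            labels = np.abs(levels[:, None] - centers).argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = levels[labels == j].mean()
        out = np.empty_like(levels)
        for j in range(k):
            sel = labels == j
            if not np.any(sel):
                continue
            lo, hi = levels[sel].min(), levels[sel].max()
            ranks = levels[sel].argsort().argsort().astype(float)
            out[sel] = lo + (hi - lo) * ranks / max(len(ranks) - 1, 1)
        return out.reshape(img.shape).astype(img.dtype), centers
    ```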

  12. High-quality and small-capacity e-learning video featuring lecturer-superimposing PC screen images

    NASA Astrophysics Data System (ADS)

    Nomura, Yoshihiko; Murakami, Michinobu; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    2006-10-01

    Information processing and communication technology are progressing quickly and are prevailing throughout various technological fields. The development of such technology should therefore respond to the need for quality improvement in e-learning education systems. The authors propose a new video-image compression processing system that ingeniously exploits the features of the lecturing scene. While the dynamic lecturing scene is shot by a digital video camera, screen images are electronically stored by PC screen-capture software at relatively long intervals during an actual class. Then, the lecturer and lecture stick are extracted from the digital video images by pattern recognition techniques, and the extracted images are superimposed on the appropriate PC screen images by off-line processing. Thus, we have succeeded in creating high-quality and small-capacity (HQ/SC) video-on-demand educational content featuring high image sharpness, small electronic file capacity, and realistic lecturer motion.

  13. Real-time differentiation of adenomatous and hyperplastic diminutive colorectal polyps during analysis of unaltered videos of standard colonoscopy using a deep learning model.

    PubMed

    Byrne, Michael F; Chapados, Nicolas; Soudan, Florian; Oertel, Clemens; Linares Pérez, Milagros; Kelly, Raymond; Iqbal, Nadeem; Chandelier, Florent; Rex, Douglas K

    2017-10-24

    In general, academic but not community endoscopists have demonstrated adequate endoscopic differentiation accuracy to make the 'resect and discard' paradigm for diminutive colorectal polyps workable. Computer analysis of video could potentially eliminate the obstacle of interobserver variability in endoscopic polyp interpretation and enable widespread acceptance of 'resect and discard'. We developed an artificial intelligence (AI) model for real-time assessment of endoscopic video images of colorectal polyps. A deep convolutional neural network model was used. Only narrow band imaging video frames were used, split equally between relevant multiclasses. Unaltered videos from routine exams not specifically designed or adapted for AI classification were used to train and validate the model. The model was tested on a separate series of 125 videos of consecutively encountered diminutive polyps that were proven to be adenomas or hyperplastic polyps. The AI model works with a confidence mechanism and did not generate sufficient confidence to predict the histology of 19 polyps in the test set, representing 15% of the polyps. For the remaining 106 diminutive polyps, the accuracy of the model was 94% (95% CI 86% to 97%), the sensitivity for identification of adenomas was 98% (95% CI 92% to 100%), specificity was 83% (95% CI 67% to 93%), negative predictive value 97% and positive predictive value 90%. An AI model trained on endoscopic video can differentiate diminutive adenomas from hyperplastic polyps with high accuracy. Additional study of this programme in a live patient clinical trial setting to address resect and discard is planned. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  14. Two-dimensional PCA-based human gait identification

    NASA Astrophysics Data System (ADS)

    Chen, Jinyan; Wu, Rongteng

    2012-11-01

    Automatically recognizing people through visual surveillance is highly desirable for public security reasons. Human gait based identification focuses on recognizing a person automatically from his walking video using computer vision and image processing approaches. As a potential biometric measure, human gait identification has attracted more and more researchers. Current human gait identification methods can be divided into two categories: model-based methods and motion-based methods. In this paper, a Two-Dimensional Principal Component Analysis and temporal-space analysis based human gait identification method is proposed. Using background estimation and image subtraction, we can get a binary image sequence from the surveillance video. By comparing two adjacent images in the gait image sequence, we can get a sequence of binary difference images. Every binary difference image indicates the body's moving mode while a person is walking. We use the following steps to extract the temporal-space features from the difference image sequence: projecting one difference image onto the Y axis or the X axis yields two vectors; projecting every difference image in the sequence onto the Y axis or the X axis yields two matrices. These two matrices characterize the style of one walk. Two-Dimensional Principal Component Analysis (2DPCA) is then used to transform these two matrices into two vectors while keeping the maximum separability. Finally, the similarity of two human gaits is calculated as the Euclidean distance between the two vectors. The performance of our method is illustrated using the CASIA Gait Database.
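
    The projection step is simple enough to sketch; this is our own minimal rendering of the described feature extraction, with the 2DPCA stage left as a comment.

    ```python
    import numpy as np

    def gait_feature_matrices(silhouettes):
        """From a sequence of binary silhouettes, form difference images
        of adjacent frames, then project each difference image onto the
        X and Y axes; stacking the projections gives the two feature
        matrices described in the abstract."""
        diffs = [np.logical_xor(a, b)
                 for a, b in zip(silhouettes, silhouettes[1:])]
        proj_x = np.stack([d.sum(axis=0) for d in diffs])  # one row per frame
        proj_y = np.stack([d.sum(axis=1) for d in diffs])
        return proj_x, proj_y

    # 2DPCA then compresses each matrix to a short vector while
    # preserving separability; two gaits are compared by the Euclidean
    # distance between those vectors.
    ```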

  15. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical directions as well as rotation and zoom effects, producing clearer images of moving objects; it also smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can also be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  16. Using underwater video imaging as an assessment tool for coastal condition

    EPA Science Inventory

    As part of an effort to monitor ecological conditions in nearshore habitats, from 2009-2012 underwater videos were captured at over 400 locations throughout the Laurentian Great Lakes. This study focuses on developing a video rating system and assessing video images. This ratin...

  17. An electronic pan/tilt/zoom camera system

    NASA Technical Reports Server (NTRS)

    Zimmermann, Steve; Martin, H. Lee

    1991-01-01

    A camera system for omnidirectional image viewing applications that provides pan, tilt, zoom, and rotational orientation within a hemispherical field of view (FOV) using no moving parts was developed. The imaging device is based on the principle that the image from a fisheye lens, which produces a circular image of an entire hemispherical FOV, can be mathematically corrected using high-speed electronic circuitry. An incoming fisheye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis. As a result, this device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical FOV without the need for any mechanical mechanisms. A programmable transformation processor provides flexible control over viewing situations. Multiple images, each with different image magnifications and pan, tilt, and rotation parameters, can be obtained from a single camera. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment.
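
    The patent text does not give the transform, but for an ideal equidistant fisheye the correction reduces to ray geometry. Below is a NumPy sketch under that assumption (all parameter names are ours; nearest-neighbour sampling for brevity).

    ```python
    import numpy as np

    def dewarp(fisheye, pan, tilt, fov, out_size, radius, center):
        """Sample a rectilinear view (pan/tilt in radians, square field
        of view `fov`) out of an equidistant fisheye image covering a
        hemisphere of `radius` pixels around `center`."""
        n = out_size
        f = (n / 2) / np.tan(fov / 2)                 # pinhole focal length
        u, v = np.meshgrid(np.arange(n) - n / 2, np.arange(n) - n / 2)
        rays = np.stack([u, v, np.full_like(u, f, dtype=float)], axis=-1)
        rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
        ct, st = np.cos(tilt), np.sin(tilt)
        cp, sp = np.cos(pan), np.sin(pan)
        Rt = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])  # tilt about x
        Rp = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])  # pan about y
        rays = rays @ (Rp @ Rt).T
        theta = np.arccos(np.clip(rays[..., 2], -1, 1))  # angle off the axis
        phi = np.arctan2(rays[..., 1], rays[..., 0])
        r = radius * theta / (np.pi / 2)                 # equidistant model
        xs = np.clip(center[0] + r * np.cos(phi), 0, fisheye.shape[1] - 1)
        ys = np.clip(center[1] + r * np.sin(phi), 0, fisheye.shape[0] - 1)
        return fisheye[ys.astype(int), xs.astype(int)]
    ```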

  18. Combining multi-layered bitmap files using network specific hardware

    DOEpatents

    DuBois, David H [Los Alamos, NM; DuBois, Andrew J [Santa Fe, NM; Davenport, Carolyn Connor [Los Alamos, NM

    2012-02-28

    Images and video can be produced by compositing or alpha blending a group of image layers or video layers. Increasing the resolution or the number of layers results in increased computational demands. As such, the available computational resources limit the images and videos that can be produced. A computational architecture in which the image layers are packetized and streamed through processors can be easily scaled so as to handle many image layers and high resolutions. The image layers are packetized to produce packet streams. The packets in the streams are received, placed in queues, and processed. For alpha blending, ingress queues receive the packetized image layers, which are then z-sorted and sent to egress queues. The egress queue packets are alpha blended to produce an output image or video.
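
    The blend performed at the egress stage is the standard "over" operator applied back to front; a sketch with premultiplication left out for clarity (we assume larger z means farther from the viewer, which the patent does not specify).

    ```python
    import numpy as np

    def composite(layers):
        """`layers`: list of (z, rgba) pairs, rgba as float arrays in
        [0, 1] with shape (h, w, 4). Z-sort back to front, then alpha
        blend each layer over the accumulated result."""
        out = np.zeros_like(layers[0][1])
        for _, rgba in sorted(layers, key=lambda p: p[0], reverse=True):
            a = rgba[..., 3:4]
            out[..., :3] = rgba[..., :3] * a + out[..., :3] * (1 - a)
            out[..., 3:4] = a + out[..., 3:4] * (1 - a)
        return out
    ```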

  19. Trans-Pacific tele-ultrasound image transmission of fetal central nervous system structures.

    PubMed

    Ferreira, Adilson Cunha; Araujo Júnior, Edward; Martins, Wellington P; Jordão, João Francisco; Oliani, Antônio Hélio; Meagher, Simon E; Da Silva Costa, Fabricio

    2015-01-01

    To assess the quality of images and video clips of fetal central nervous system (CNS) structures obtained by ultrasound and transmitted via tele-ultrasound from Brazil to Australia. In this cross-sectional study, 15 women with normal singleton pregnancies between 20 and 26 weeks were selected. Images and video clips of fetal CNS structures were obtained. The exams were transmitted in real time using a broadband internet connection and an inexpensive video streaming device. Four blinded examiners evaluated the quality of the exams using the Likert scale. We calculated the mean, standard deviation, and mean difference; p values were obtained from paired t tests. The quality of the original video clips was slightly better than that observed in the transmitted video clips; the mean difference considering all observers was 0.23 points. In 47/60 comparisons (78.3%; 95% CI = 66.4-86.9%) the quality of the video clips was judged to be the same. In 182/240 still images (75.8%; 95% CI = 70.0-80.8%) the scores of the transmitted image were considered the same as the original. We demonstrated that long-distance tele-ultrasound transmission of fetal CNS structures using an inexpensive video streaming device provided images of subjectively good quality.

  20. Development of a ground signal processor for digital synthetic array radar data

    NASA Technical Reports Server (NTRS)

    Griffin, C. R.; Estes, J. M.

    1981-01-01

    A modified APQ-102 sidelooking array radar (SLAR) in a B-57 aircraft test bed is used, with other optical and infrared sensors, in remote sensing of Earth surface features for various users at NASA Johnson Space Center. The video from the radar is normally recorded on photographic film and subsequently processed photographically into high resolution radar images. Using a high speed sampling (digitizing) system, the two receiver channels of cross-and co-polarized video are recorded on wideband magnetic tape along with radar and platform parameters. These data are subsequently reformatted and processed into digital synthetic aperture radar images with the image data available on magnetic tape for subsequent analysis by investigators. The system design and results obtained are described.

  1. Innovative Video Diagnostic Equipment for Material Science

    NASA Technical Reports Server (NTRS)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

    Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage, and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for the EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced wavelet algorithms, and storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras, acquiring high-resolution video images up to 4 Mpixels @ 60 fps or high-frame-rate video images up to about 1000 fps @ 512x512 pixels.

  2. Classing Schools

    ERIC Educational Resources Information Center

    McCandless, Trevor

    2015-01-01

    School prospectuses and promotional videos appeal to parents by presenting idealised images of the education a school provides. These educational idealisations visually realise the form of discipline a school is expected to provide, depending on the social habitus of the parents. This paper presents a content analysis of the images used in 33 sets…

  3. Seafloor video footage and still-frame grabs from U.S. Geological Survey cruises in Hawaiian nearshore waters

    USGS Publications Warehouse

    Gibbs, Ann E.; Cochran, Susan A.; Tierney, Peter W.

    2013-01-01

    Underwater video footage was collected in nearshore waters (<60-meter depth) off the Hawaiian Islands from 2002 to 2011 as part of the U.S. Geological Survey (USGS) Coastal and Marine Geology Program's Pacific Coral Reef Project, to improve seafloor characterization and for the development and ground-truthing of benthic-habitat maps. This report includes nearly 53 hours of digital underwater video footage collected during four USGS cruises and more than 10,200 still images extracted from the videos, including still frames from every 10 seconds along transect lines, and still frames showing both an overview and a near-bottom view from fixed stations. Environmental Systems Research Institute (ESRI) shapefiles of individual video and still-image locations, and Google Earth kml files with explanatory text and links to the video and still images, are included. This report documents the various camera systems and methods used to collect the videos, and the techniques and software used to convert the analog video tapes into digital data in order to process the images for optimum viewing and to extract the still images, along with a brief summary of each survey cruise.

  4. Method and apparatus for reading meters from a video image

    DOEpatents

    Lewis, Trevor J.; Ferguson, Jeffrey J.

    1997-01-01

    A method and system to enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower.
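
    Once a calibrated indicator region yields a needle angle, converting it to a reading is a linear interpolation between the calibration endpoints. A sketch assuming a linear dial follows; the calibration layout is our own, not the patent's.

    ```python
    def needle_to_reading(angle, cal):
        """Map a detected needle angle (degrees) to a meter value, e.g.
        cal = {'angle_min': -45.0, 'angle_max': 225.0,
               'value_min': 0.0, 'value_max': 100.0}."""
        frac = (angle - cal['angle_min']) / (cal['angle_max'] - cal['angle_min'])
        return cal['value_min'] + frac * (cal['value_max'] - cal['value_min'])
    ```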

  5. Cross modality registration of video and magnetic tracker data for 3D appearance and structure modeling

    NASA Astrophysics Data System (ADS)

    Sargent, Dusty; Chen, Chao-I.; Wang, Yuan-Fang

    2010-02-01

    The paper reports a fully-automated, cross-modality sensor data registration scheme between video and magnetic tracker data. This registration scheme is intended for use in computerized imaging systems to model the appearance, structure, and dimension of human anatomy in three dimensions (3D) from endoscopic videos, particularly colonoscopic videos, for cancer research and clinical practice. The proposed cross-modality calibration procedure operates this way: before a colonoscopic procedure, the surgeon inserts a magnetic tracker into the working channel of the endoscope or otherwise fixes the tracker's position on the scope. The surgeon then maneuvers the scope-tracker assembly to view a checkerboard calibration pattern from a few different viewpoints for a few seconds. The calibration procedure is then completed, and the relative pose (translation and rotation) between the reference frames of the magnetic tracker and the scope is determined. During the colonoscopic procedure, the readings from the magnetic tracker are used to automatically deduce the pose (both position and orientation) of the scope's reference frame over time, without complicated image analysis. Knowing the scope movement over time then allows us to infer the 3D appearance and structure of the organs and tissues in the scene. While there are other well-established mechanisms for inferring the movement of the camera (scope) from images, they are often sensitive to mistakes in image analysis, error accumulation, and structure deformation. The proposed method of using a magnetic tracker to establish the camera motion parameters thus provides a robust and efficient alternative for 3D model construction. Furthermore, the calibration procedure requires neither special training nor expensive calibration equipment (except for a camera calibration pattern, a checkerboard that can be printed on any laser or inkjet printer).

  6. Cross-Modal Multivariate Pattern Analysis

    PubMed Central

    Meyer, Kaspar; Kaplan, Jonas T.

    2011-01-01

    Multivariate pattern analysis (MVPA) is an increasingly popular method of analyzing functional magnetic resonance imaging (fMRI) data [1-4]. Typically, the method is used to identify a subject's perceptual experience from neural activity in certain regions of the brain. For instance, it has been employed to predict the orientation of visual gratings a subject perceives from activity in early visual cortices [5] or, analogously, the content of speech from activity in early auditory cortices [6]. Here, we present an extension of the classical MVPA paradigm, according to which perceptual stimuli are not predicted within, but across sensory systems. Specifically, the method we describe addresses the question of whether stimuli that evoke memory associations in modalities other than the one through which they are presented induce content-specific activity patterns in the sensory cortices of those other modalities. For instance, seeing a muted video clip of a glass vase shattering on the ground automatically triggers in most observers an auditory image of the associated sound; is the experience of this image in the "mind's ear" correlated with a specific neural activity pattern in early auditory cortices? Furthermore, is this activity pattern distinct from the pattern that could be observed if the subject were, instead, watching a video clip of a howling dog? In two previous studies [7,8], we were able to predict sound- and touch-implying video clips based on neural activity in early auditory and somatosensory cortices, respectively. Our results are in line with a neuroarchitectural framework proposed by Damasio [9,10], according to which the experience of mental images that are based on memories - such as hearing the shattering sound of a vase in the "mind's ear" upon seeing the corresponding video clip - is supported by the re-construction of content-specific neural activity patterns in early sensory cortices. PMID:22105246

  7. Vixen resistin': redefining black womanhood in hip-hop music videos.

    PubMed

    Balaji, Murali

    2010-01-01

    In recent years, scholarship on Black womanhood has become more closely connected to postmodern discourses on identity and resistance, following in the footsteps of Audre Lorde's claim that identity and sexuality have emancipatory potential. However, in the post-hip-hop era, feminists and media critics have once again brought up the idea of who controls the image. The purpose of this study is to describe possible sites of self-definition by Black women in music videos while accounting for the cultural industries that reproduce and exploit Black women's sexuality. Using textual analysis and the perspective of a noted music video performer, Melyssa Ford, this study articulates the expanse of ambiguity that lies between the images of Black womanhood that bombard consumers of BET and MTV and the self-conceptualization of the women who play those roles.

  8. Imaging systems and algorithms to analyze biological samples in real-time using mobile phone microscopy.

    PubMed

    Shanmugam, Akshaya; Usmani, Mohammad; Mayberry, Addison; Perkins, David L; Holcomb, Daniel E

    2018-01-01

    Miniaturized imaging devices have pushed the boundaries of point-of-care imaging, but existing mobile-phone-based imaging systems do not exploit the full potential of smart phones. This work demonstrates the use of simple imaging configurations to deliver superior image quality and the ability to handle a wide range of biological samples. Results presented in this work are from analysis of fluorescent beads under fluorescence imaging, as well as helminth eggs and freshwater mussel larvae under white light imaging. To demonstrate versatility of the systems, real time analysis and post-processing results of the sample count and sample size are presented in both still images and videos of flowing samples.

  9. Computerised anthropomorphometric analysis of images: case report.

    PubMed

    Ventura, F; Zacheo, A; Ventura, A; Pala, A

    2004-12-02

    The personal identification of living subjects through video-filmed images can occasionally be necessary, particularly in the following circumstances: (1) the need to identify unknown subjects by comparing two-dimensional images of someone of known identity with the subject; (2) the need to identify subjects photographed or recorded on video camera by comparison with individuals of known identity. The final aim of our research was to analyse a video clip of a bank robbery and to determine whether one of the subjects was identifiable with one of the suspects. Following the correct methodology for personal identification, the original videotape of the robbery carried out in the bank was first analysed so as to study the characteristics of the criminal action and to pinpoint the best scenes for an anthropomorphometric analysis. The scene of the crime was then reconstructed by bringing the suspect back to the bank where the robbery took place; the suspect was filmed with the same closed-circuit video cameras and made to assume positions as close as possible to those of the bank robber to be identified. Taking frame no. 17, points of comparable similarity were identified on the face and right ear of the perpetrator of the crime, and the same points of similarity were identified on the face of the suspect: right and left eyebrows, right and left eyes, "glabella", nose, mouth, chin, fold between nose and upper lip, right ear, helix, tragus, "fossetta", "conca" and lobule. After careful comparative morphometric computer analysis, it was concluded that none of the 17 points of similarity showed the same anthropomorphology (points of negative similarity). It is reasonable to maintain that 17 points of negative similarity (or non-coincidental points) are sufficient to exclude the identity of the person compared with the other.

  10. An automated form of video image analysis applied to classification of movement disorders.

    PubMed

    Chang, R; Guan, L; Burne, J A

    Video image analysis is able to provide quantitative data on postural and movement abnormalities and thus has an important application in neurological diagnosis and management. The conventional techniques require patients to be videotaped while wearing markers in a highly structured laboratory environment. This restricts the utility of video in routine clinical practice. We have begun development of intelligent software which aims to provide a more flexible system able to quantify human posture and movement directly from whole-body images without markers and in an unstructured environment. The steps involved are to extract complete human profiles from video frames, to fit skeletal frameworks to the profiles and derive joint angles and swing distances. By this means a given posture is reduced to a set of basic parameters that can provide input to a neural network classifier. To test the system's performance we videotaped patients with dopa-responsive Parkinsonism and age-matched normals during several gait cycles, to yield 61 patient and 49 normal postures. These postures were reduced to their basic parameters and fed to the neural network classifier in various combinations. The optimal parameter sets (consisting of both swing distances and joint angles) yielded successful classification of normals and patients with an accuracy above 90%. This result demonstrated the feasibility of the approach. The technique has the potential to guide clinicians on the relative sensitivity of specific postural/gait features in diagnosis. Future studies will aim to improve the robustness of the system in providing accurate parameter estimates from subjects wearing a range of clothing, and to further improve discrimination by incorporating more stages of the gait cycle into the analysis.
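
    The final classification stage described here, basic posture parameters feeding a neural network, can be sketched with scikit-learn; the feature values and labels below are synthetic placeholders, not the study's data.

        # Sketch: joint angles and swing distances per posture feed a small
        # neural network classifier (patient vs. normal).
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        n_postures, n_params = 110, 8                # ~61 patient + 49 normal postures
        X = rng.normal(size=(n_postures, n_params))  # joint angles + swing distances
        y = (X[:, :3].mean(axis=1) > 0).astype(int)  # placeholder labels

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        net.fit(X_tr, y_tr)
        print("held-out accuracy:", net.score(X_te, y_te))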

  11. Secure Video Surveillance System Acquisition Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-12-04

    The SVSS Acquisition Software collects and displays video images from two cameras through a VPN and stores the images on a collection controller. The software is configured to allow a user to enter a time window to display up to 2 1/2 hours of video for review. The software collects images from the cameras at a rate of one image per second and automatically deletes images older than 3 hours. The software operates in a Linux environment and can be run in a virtual machine on Windows XP. The Sandia software integrates different COTS software packages to build the video review system.
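
    A hedged sketch of the stated acquisition policy, one frame per second with a three-hour retention sweep, follows; the camera URL and storage path are hypothetical, and the actual SVSS code is not reproduced.

        import os, time
        import cv2

        STORE = "/var/svss/images"       # hypothetical collection-controller path
        MAX_AGE_S = 3 * 3600             # delete images older than 3 hours

        cam = cv2.VideoCapture("rtsp://camera1.example/stream")  # via the VPN
        while True:
            ok, frame = cam.read()
            if ok:
                cv2.imwrite(os.path.join(STORE, f"cam1_{int(time.time())}.jpg"), frame)
            now = time.time()
            for name in os.listdir(STORE):   # retention sweep
                path = os.path.join(STORE, name)
                if now - os.path.getmtime(path) > MAX_AGE_S:
                    os.remove(path)
            time.sleep(1.0)                  # one image per second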

  12. MIXING QUANTIFICATION BY VISUAL IMAGING ANALYSIS

    EPA Science Inventory

    This paper reports on development of a method for quantifying two measures of mixing, the scale and intensity of segregation, through flow visualization, video recording, and software analysis. This non-intrusive method analyzes a planar cross section of a flowing system from an ...

  13. VLSI-based video event triggering for image data compression

    NASA Astrophysics Data System (ADS)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
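
    The pre-/post-trigger archiving idea can be sketched with a ring buffer: history frames are held in a fixed-length deque and flushed to the archive when a trigger fires. The mean-absolute-difference trigger below is a simple stand-in for the fuzzy-logic event detector, and the frame source is synthetic.

        from collections import deque
        import numpy as np

        PRE, POST, THRESH = 30, 60, 12.0
        ring = deque(maxlen=PRE)           # pre-trigger history

        def frame_source():
            for i in range(300):
                f = np.full((64, 64), 100, np.uint8)
                if 150 <= i < 160:
                    f[:32] = 200           # injected transient "event"
                yield f

        archive, post_left, prev = [], 0, None
        for frame in frame_source():
            if prev is not None and post_left == 0:
                change = np.abs(frame.astype(int) - prev.astype(int)).mean()
                if change > THRESH:        # trigger fires
                    archive.extend(ring)   # keep the pre-trigger history
                    post_left = POST
            if post_left > 0:
                archive.append(frame)      # post-trigger frames
                post_left -= 1
            else:
                ring.append(frame)
            prev = frame

        print("archived frames:", len(archive))   # PRE + POST around the event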

  14. VLSI-based Video Event Triggering for Image Data Compression

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1994-01-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  15. TND: a thyroid nodule detection system for analysis of ultrasound images and videos.

    PubMed

    Keramidas, Eystratios G; Maroulis, Dimitris; Iakovidis, Dimitris K

    2012-06-01

    In this paper, we present a computer-aided-diagnosis (CAD) system prototype, named TND (Thyroid Nodule Detector), for the detection of nodular tissue in ultrasound (US) thyroid images and videos acquired during thyroid US examinations. The proposed system incorporates an original methodology that involves a novel algorithm for automatic definition of the boundaries of the thyroid gland, and a novel approach for the extraction of noise-resilient image features effectively representing the textural and echogenic properties of the thyroid tissue. Through extensive experimental evaluation on real thyroid US data, its accuracy in thyroid nodule detection has been estimated to exceed 95%. These results attest to the feasibility of the clinical application of TND for providing a second, more objective opinion to radiologists by exploiting image evidence.

  16. Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis

    PubMed Central

    Procházka, Aleš; Schätz, Martin; Vyšata, Oldřich; Vališ, Martin

    2016-01-01

    This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human–machine interaction. PMID:27367687

  17. Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis.

    PubMed

    Procházka, Aleš; Schätz, Martin; Vyšata, Oldřich; Vališ, Martin

    2016-06-28

    This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human-machine interaction.

  18. Omniview motionless camera orientation system

    NASA Technical Reports Server (NTRS)

    Zimmermann, Steven D. (Inventor); Martin, H. Lee (Inventor)

    1999-01-01

    A device for omnidirectional image viewing providing pan-and-tilt orientation, rotation, and magnification within a hemispherical field-of-view that utilizes no moving parts. The imaging device is based on the fact that the image from a fisheye lens, which produces a circular image of an entire hemispherical field-of-view, can be mathematically corrected using high-speed electronic circuitry. More specifically, an incoming fisheye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis. As a result, this device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical field-of-view without the need for any mechanical mechanisms. The preferred embodiment of the image transformation device can provide corrected images at real-time rates, compatible with standard video equipment. The device can be used for any application where a conventional pan-and-tilt or orientation mechanism might be considered, including inspection, monitoring, surveillance, and target acquisition.
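
    A minimal sketch of the underlying correction, assuming an equidistant fisheye model (r = f * theta): for each pixel of the desired perspective view, compute the viewing ray, rotate it by the requested pan/tilt, and sample the fisheye image with cv2.remap. The focal lengths and file names are assumptions.

        import numpy as np
        import cv2

        fish = cv2.imread("fisheye.png")          # hypothetical input
        H, W = fish.shape[:2]
        cx, cy = W / 2.0, H / 2.0
        f_fish = min(cx, cy) / (np.pi / 2)        # 180-degree circle fills the frame

        out_w = out_h = 512
        f_out = 400.0                             # perspective focal length (pixels)
        pan, tilt = np.radians(30.0), np.radians(10.0)

        # Viewing rays of the output camera (z forward), rotated by pan/tilt
        u, v = np.meshgrid(np.arange(out_w) - out_w / 2, np.arange(out_h) - out_h / 2)
        rays = np.stack([u, v, np.full_like(u, f_out, dtype=float)], axis=-1)
        Ry = cv2.Rodrigues(np.array([0.0, pan, 0.0]))[0]
        Rx = cv2.Rodrigues(np.array([tilt, 0.0, 0.0]))[0]
        rays = rays @ (Rx @ Ry).T

        # Equidistant projection back into the fisheye image
        norm = np.linalg.norm(rays, axis=-1)
        theta = np.arccos(np.clip(rays[..., 2] / norm, -1, 1))  # angle off axis
        phi = np.arctan2(rays[..., 1], rays[..., 0])
        r = f_fish * theta
        map_x = (cx + r * np.cos(phi)).astype(np.float32)
        map_y = (cy + r * np.sin(phi)).astype(np.float32)

        view = cv2.remap(fish, map_x, map_y, cv2.INTER_LINEAR)
        cv2.imwrite("dewarped.png", view)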

  19. Improving stop line detection using video imaging detectors.

    DOT National Transportation Integrated Search

    2010-11-01

    The Texas Department of Transportation and other state departments of transportation as well as cities nationwide are using video detection successfully at signalized intersections. However, operational issues with video imaging vehicle detection...

  20. Video face recognition against a watch list

    NASA Astrophysics Data System (ADS)

    Abbas, Jehanzeb; Dagli, Charlie K.; Huang, Thomas S.

    2007-10-01

    Due to the recent large increase in video surveillance data, collected in an effort to maintain high security at public places, we need more robust systems to analyze this data and make tasks like face recognition a realistic possibility in challenging environments. In this paper we explore a watch-list scenario where we use an appearance-based model to classify query faces from low-resolution videos as either watch-list or non-watch-list faces. We then use our simple yet powerful face recognition system to recognize the faces classified as watch-list faces, where the watch-list comprises the people we are interested in recognizing. Our system uses simple feature machine algorithms from our previous work to match video faces against still images. To test our approach, we match video faces against a large database of still images collected in previous work in the field from Yahoo News over a period of time. We do this matching in an efficient manner to arrive at a faster, nearly real-time system. This system can be incorporated into a larger surveillance system equipped with advanced algorithms involving anomalous event detection and activity recognition. This is a step towards more secure and robust surveillance systems and efficient video data analysis.

  1. Noncontact measurement of heart rate using facial video illuminated under natural light and signal weighted analysis.

    PubMed

    Yan, Yonggang; Ma, Xiang; Yao, Lifeng; Ouyang, Jianfei

    2015-01-01

    Non-contact and remote measurement of vital physical signals is important for reliable and comfortable physiological self-assessment. We present a novel optical imaging-based method to measure such vital signals. Using a digital camera and ambient light, cardiovascular pulse waves were correctly extracted from color facial videos. Vital physiological parameters such as heart rate were then measured using the proposed signal-weighted analysis method. The measured heart rates were consistent with those measured simultaneously with reference technologies (r=0.94, p<0.001 for HR). The results show that the imaging-based method is suitable for measuring physiological parameters and provides a reliable and comfortable measurement mode. The study lays a physical foundation for noninvasively measuring multiple physiological parameters in humans.
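
    A minimal sketch of camera-based pulse extraction under ambient light: average the green channel over a facial region in each frame, then take the dominant spectral peak in the physiological band as the heart rate. The fixed region of interest and the file name are assumptions; the paper's face tracking and signal weighting are not reproduced.

        import numpy as np
        import cv2

        cap = cv2.VideoCapture("face_video.mp4")   # hypothetical recording
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
        trace = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            roi = frame[100:300, 200:400]          # assumed facial ROI
            trace.append(roi[:, :, 1].mean())      # mean green value (BGR order)

        sig = np.asarray(trace) - np.mean(trace)
        spec = np.abs(np.fft.rfft(sig))
        freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
        band = (freqs > 0.7) & (freqs < 3.0)       # ~42-180 bpm
        hr_bpm = 60.0 * freqs[band][np.argmax(spec[band])]
        print(f"estimated heart rate: {hr_bpm:.1f} bpm")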

  2. Eustachian Tube Mucosal Inflammation Scale Validation Based on Digital Video Images.

    PubMed

    Kivekäs, Ilkka; Pöyhönen, Leena; Aarnisalo, Antti; Rautiainen, Markus; Poe, Dennis

    2015-12-01

    The most common cause of Eustachian tube dilatory dysfunction is mucosal inflammation. The aim of this study was to validate a scale for Eustachian tube mucosal inflammation based on digital video clips obtained during diagnostic rigid endoscopy. A previously described four-step scale for grading the degree of inflammation of the mucosa of the Eustachian tube lumen was used for this validation study. A tutorial for use of the scale, including static images and 10-second video clips, was presented to 26 clinicians with various levels of experience. Each clinician then reviewed 35 short digital video samples of Eustachian tubes from patients and rated the degree of inflammation. A subset of the clinicians performed a second rating of the same video clips at a subsequent time. Statistical analysis of the ratings provided inter- and intrarater reliability scores. Twenty-six clinicians with various levels of experience rated a total of 35 videos; thirteen clinicians rated the videos twice. The overall correlation coefficient for the rating of inflammation severity was relatively good (0.74; 95% confidence interval 0.72-0.76). The intraclass correlation coefficient for intrarater reliability was high (0.86). For those who rated videos twice, the intraclass correlation coefficient improved after the first rating (0.73 to 0.76), but the improvement was not statistically significant. The scale used for Eustachian tube mucosal inflammation is reliable and can be used with a high level of consistency by clinicians with various levels of experience.

  3. [Video-based self-control in surgical teaching. A new tool in a new concept].

    PubMed

    Dahmen, U; Sänger, C; Wurst, C; Arlt, J; Wei, W; Dondorf, F; Richter, B; Settmacher, U; Dirsch, O

    2013-10-01

    Image- and video-based result and process control are essential tools of a new teaching concept for conveying surgical skills. The new teaching concept integrates approved teaching principles and new media. Every exercise performance is videotaped and the result photographically recorded. The quality of the process and the result thus becomes accessible for analysis by the teacher and the learner. The learner is instructed to perform a criteria-based self-analysis of the video and image material. The new learning concept has so far been successfully applied in seven rounds of the newly designed modular class "Intensivkurs Chirurgische Techniken" (intensive training in surgical techniques). Result documentation and analysis via digital pictures was completed by almost every student. The quality of the results was high. Interestingly, the result quality did not correlate with the time needed for the exercise. The training success had a lasting effect. The new, elaborate concept improves the quality of teaching. In the long run, resources for patient care should be saved when students are trained according to this concept prior to performing tasks in the operating theater. These resources should be allocated to further refining innovative teaching concepts.

  4. Transmission of digital images within the NTSC analog format

    DOEpatents

    Nickel, George H.

    2004-06-15

    HDTV and NTSC compatible image communication is done in a single NTSC channel bandwidth. Luminance and chrominance image data of a scene to be transmitted is obtained. The image data is quantized and digitally encoded to form digital image data in HDTV transmission format having low-resolution terms and high-resolution terms. The low-resolution digital image data terms are transformed to a voltage signal corresponding to NTSC color subcarrier modulation with retrace blanking and color bursts to form a NTSC video signal. The NTSC video signal and the high-resolution digital image data terms are then transmitted in a composite NTSC video transmission. In a NTSC receiver, the NTSC video signal is processed directly to display the scene. In a HDTV receiver, the NTSC video signal is processed to invert the color subcarrier modulation to recover the low-resolution terms, where the recovered low-resolution terms are combined with the high-resolution terms to reconstruct the scene in a high definition format.
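
    The low-/high-resolution decomposition the patent relies on can be sketched with an image pyramid: the downsampled image stands in for the low-resolution terms carried on the NTSC-compatible signal, and the residual stands in for the digitally transmitted high-resolution terms. Modulation itself is not modeled; the file name is an assumption, and even frame dimensions are assumed.

        import cv2
        import numpy as np

        scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE).astype(np.int16)

        low = cv2.pyrDown(scene.astype(np.uint8))            # low-resolution terms
        low_up = cv2.pyrUp(low, dstsize=(scene.shape[1], scene.shape[0])).astype(np.int16)
        high = scene - low_up                                # high-resolution residual

        # HDTV receiver side: recover low-res terms, add the residual back
        recon = np.clip(low_up + high, 0, 255).astype(np.uint8)
        assert np.array_equal(recon, scene.astype(np.uint8))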

  5. IntraFace

    PubMed Central

    De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey

    2016-01-01

    Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly-available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/. PMID:27346987
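
    The synchrony-detection idea can be illustrated with a windowed correlation of two expression-intensity time series. This is a demonstrative stand-in, not IntraFace's actual algorithm, and the signals below are synthetic.

        import numpy as np

        rng = np.random.default_rng(3)
        t = np.arange(600)                             # frames
        parent = np.sin(t / 25.0) + 0.3 * rng.normal(size=t.size)
        infant = np.roll(parent, 12) + 0.3 * rng.normal(size=t.size)  # lagged copy

        WIN, THRESH = 90, 0.6
        sync = []
        for s in range(0, t.size - WIN, WIN // 2):     # half-overlapping windows
            a, b = parent[s:s + WIN], infant[s:s + WIN]
            r = np.corrcoef(a, b)[0, 1]                # Pearson correlation
            if r > THRESH:
                sync.append((s, s + WIN, round(float(r), 2)))
        print("synchronous windows (start, end, r):", sync)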

  6. IntraFace.

    PubMed

    De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey

    2015-05-01

    Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly-available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/.

  7. Capturing and displaying microscopic images used in medical diagnostics and forensic science using 4K video resolution - an application in higher education.

    PubMed

    Maier, Hans; de Heer, Gert; Ortac, Ajda; Kuijten, Jan

    2015-11-01

    To analyze, interpret and evaluate microscopic images, used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as High-Definition Video (1920 × 1080 pixels). The unprecedented high resolution makes it possible to see details that remain invisible to any other video format. The images of the specimens (blood cells, tissue sections, hair, fibre, etc.) are recorded using a 4K video camera which is attached to a light microscope. After processing, this resulted in very sharp and highly detailed images. This material was then used in education for classroom discussion. Spoken explanation by experts in the field of medical diagnostics and forensic science was also added to the high-resolution video images to make it suitable for self-study. © 2015 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.

  8. [Development of a video image system for wireless capsule endoscopes based on DSP].

    PubMed

    Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua

    2008-02-01

    A video image recorder to record video pictures from wireless capsule endoscopes was designed. The TMS320C6211 DSP of Texas Instruments Inc. is the core processor of this system. Images are periodically acquired from a Composite Video Broadcast Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data is transported from a high-speed First-In First-Out (FIFO) buffer to the Digital Signal Processor (DSP) under the control of a Complex Programmable Logic Device (CPLD). This paper adopts the JPEG algorithm for image coding, and the compressed data in the DSP is stored to a Compact Flash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast Discrete Cosine Transform (DCT) algorithm and a fast coefficient quantization algorithm are used to accelerate the DSP and reduce code size. At the same time, a proper address range is assigned to each memory according to its speed, and the memory structure is optimized. In addition, the system makes extensive use of Extended Direct Memory Access (EDMA) to transport and process image data, which results in stable, high performance.
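
    The JPEG transform step performed per 8x8 block, a 2-D DCT followed by coefficient quantization, can be sketched as below with the standard baseline luminance table; the fast fixed-point kernels used on the DSP are not reproduced.

        import numpy as np
        from scipy.fftpack import dct, idct

        Q_LUM = np.array([                      # JPEG baseline luminance table
            [16, 11, 10, 16, 24, 40, 51, 61],
            [12, 12, 14, 19, 26, 58, 60, 55],
            [14, 13, 16, 24, 40, 57, 69, 56],
            [14, 17, 22, 29, 51, 87, 80, 62],
            [18, 22, 37, 56, 68, 109, 103, 77],
            [24, 35, 55, 64, 81, 104, 113, 92],
            [49, 64, 78, 87, 103, 121, 120, 101],
            [72, 92, 95, 98, 112, 100, 103, 99]])

        def dct2(block):
            return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

        def idct2(coef):
            return idct(idct(coef, axis=0, norm='ortho'), axis=1, norm='ortho')

        block = np.arange(64, dtype=float).reshape(8, 8)  # stand-in image block
        coef = dct2(block - 128.0)                        # level shift + DCT
        q = np.round(coef / Q_LUM)                        # quantization
        recon = idct2(q * Q_LUM) + 128.0                  # decoder side
        print("max reconstruction error:", np.abs(recon - block).max())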

  9. Performance analysis of medical video streaming over mobile WiMAX.

    PubMed

    Alinejad, Ali; Philip, N; Istepanian, R H

    2010-01-01

    Wireless medical ultrasound streaming is considered one of the emerging applications within the broadband mobile healthcare domain. These applications are bandwidth-demanding services that require high data rates with acceptable diagnostic quality of the transmitted medical images. In this paper, we present a performance analysis of medical ultrasound video streaming acquired via a special robotic ultrasonography system over an emulated WiMAX wireless network. The experimental set-up of this application is described, together with the performance of the relevant medical quality of service (m-QoS) metrics.

  10. Application of TrackEye in equine locomotion research.

    PubMed

    Drevemo, S; Roepstorff, L; Kallings, P; Johnston, C J

    1993-01-01

    TrackEye is an analysis system applicable to equine biokinematic studies. It covers the whole process, from the digitization of images through automatic target tracking to analysis. Key components of the system are an image workstation for processing video images and a high-resolution film-to-video scanner for 16-mm film. A recording module controls the input device and handles the capture of image sequences into a videodisc system, and a tracking module is able to follow reference markers automatically. The system offers flexible analysis, including calculations of marker displacements, distances and joint angles, velocities and accelerations. TrackEye was used to study the effects of phenylbutazone on fetlock and carpal joint angle movements in a horse with mild lameness caused by osteoarthritis in the fetlock joint of a forelimb. Significant differences, most evident before treatment, were observed in the minimum fetlock and carpal joint angles when contralateral limbs were compared (p < 0.001). The minimum fetlock angle and the minimum carpal joint angle were significantly greater in the lame limb before treatment compared to those 6, 37 and 49 h after the last treatment (p < 0.001).
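
    The joint-angle computation such marker-based systems perform follows directly from the dot product of the two limb-segment vectors; the sketch below uses hypothetical marker coordinates, not TrackEye's API.

        import numpy as np

        def joint_angle(prox, joint, dist):
            """Angle (degrees) at `joint` between the two limb segments."""
            a = np.asarray(prox) - np.asarray(joint)
            b = np.asarray(dist) - np.asarray(joint)
            cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

        # One frame of (hypothetical) digitized marker coordinates in pixels
        print(joint_angle((120, 340), (150, 420), (200, 470)))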

  11. Photographic Analysis Technique for Assessing External Tank Foam Loss Events

    NASA Technical Reports Server (NTRS)

    Rieckhoff, T. J.; Covan, M.; OFarrell, J. M.

    2001-01-01

    A video camera and recorder were placed inside the solid rocket booster forward skirt in order to view foam loss events over an area on the external tank (ET) intertank surface. In this Technical Memorandum, a method of processing video images to allow rapid detection of permanent changes indicative of foam loss events on the ET surface was defined and applied to accurately count, categorize, and locate such events.
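
    One way to separate permanent changes (foam loss) from transient ones (passing debris, lighting) is to require a pixel to differ from the pre-event reference in every frame of a trailing window; the sketch below illustrates this with synthetic frames and is not the memorandum's actual algorithm.

        import numpy as np

        def persistent_change(reference, frames, thresh=30, window=10):
            """Mask of pixels differing from `reference` in the last `window` frames."""
            mask = np.ones(reference.shape, bool)
            for f in frames[-window:]:
                mask &= np.abs(f.astype(int) - reference.astype(int)) > thresh
            return mask

        ref = np.full((120, 160), 90, np.uint8)
        frames = [ref.copy() for _ in range(30)]
        for f in frames[12:]:
            f[40:60, 50:80] = 180          # divot appears at frame 12 and persists

        loss = persistent_change(ref, frames)
        print("event pixels:", int(loss.sum()), "bounding box:",
              np.argwhere(loss).min(axis=0), np.argwhere(loss).max(axis=0))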

  12. Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video

    NASA Astrophysics Data System (ADS)

    Li, Honggui

    2017-09-01

    This paper proposes a unified one-dimensional (1-D) coding framework for image and video, which depends on a deep neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain compact inputs for the deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve a 1-D representation of image blocks. Under the circumstances of 1-D representation, DLA is capable of attaining zero reconstruction error, which is impossible for the classical nonlinear dimensionality-reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating different categories of videos into the inputs of the patch clustering algorithm. Finally, simulation experiments show that the proposed methods can simultaneously attain a higher compression ratio and peak signal-to-noise ratio than state-of-the-art methods in low-bitrate transmission.
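
    A minimal sketch of the two core steps: cluster patches with K-means, then learn a linear low-dimensional code per cluster. Since a linear autoencoder trained with mean-squared error spans the same subspace as PCA, PCA stands in for the DLA here; the patch data are synthetic.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(4)
        patches = rng.normal(size=(2000, 64))        # 8x8 patches, flattened

        km = KMeans(n_clusters=16, n_init=10, random_state=0).fit(patches)

        codes_dim, err = 16, 0.0
        for c in range(km.n_clusters):
            group = patches[km.labels_ == c]
            k = min(codes_dim, len(group) - 1)       # guard small clusters
            pca = PCA(n_components=k).fit(group)     # linear-autoencoder equivalent
            recon = pca.inverse_transform(pca.transform(group))
            err += np.square(recon - group).sum()
        print("total reconstruction MSE:", err / patches.size)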

  13. Enhanced video indirect ophthalmoscopy (VIO) via robust mosaicing.

    PubMed

    Estrada, Rolando; Tomasi, Carlo; Cabrera, Michelle T; Wallace, David K; Freedman, Sharon F; Farsiu, Sina

    2011-10-01

    Indirect ophthalmoscopy (IO) is the standard of care for evaluation of the neonatal retina. When recorded on video from a head-mounted camera, IO images have low quality and narrow Field of View (FOV). We present an image fusion methodology for converting a video IO recording into a single, high quality, wide-FOV mosaic that seamlessly blends the best frames in the video. To this end, we have developed fast and robust algorithms for automatic evaluation of video quality, artifact detection and removal, vessel mapping, registration, and multi-frame image fusion. Our experiments show the effectiveness of the proposed methods.
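
    The registration step at the heart of mosaicing can be sketched with ORB features and a RANSAC homography in OpenCV; the file names are assumptions, and the paper's quality weighting, artifact removal, and seamless blending are not reproduced.

        import cv2
        import numpy as np

        f1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
        f2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(f1, None)
        k2, d2 = orb.detectAndCompute(f2, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        matches = sorted(matches, key=lambda m: m.distance)[:200]

        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        Hmat, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

        warped = cv2.warpPerspective(f1, Hmat, (f2.shape[1], f2.shape[0]))
        mosaic = np.maximum(warped, f2)   # crude blend, for illustration only
        cv2.imwrite("mosaic.png", mosaic)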

  14. Real-time high-level video understanding using data warehouse

    NASA Astrophysics Data System (ADS)

    Lienard, Bruno; Desurmont, Xavier; Barrie, Bertrand; Delaigle, Jean-Francois

    2006-02-01

    High-level video content analysis such as video surveillance is often limited by the computational cost of automatic image understanding: reasoning processes such as categorization require huge computing resources, and huge amounts of data are needed to represent knowledge of objects, scenarios and other models. This article explains how to design and develop a "near real-time adaptive image datamart", used first as a decision-support system for vision algorithms and then as a mass storage system. Using the RDF specification as the storage format for vision-algorithm metadata, we can optimize the data warehouse concept for video analysis, add processes able to adapt the current model, and pre-process data to speed up queries. In this way, when new data is sent from a sensor to the data warehouse for long-term storage, using remote procedure calls embedded in object-oriented interfaces to simplify queries, it is processed and the in-memory data model is updated. After some processing, possible interpretations of the data can be returned to the sensor. To demonstrate this new approach, we present typical scenarios applied to this architecture, such as people tracking and event detection in a multi-camera network. Finally we show how this system becomes a high-semantic data container for external data mining.
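
    The datamart idea can be illustrated with RDF triples for a detection event and a SPARQL query to retrieve it; the namespace and predicate names below are hypothetical, not the paper's actual schema.

        from rdflib import Graph, Literal, Namespace, URIRef

        EX = Namespace("http://example.org/video#")   # hypothetical vocabulary
        g = Graph()

        event = URIRef(EX["event42"])
        g.add((event, EX.type, Literal("person_tracked")))
        g.add((event, EX.camera, Literal("cam3")))
        g.add((event, EX.frame, Literal(1507)))

        rows = g.query(
            "SELECT ?e ?f WHERE { ?e <http://example.org/video#type> 'person_tracked' ; "
            "<http://example.org/video#frame> ?f }")
        for e, f in rows:
            print(e, f)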

  15. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense application by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  16. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense application by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  17. Qualitative and quantitative assessment of video transmitted by DVTS (digital video transport system) in surgical telemedicine.

    PubMed

    Shima, Yoichiro; Suwa, Akina; Gomi, Yuichiro; Nogawa, Hiroki; Nagata, Hiroshi; Tanaka, Hiroshi

    2007-01-01

    Real-time video pictures can be transmitted inexpensively via a broadband connection using the DVTS (digital video transport system). However, the degradation of video pictures transmitted by DVTS has not been sufficiently evaluated. We examined the application of DVTS to remote consultation by using images of laparoscopic and endoscopic surgeries. A subjective assessment by the double stimulus continuous quality scale (DSCQS) method of the transmitted video pictures was carried out by eight doctors. Three of the four video recordings were assessed as being transmitted with no degradation in quality. None of the doctors noticed any degradation in the images due to encryption by the VPN (virtual private network) system. We also used an automatic picture quality assessment system to make an objective assessment of the same images. The objective DSCQS values were similar to the subjective ones. We conclude that although the quality of video pictures transmitted by the DVTS was slightly reduced, they were useful for clinical purposes. Encryption with a VPN did not degrade image quality.

  18. An effective and robust method for tracking multiple fish in video image based on fish head detection.

    PubMed

    Qian, Zhi-Ming; Wang, Shuo Hong; Cheng, Xi En; Chen, Yan Qiu

    2016-06-23

    Fish tracking is an important step in video-based analysis of fish behavior. Due to severe body deformation and mutual occlusion of multiple swimming fish, accurate and robust fish tracking from video image sequences is a highly challenging problem. Current tracking methods based on motion information are not accurate and robust enough to track the deforming body and handle occlusion. To better overcome these problems, we propose a multiple-fish tracking method based on fish head detection. The shape and gray-scale characteristics of the fish image are employed to locate the fish head position. For each detected fish head, we utilize the gray distribution of the head region to estimate the fish head direction. The position and direction information from fish detection are then combined to build a cost function of fish swimming. Based on the cost function, a global optimization method can be applied to associate targets between consecutive frames. Results show that our method can accurately detect the position and direction of fish heads, and has good tracking performance for dozens of fish. The proposed method successfully obtains motion trajectories for dozens of fish, providing more precise data to support systematic analysis of fish behavior.
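
    The frame-to-frame association step can be sketched as a global assignment over a cost matrix built from head-position distance and heading difference, solved with the Hungarian algorithm; the weights and detections below are illustrative, not the paper's exact cost function.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        # (x, y, heading_rad) of detected fish heads in frames t and t+1
        prev = np.array([[50, 60, 0.1], [200, 80, 1.5], [120, 300, -2.0]])
        curr = np.array([[205, 85, 1.4], [55, 63, 0.2], [118, 310, -2.1]])

        W_DIR = 20.0                                  # weight of the heading term
        pos_cost = np.linalg.norm(prev[:, None, :2] - curr[None, :, :2], axis=-1)
        ang = np.abs(prev[:, None, 2] - curr[None, :, 2])
        dir_cost = np.minimum(ang, 2 * np.pi - ang)   # wrap-around angle difference
        cost = pos_cost + W_DIR * dir_cost

        rows, cols = linear_sum_assignment(cost)      # Hungarian algorithm
        for r, c in zip(rows, cols):
            print(f"fish {r} in frame t -> detection {c} in frame t+1")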

  19. Method and apparatus for reading meters from a video image

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, T.J.; Ferguson, J.J.

    1995-12-31

    A method and system enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower.

  20. Method and apparatus for reading meters from a video image

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, T.J.; Ferguson, J.J.

    1997-09-30

    A method and system to enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower. 1 fig.

  1. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Agueh, Max; Diouris, Jean-François; Diop, Magaye; Devaux, François-Olivier; De Vleeschouwer, Christophe; Macq, Benoit

    2008-12-01

    Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANETs. The proposed packet-based scheme has low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.
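
    The core of such a rate-allocation scheme is choosing the smallest redundancy that meets a residual-loss target. The sketch below does this for an erasure code that recovers up to n - k lost packets per block of n, under an i.i.d. loss assumption, with illustrative parameters rather than the paper's fitted model.

        from math import comb

        def residual_loss(n, k, p):
            """P(more than n - k of n packets are lost) under i.i.d. loss rate p."""
            return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                       for i in range(n - k + 1, n + 1))

        def pick_rate(n, p, target=1e-3):
            for k in range(n, 0, -1):            # prefer the least redundancy
                if residual_loss(n, k, p) <= target:
                    return k
            return 1

        n, p = 32, 0.05                           # block size, measured loss rate
        k = pick_rate(n, p)
        print(f"RS({n},{k}): code rate {k/n:.2f}, residual loss "
              f"{residual_loss(n, k, p):.2e}")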

  2. Probabilistic Methods for Image Generation and Encoding.

    DTIC Science & Technology

    1993-10-15

    video and graphics lab at Georgia Tech, linking together Silicon Graphics workstations, a laser video recorder, a Betacam video recorder, scanner...computer laboratory at Georgia Tech, based on two Silicon Graphics Personal Iris workstations, a SONY laser video recorder, a SONY Betacam SP video...laser disk in component RGB form, with variable speed playback. From the laser recorder the images can be dubbed to the Betacam or the VHS recorder in

  3. A fuzzy measure approach to motion frame analysis for scene detection. M.S. Thesis - Houston Univ.

    NASA Technical Reports Server (NTRS)

    Leigh, Albert B.; Pal, Sankar K.

    1992-01-01

    This paper addresses a solution to the problem of scene estimation of motion video data in the fuzzy set theoretic framework. Using fuzzy image feature extractors, a new algorithm is developed to compute the change of information in each of two successive frames to classify scenes. This classification process of raw input visual data can be used to establish structure for correlation. The algorithm attempts to fulfill the need for nonlinear, frame-accurate access to video data for applications such as video editing and visual document archival/retrieval systems in multimedia environments.
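
    The change-of-information idea can be sketched as a histogram difference between consecutive frames mapped through a fuzzy membership ramp rather than a hard threshold; the thesis's actual feature extractors are not reproduced, and the ramp breakpoints are assumptions.

        import numpy as np

        def hist_change(f1, f2, bins=32):
            """Normalized histogram difference: 0 = identical, 1 = disjoint."""
            h1 = np.histogram(f1, bins=bins, range=(0, 255))[0].astype(float)
            h2 = np.histogram(f2, bins=bins, range=(0, 255))[0].astype(float)
            h1 /= h1.sum()
            h2 /= h2.sum()
            return 0.5 * np.abs(h1 - h2).sum()

        def cut_membership(d, lo=0.2, hi=0.6):
            """Fuzzy degree that the change d constitutes a scene cut."""
            return float(np.clip((d - lo) / (hi - lo), 0.0, 1.0))

        a = np.full((90, 120), 80, np.uint8)   # two stand-in frames
        b = np.full((90, 120), 190, np.uint8)
        d = hist_change(a, b)
        print(f"change={d:.2f}, cut membership={cut_membership(d):.2f}")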

  4. Video-to-film color-image recorder.

    NASA Technical Reports Server (NTRS)

    Montuori, J. S.; Carnes, W. R.; Shim, I. H.

    1973-01-01

    A precision video-to-film recorder for use in image data processing systems, being developed for NASA, will convert three video input signals (red, blue, green) into a single full-color light beam for image recording on color film. Argon ion and krypton lasers are used to produce three spectral lines which are independently modulated by the appropriate video signals, combined into a single full-color light beam, and swept over the recording film in a raster format for image recording. A rotating multi-faceted spinner mounted on a translating carriage generates the raster, and an annotation head is used to record up to 512 alphanumeric characters in a designated area outside the image area.

  5. In vivo cross-sectional imaging of the phonating larynx using long-range Doppler optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Coughlan, Carolyn A.; Chou, Li-Dek; Jing, Joseph C.; Chen, Jason J.; Rangarajan, Swathi; Chang, Theodore H.; Sharma, Giriraj K.; Cho, Kyoungrai; Lee, Donghoon; Goddard, Julie A.; Chen, Zhongping; Wong, Brian J. F.

    2016-03-01

    Diagnosis and treatment of vocal fold lesions has been a long-evolving science for the otolaryngologist. Contemporary practice requires biopsy of a glottal lesion in the operating room under general anesthesia for diagnosis. Current in-office technology is limited to visualizing the surface of the vocal folds with fiber-optic or rigid endoscopy and using stroboscopic or high-speed video to infer information about submucosal processes. Previous efforts using optical coherence tomography (OCT) have been limited by small working distances and imaging ranges. Here we report the first full field, high-speed, and long-range OCT images of awake patients’ vocal folds as well as cross-sectional video and Doppler analysis of their vocal fold motions during phonation. These vertical-cavity surface-emitting laser source (VCSEL) OCT images offer depth resolved, high-resolution, high-speed, and panoramic images of both the true and false vocal folds. This technology has the potential to revolutionize in-office imaging of the larynx.

  6. Image and Video Compression with VLSI Neural Networks

    NASA Technical Reports Server (NTRS)

    Fang, W.; Sheu, B.

    1993-01-01

    An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.
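
    The vector-quantization half of the comparison can be sketched with a k-means codebook over 4x4 image blocks (a stand-in for the self-organization network): the encoder stores one codebook index per block. The image below is synthetic.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(5)
        img = rng.integers(0, 255, (128, 128)).astype(np.float64)

        B = 4
        blocks = (img.reshape(128 // B, B, 128 // B, B)
                     .swapaxes(1, 2).reshape(-1, B * B))

        vq = KMeans(n_clusters=64, n_init=4, random_state=0).fit(blocks)
        indices = vq.predict(blocks)              # 6 bits per block
        recon_blocks = vq.cluster_centers_[indices]
        recon = (recon_blocks.reshape(128 // B, 128 // B, B, B)
                             .swapaxes(1, 2).reshape(128, 128))

        bits_raw = img.size * 8
        bits_vq = indices.size * 6 + vq.cluster_centers_.size * 8  # + codebook
        print("compression ratio: %.1f:1" % (bits_raw / bits_vq))
        print("MSE:", np.mean((recon - img) ** 2))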

  7. Application of robust face recognition in video surveillance systems

    NASA Astrophysics Data System (ADS)

    Zhang, De-xin; An, Peng; Zhang, Hao-xiang

    2018-03-01

    In this paper, we propose a video searching system that utilizes face recognition as its search-indexing feature. As applications of video cameras have increased greatly in recent years, face recognition is a natural fit for searching for targeted individuals within vast amounts of video data. However, the performance of such searching depends on the quality of the face images recorded in the video signals. Since surveillance video cameras record subjects without fixed postures, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA) and reconstructs the human faces with the available information. Experimental results show that the system is highly efficient in processing real-life videos and is very robust to various kinds of face occlusion. Hence it can relieve human reviewers from constant monitoring and greatly enhances efficiency as well. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping to solve real cases.
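
    The reconstruction idea can be illustrated with plain PCA standing in for the paper's fuzzy PCA: fit a subspace on clean faces, then project an occluded face onto it so the hidden pixels are filled in from the learned structure. The faces below are synthetic, and the naive zero-fill projection is a simplification of the fuzzy weighting.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(6)
        basis = rng.normal(size=(10, 1024))            # latent face factors
        train = rng.normal(size=(300, 10)) @ basis     # 300 clean 32x32 faces

        pca = PCA(n_components=10).fit(train)

        face = rng.normal(size=(1, 10)) @ basis        # a new face
        occluded = face.copy()
        occluded[0, :256] = 0.0                        # occlude the top quarter

        # Naive projection of the zero-filled face; FPCA would instead
        # down-weight the unreliable (occluded) pixels.
        code = pca.transform(occluded)
        restored = pca.inverse_transform(code)
        print("mean error on occluded pixels:",
              float(np.abs(restored[0, :256] - face[0, :256]).mean()))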

  8. Feasibility of video codec algorithms for software-only playback

    NASA Astrophysics Data System (ADS)

    Rodriguez, Arturo A.; Morse, Ken

    1994-05-01

    Software-only video codecs can provide good playback performance in desktop computers with a 486 or 68040 CPU running at 33 MHz without special hardware assistance. Typically, playback of compressed video can be categorized into three tasks: the actual decoding of the video stream, color conversion, and the transfer of decoded video data from system RAM to video RAM. By current standards, good playback performance is the decoding and display of video streams of 320 by 240 (or larger) compressed frames at 15 (or greater) frames-per-second. Software-only video codecs have evolved by modifying and tailoring existing compression methodologies to suit video playback in desktop computers. In this paper we examine the characteristics used to evaluate software-only video codec algorithms, namely: image fidelity (i.e., image quality), bandwidth (i.e., compression), ease of decoding (i.e., playback performance), memory consumption, compression-to-decompression asymmetry, scalability, and delay. We discuss the tradeoffs among these variables and the compromises that can be made to achieve low numerical complexity for software-only playback. Frame-differencing approaches are described, since software-only video codecs typically employ them to enhance playback performance. To complement other papers that appear in this session of the Proceedings, we review methods derived from binary pattern image coding, since these methods are amenable to software-only playback. In particular, we introduce a novel approach called pixel distribution image coding.
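
    Frame differencing in the conditional-replenishment sense can be sketched as transmitting only the 16x16 blocks whose content changed beyond a threshold, so still regions cost nothing to decode; the block size and threshold below are assumptions.

        import numpy as np

        B, THRESH = 16, 5.0

        def encode_frame(prev, curr):
            """Return [(row, col, block)] for blocks that changed."""
            updates = []
            for r in range(0, curr.shape[0], B):
                for c in range(0, curr.shape[1], B):
                    blk = curr[r:r + B, c:c + B]
                    ref = prev[r:r + B, c:c + B]
                    if np.abs(blk.astype(int) - ref.astype(int)).mean() > THRESH:
                        updates.append((r, c, blk.copy()))
            return updates

        def decode_frame(canvas, updates):
            for r, c, blk in updates:
                canvas[r:r + B, c:c + B] = blk
            return canvas

        prev = np.full((240, 320), 128, np.uint8)
        curr = prev.copy()
        curr[64:96, 160:208] = 30            # a moving object enters
        ups = encode_frame(prev, curr)
        print(f"{len(ups)} of {(240 // B) * (320 // B)} blocks transmitted")
        recon = decode_frame(prev.copy(), ups)
        assert np.array_equal(recon, curr)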

  9. Automated management for pavement inspection system (AMPIS)

    NASA Astrophysics Data System (ADS)

    Chung, Hung Chi; Girardello, Roberto; Soeller, Tony; Shinozuka, Masanobu

    2003-08-01

    An automated in-situ road surface distress surveying and management system, AMPIS, has been developed on the basis of video images within the framework of GIS software. Video image processing techniques are introduced to acquire, process and analyze the road surface images obtained from a moving vehicle. ArcGIS platform is used to integrate the routines of image processing and spatial analysis in handling the full-scale metropolitan highway surface distress detection and data fusion/management. This makes it possible to present user-friendly interfaces in GIS and to provide efficient visualizations of surveyed results not only for the use of transportation engineers to manage road surveying documentations, data acquisition, analysis and management, but also for financial officials to plan maintenance and repair programs and further evaluate the socio-economic impacts of highway degradation and deterioration. A review performed in this study on fundamental principle of Pavement Management System (PMS) and its implementation indicates that the proposed approach of using GIS concept and its tools for PMS application will reshape PMS into a new information technology-based system providing a convenient and efficient pavement inspection and management.

  10. GIS-based automated management of highway surface crack inspection system

    NASA Astrophysics Data System (ADS)

    Chung, Hung-Chi; Shinozuka, Masanobu; Soeller, Tony; Girardello, Roberto

    2004-07-01

    An automated in-situ road surface distress surveying and management system, AMPIS, has been developed on the basis of video images within the framework of GIS software. Video image processing techniques are introduced to acquire, process and analyze the road surface images obtained from a moving vehicle. ArcGIS platform is used to integrate the routines of image processing and spatial analysis in handling the full-scale metropolitan highway surface distress detection and data fusion/management. This makes it possible to present user-friendly interfaces in GIS and to provide efficient visualizations of surveyed results not only for the use of transportation engineers to manage road surveying documentations, data acquisition, analysis and management, but also for financial officials to plan maintenance and repair programs and further evaluate the socio-economic impacts of highway degradation and deterioration. A review performed in this study on fundamental principle of Pavement Management System (PMS) and its implementation indicates that the proposed approach of using GIS concept and its tools for PMS application will reshape PMS into a new information technology-based system that can provide convenient and efficient pavement inspection and management.

  11. Holodeck: Telepresence Dome Visualization System Simulations

    NASA Technical Reports Server (NTRS)

    Hite, Nicolas

    2012-01-01

    This paper explores the simulation and consideration of different image-projection strategies for the Holodeck, a dome that will be used for highly immersive telepresence operations in future endeavors of the National Aeronautics and Space Administration (NASA). Its visualization system will include a full 360-degree projection onto the dome's interior walls in order to display video streams from both simulations and recorded video. Because humans innately trust their vision to precisely report their surroundings, the Holodeck's visualization system is crucial to its realism. This system will be rigged with an integrated hardware and software infrastructure: a system of projectors that will interface with a Graphics Processing Unit (GPU) and computer to both project images onto the dome and correct warping in those projections in real time. Using both Computer-Aided Design (CAD) and ray-tracing software, virtual models of various dome/projector geometries were created and simulated via tracking and analysis of virtual light sources, leading to the selection of two possible configurations for installation. Research into image warping and the generation of dome-ready video content was also conducted, including the generation of fisheye images, distortion correction, and the development of a reliable content-generation pipeline.

  12. Benefit from NASA

    NASA Image and Video Library

    1999-06-01

    Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical directions as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. It would be especially useful for tornadoes, tracking whirling objects and helping to determine the tornado's wind speed. This image shows two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.

  13. Video as Character: The Use of Video Technology in Theatrical Productions.

    ERIC Educational Resources Information Center

    Trimble, Frank P.

    The use of video images, tempered with good judgment and some restraint, can serve a stage play as opposed to stealing its thunder. An experienced director of university theater productions decided to try to incorporate video images into his production of "Joseph and the Amazing Technicolor Dreamcoat." The production drew from the works…

  14. Do Stereotypic Images in Video Games Affect Attitudes and Behavior? Adolescents’ Perspectives

    PubMed Central

    Henning, Alexandra; Brenick, Alaina; Killen, Melanie; O’Connor, Alexander; Collins, Michael J.

    2015-01-01

    This study examined adolescents’ attitudes about video games along with their self-reported play frequency. Ninth and eleventh grade students (N = 361), approximately evenly divided by grade and gender, were surveyed about whether video games have stereotypic images, involve harmful consequences or affect one’s attitudes, whether game playing should be regulated by parents or the government, and whether game playing is a personal choice. Adolescents who played video games frequently showed decreased concern about the effects that games with negatively stereotyped images may have on the players’ attitudes compared to adolescents who played games infrequently or not at all. With age, adolescents were more likely to view images as negative, but were also less likely to recognize stereotypic images of females as harmful and more likely to judge video-game playing as a personal choice. The paper discusses other findings in relation to research on adolescents’ social cognitive judgments. PMID:25729336

  15. Do Stereotypic Images in Video Games Affect Attitudes and Behavior? Adolescents' Perspectives.

    PubMed

    Henning, Alexandra; Brenick, Alaina; Killen, Melanie; O'Connor, Alexander; Collins, Michael J

    This study examined adolescents' attitudes about video games along with their self-reported play frequency. Ninth and eleventh grade students (N = 361), approximately evenly divided by grade and gender, were surveyed about whether video games have stereotypic images, involve harmful consequences or affect one's attitudes, whether game playing should be regulated by parents or the government, and whether game playing is a personal choice. Adolescents who played video games frequently showed decreased concern about the effects that games with negatively stereotyped images may have on the players' attitudes compared to adolescents who played games infrequently or not at all. With age, adolescents were more likely to view images as negative, but were also less likely to recognize stereotypic images of females as harmful and more likely to judge video-game playing as a personal choice. The paper discusses other findings in relation to research on adolescents' social cognitive judgments.

  16. Evaluation of a video image detection system : final report.

    DOT National Transportation Integrated Search

    1994-05-01

    A video image detection system (VIDS) is an advanced wide-area traffic monitoring system that processes input from a video camera. The Autoscope VIDS coupled with an information management system was selected as the monitoring device because test...

  17. A novel no-reference objective stereoscopic video quality assessment method based on visual saliency analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xinyan; Zhao, Wei; Ye, Long; Zhang, Qin

    2017-07-01

    This paper proposes a no-reference objective stereoscopic video quality assessment method, motivated by bringing the results of objective experiments close to those of subjective assessment. We believe that image regions with different degrees of visual salience should not carry the same weights when designing an assessment metric. Therefore, we first apply the GBVS algorithm to each frame pair and separate both the left and right viewing images into regions of strong, general, and weak saliency. In addition, local feature information such as blockiness, zero-crossing, and depth is extracted and combined in a mathematical model to calculate a quality assessment score. Regions with different degrees of salience are assigned different weights in the model. Experimental results demonstrate the superiority of our method compared with existing state-of-the-art no-reference objective stereoscopic video quality assessment methods.
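
    A minimal sketch of the salience-tiered pooling idea follows; the quantile cut-offs, tier weights, and the generic per-pixel distortion map are illustrative assumptions, not the authors' exact formulation (GBVS itself is a separate saliency model).

      import numpy as np

      def saliency_weighted_score(distortion, saliency, weights=(0.6, 0.3, 0.1)):
          """Pool a per-pixel distortion map using three salience tiers.

          distortion : 2-D array of local quality/distortion estimates
          saliency   : 2-D salience map (e.g., from GBVS); same shape
          weights    : assumed weights for strong/general/weak regions
          """
          hi, lo = np.quantile(saliency, [0.9, 0.5])     # tier boundaries (assumed)
          w = np.where(saliency >= hi, weights[0],
                       np.where(saliency >= lo, weights[1], weights[2]))
          return float((w * distortion).sum() / w.sum())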

  18. Automatic textual annotation of video news based on semantic visual object extraction

    NASA Astrophysics Data System (ADS)

    Boujemaa, Nozha; Fleuret, Francois; Gouet, Valerie; Sahbi, Hichem

    2003-12-01

    In this paper, we present our work on automatic generation of textual metadata based on visual content analysis of video news. We present two methods for semantic object detection and recognition from a cross-modal image-text thesaurus. These thesauri represent a supervised association between models and semantic labels. This paper is concerned with two semantic objects: faces and TV logos. In the first part, we present our work on efficient face detection and recognition with automatic name generation. This method also allows us to suggest textual annotation of shots through close-up estimation. In the second part, we automatically detect and recognize the different TV logos present on incoming news from different TV channels. This work was done jointly with the French TV channel TF1 within the "MediaWorks" project, a hybrid text-image indexing and retrieval platform for video news.

  19. Using digital images to measure and discriminate small particles in cotton

    NASA Astrophysics Data System (ADS)

    Taylor, Robert A.; Godbey, Luther C.

    1991-02-01

    Images from conventional video systems are being digitized in computers for the analysis of small trash particles in cotton. The method has been developed to automate particle counting and area measurements for bales of cotton prepared for market. Because the video output is linearly proportional to the amount of light reflected, the best spectral band for optimum particle discrimination should be centered at the wavelength of maximum difference between particles and their surroundings. However, due to the spectral distribution of the illumination energy and the detector sensitivity, peak image performance bands were altered. Reflectance from seven mechanically cleaned cotton lint samples and the trash removed from them was examined for spectral contrast in the wavelength range of camera sensitivity. Pixel intensity histograms from the video system are reported for simulated trashmeter area reference samples (painted dots on panels) and for cotton containing trash, to demonstrate the particle discrimination mechanism.

  20. Automatic segmentation of the optic nerve head for deformation measurements in video rate optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Hidalgo-Aguirre, Maribel; Gitelman, Julian; Lesk, Mark Richard; Costantino, Santiago

    2015-11-01

    Optical coherence tomography (OCT) imaging has become a standard diagnostic tool in ophthalmology, providing essential information associated with various eye diseases. In order to investigate the dynamics of the ocular fundus, we present a simple and accurate automated algorithm to segment the inner limiting membrane in video-rate optic nerve head spectral domain (SD) OCT images. The method is based on morphological operations including a two-step contrast enhancement technique, proving to be very robust when dealing with low signal-to-noise ratio images and pathological eyes. An analysis algorithm was also developed to measure neuroretinal tissue deformation from the segmented retinal profiles. The performance of the algorithm is demonstrated, and deformation results are presented for healthy and glaucomatous eyes.

  1. 4DCAPTURE: a general purpose software package for capturing and analyzing two- and three-dimensional motion data acquired from video sequences

    NASA Astrophysics Data System (ADS)

    Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake

    2003-07-01

    4DVideo is creating a general purpose capability for capturing and analyzing kinematic data from video sequences in near real-time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") is designed to capture and manipulate customized AVI files that can contain a variety of synchronized data streams -- including audio, video, centroid locations -- and signals acquired from more traditional sources (such as accelerometers and strain gauges). The code includes simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancillary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two and three dimensions -- with or without lens calibrations. Trajectory data can be processed within the main application or they can be exported to a spreadsheet where they can be processed or passed along to a more sophisticated, stand-alone, data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope, or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end-user to capture and treat image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology, which effectively eliminates the file size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats. The latter is particularly useful for high-speed applications where the raw images are often captured and stored by the camera before being downloaded. Provision has been made to synchronize data acquired from any combination of these video sources using audio and visual "tags." Additional "front-ends," designed for digital cameras, are anticipated.
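
    As an illustration of the photogrammetric core, the sketch below triangulates one landmark seen in two calibrated views using linear (DLT) triangulation; the interface and matrix conventions are assumptions for illustration, not 4DCapture's actual API.

      import numpy as np

      def triangulate_dlt(P1, P2, x1, x2):
          """Linear (DLT) triangulation of one landmark seen in two views.

          P1, P2 : 3x4 camera projection matrices (assumed known/calibrated)
          x1, x2 : (u, v) pixel coordinates of the landmark in each view
          """
          A = np.vstack([
              x1[0] * P1[2] - P1[0],
              x1[1] * P1[2] - P1[1],
              x2[0] * P2[2] - P2[0],
              x2[1] * P2[2] - P2[1],
          ])
          _, _, Vt = np.linalg.svd(A)
          X = Vt[-1]
          return X[:3] / X[3]  # homogeneous -> Euclidean 3-D point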

  2. Video networking of cardiac catheterization laboratories.

    PubMed

    Tobis, J; Aharonian, V; Mansukhani, P; Kasaoka, S; Jhandyala, R; Son, R; Browning, R; Youngblood, L; Thompson, M

    1999-02-01

    The purpose of this study was to assess the feasibility and accuracy of a video telecommunication network to transmit coronary images to provide on-line interaction between personnel in a cardiac catheterization laboratory and a remote core laboratory. A telecommunication system was installed in the cardiac catheterization laboratory at Kaiser Hospital, Los Angeles, and the core laboratory at the University of California, Irvine, approximately 40 miles away. Cineangiograms, live fluoroscopy, intravascular ultrasound studies and images of the catheterization laboratory were transmitted in real time over a dedicated T1 line at 768 kilobits/second at 15 frames/second. These cases were performed during a clinical study of angiographic guidance versus intravascular ultrasound (IVUS) guidance of stent deployment. During the cases the core laboratory performed quantitative analysis of the angiograms and ultrasound images. Selected images were then annotated and transmitted back to the catheterization laboratory to facilitate discussion during the procedure. A successful communication hookup was obtained in 39 (98%) of 40 cases. Measurements of angiographic parameters were very close between the original cinefilm and the transmitted images. Quantitative analysis of the ultrasound images showed no significant difference in any of the diameter or cross-sectional area measurements between the original ultrasound tape and the transmitted images. The telecommunication link during the interventional procedures had a significant impact in 23 (58%) of 40 cases, affecting the area to be treated, the size of the inflation balloon, recognition of stent underdeployment, or the existence of disease in other areas that was not noted on the original studies. Current video telecommunication systems provide high-quality images on-line with accurate representation of cineangiograms and intravascular ultrasound images. This system had a significant impact on 58% of the cases in this small clinical trial. Telecommunication networks between hospitals and a central core laboratory may facilitate physician training and improve technical skills and judgment during interventional procedures. This project has implications for how multicenter clinical trials could be operated through telecommunication networks to ensure conformity with the protocol.

  3. Imaging systems and algorithms to analyze biological samples in real-time using mobile phone microscopy

    PubMed Central

    Mayberry, Addison; Perkins, David L.; Holcomb, Daniel E.

    2018-01-01

    Miniaturized imaging devices have pushed the boundaries of point-of-care imaging, but existing mobile-phone-based imaging systems do not exploit the full potential of smart phones. This work demonstrates the use of simple imaging configurations to deliver superior image quality and the ability to handle a wide range of biological samples. Results presented in this work are from analysis of fluorescent beads under fluorescence imaging, as well as helminth eggs and freshwater mussel larvae under white light imaging. To demonstrate versatility of the systems, real time analysis and post-processing results of the sample count and sample size are presented in both still images and videos of flowing samples. PMID:29509786

  4. Achieving real-time capsule endoscopy (CE) video visualization through panoramic imaging

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Xie, Jean; Mui, Peter; Leighton, Jonathan A.

    2013-02-01

    In this paper, we present a novel real-time capsule endoscopy (CE) video visualization concept based on panoramic imaging. Typical CE videos run about 8 hours and are manually reviewed by physicians to locate diseases such as bleeding and polyps. To date, there is no commercially available tool capable of providing stabilized and processed CE video that is easy to analyze in real time, so the burden on physicians' disease-finding efforts is considerable. In fact, since the CE camera sensor has a limited forward-looking view and a low image frame rate (typically 2 frames per second), and captures very close-range imaging of the GI tract surface, it is no surprise that traditional visualization methods based on tracking and registration often fail. This paper presents a novel concept for real-time CE video stabilization and display. Instead of working directly on traditional forward-looking FOV (field of view) images, we work on panoramic images to bypass many problems facing traditional imaging modalities. Methods for panoramic image generation based on optical lens principles, leading to real-time data visualization, are presented. In addition, non-rigid panoramic image registration methods are discussed.

  5. Automatic digital image analysis for identification of mitotic cells in synchronous mammalian cell cultures.

    PubMed

    Eccles, B A; Klevecz, R R

    1986-06-01

    Mitotic frequency in a synchronous culture of mammalian cells was determined fully automatically and in real time using low-intensity phase-contrast microscopy and a Newvicon video camera connected to an EyeCom III image processor. Image samples, at a frequency of one per minute for 50 hours, were analyzed by first extracting the high-frequency picture components, then thresholding and probing for annular objects indicative of putative mitotic cells. Both the extraction of high-frequency components and the recognition of rings of varying radii and discontinuities employed novel algorithms. Spatial and temporal relationships between annuli were examined to discern the occurrences of mitoses, and such events were recorded in a computer data file. At present, the automatic analysis is suited for random cell proliferation rate measurements or cell cycle studies. The automatic identification of mitotic cells as described here provides a measure of the average proliferative activity of the cell population as a whole and eliminates more than eight hours of manual review per time-lapse video recording.
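
    The ring-recognition algorithms in the paper are novel and not reproduced here; as a hedged illustration, a broadly similar pipeline can be sketched with a Gaussian high-pass, Otsu thresholding, and a circular Hough transform in OpenCV (all stand-ins and thresholds are assumptions):

      import cv2

      def candidate_mitotic_rings(phase_img, min_r=8, max_r=25):
          """Flag bright annular objects in an 8-bit phase-contrast frame.

          Hough circles stand in for the paper's ring-probing algorithm;
          all thresholds below are illustrative assumptions.
          """
          # Extract high-frequency components with an unsharp-style high-pass
          hp = cv2.subtract(phase_img, cv2.GaussianBlur(phase_img, (31, 31), 0))
          _, mask = cv2.threshold(hp, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1,
                                     minDist=2 * min_r, param1=100, param2=15,
                                     minRadius=min_r, maxRadius=max_r)
          return [] if circles is None else circles[0].tolist()  # (x, y, r)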

  6. Computer-aided light sheet flow visualization using photogrammetry

    NASA Technical Reports Server (NTRS)

    Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.

    1994-01-01

    A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and a visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) results, was chosen to interactively display the reconstructed light sheet images with the numerical surface geometry for the model or aircraft under study. The photogrammetric reconstruction technique and the image processing and computer graphics techniques and equipment are described. Results of the computer-aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images with CFD solutions in the same graphics environment is also demonstrated.
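
    The projection step at the heart of the reconstruction can be sketched as a ray-plane intersection under a pinhole camera model; the variable names and conventions are assumptions for illustration, not the NASA implementation.

      import numpy as np

      def backproject_to_light_sheet(K, R, t, pixel, n, d):
          """Intersect the camera ray through `pixel` with the light-sheet
          plane n.X + d = 0 (world coordinates).

          K : 3x3 intrinsics; R, t : world-to-camera rotation/translation.
          """
          ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
          ray_world = R.T @ ray_cam          # ray direction in world frame
          origin = -R.T @ t                  # camera centre in world frame
          s = -(n @ origin + d) / (n @ ray_world)
          return origin + s * ray_world      # 3-D point on the light sheet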

  7. Computer-Aided Light Sheet Flow Visualization

    NASA Technical Reports Server (NTRS)

    Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.

    1993-01-01

    A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) data sets, was chosen to interactively display the reconstructed light sheet images, along with the numerical surface geometry for the model or aircraft under study. A description is provided of the photogrammetric reconstruction technique, and the image processing and computer graphics techniques and equipment. Results of the computer aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images and CFD solutions in the same graphics environment is also demonstrated.

  8. Computer-aided light sheet flow visualization

    NASA Technical Reports Server (NTRS)

    Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.

    1993-01-01

    A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) data sets, was chosen to interactively display the reconstructed light sheet images, along with the numerical surface geometry for the model or aircraft under study. A description is provided of the photogrammetric reconstruction technique, and the image processing and computer graphics techniques and equipment. Results of the computer aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images and CFD solutions in the same graphics environment is also demonstrated.

  9. Non-contact Real-time heart rate measurements based on high speed circuit technology research

    NASA Astrophysics Data System (ADS)

    Wu, Jizhe; Liu, Xiaohua; Kong, Lingqin; Shi, Cong; Liu, Ming; Hui, Mei; Dong, Liquan; Zhao, Yuejin

    2015-08-01

    In recent years, the morbidity and mortality of cardiovascular and cerebrovascular diseases, which greatly threaten human health, have increased year by year. Heart rate is an important index of these diseases. To address this, the paper puts forward a non-contact heart rate measurement method with a simple structure and easy operation, suitable for daily monitoring of large populations. In this method we use imaging equipment to video sensitive skin areas; changes in blood volume cause changes in reflected light intensity, which appear in the average grayscale of the image. We video the face, which includes the sensitive regions of interest (ROI), and use a high-speed processing circuit to save the video in AVI format to memory. After processing the whole video over a period of time, we plot the curve of each color channel with frame number as the horizontal axis, and derive the heart rate from the curves. We use independent component analysis (ICA) to suppress noise from motion interference, realizing accurate extraction of the heart rate signal under motion. We design an algorithm, based on the high-speed processing circuit, for face recognition and tracking that automatically locates the face region. We average the grayscale of the recognized region to obtain three RGB curves, extract a clearer pulse wave curve through independent component analysis, and then obtain the heart rate under motion. Finally, comparing our system with a fingertip pulse oximeter shows that the system achieves an accurate measurement, with an error of less than 3 beats per minute.
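
    A minimal sketch of the described signal chain, assuming the per-frame RGB means of the face ROI have already been extracted; scikit-learn's FastICA is an assumed stand-in for the paper's ICA implementation.

      import numpy as np
      from sklearn.decomposition import FastICA

      def heart_rate_bpm(rgb_means, fps):
          """Heart rate from per-frame mean RGB of the face ROI.

          rgb_means : (n_frames, 3) array of channel averages
          fps       : video frame rate
          """
          x = rgb_means - rgb_means.mean(axis=0)     # detrend each channel
          x /= x.std(axis=0)
          sources = FastICA(n_components=3, random_state=0).fit_transform(x)
          freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
          band = (freqs > 0.75) & (freqs < 4.0)      # 45-240 beats per minute
          best_bpm, best_power = 0.0, -np.inf
          for s in sources.T:
              spectrum = np.abs(np.fft.rfft(s)) ** 2
              k = band.nonzero()[0][np.argmax(spectrum[band])]
              if spectrum[k] > best_power:           # strongest in-band peak
                  best_power, best_bpm = spectrum[k], freqs[k] * 60.0
          return best_bpm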

  10. Particle detection, number estimation, and feature measurement in gene transfer studies: optical fractionator stereology integrated with digital image processing and analysis.

    PubMed

    King, Michael A; Scotty, Nicole; Klein, Ronald L; Meyer, Edwin M

    2002-10-01

    Assessing the efficacy of in vivo gene transfer often requires a quantitative determination of the number, size, shape, or histological visualization characteristics of biological objects. The optical fractionator has become a choice stereological method for estimating the number of objects, such as neurons, in a structure, such as a brain subregion. Digital image processing and analytic methods can increase detection sensitivity and quantify structural and/or spectral features located in histological specimens. We describe a hardware and software system that we have developed for conducting the optical fractionator process. A microscope equipped with a video camera and motorized stage and focus controls is interfaced with a desktop computer. The computer contains a combination live video/computer graphics adapter with a video frame grabber and controls the stage, focus, and video via a commercial imaging software package. Specialized macro programs have been constructed with this software to execute command sequences requisite to the optical fractionator method: defining regions of interest, positioning specimens in a systematic uniform random manner, and stepping through known volumes of tissue for interactive object identification (optical dissectors). The system affords the flexibility to work with count regions that exceed the microscope image field size at low magnifications and to adjust the parameters of the fractionator sampling to best match the demands of particular specimens and object types. Digital image processing can be used to facilitate object detection and identification, and objects that meet criteria for counting can be analyzed for a variety of morphometric and optical properties. Copyright 2002 Elsevier Science (USA)
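
    The systematic uniform random positioning step described above can be sketched as a regular grid with one random offset; this is a generic illustration of the sampling scheme, not the authors' macro code.

      import random

      def surs_positions(width, height, step_x, step_y):
          """Systematic uniform random sampling of dissector positions:
          a regular grid with one random offset covering the count region."""
          x0 = random.uniform(0, step_x)
          y0 = random.uniform(0, step_y)
          xs = [x0 + i * step_x for i in range(int((width - x0) // step_x) + 1)]
          ys = [y0 + j * step_y for j in range(int((height - y0) // step_y) + 1)]
          return [(x, y) for y in ys for x in xs]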

  11. A manual for microcomputer image analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rich, P.M.; Ranken, D.M.; George, J.S.

    1989-12-01

    This manual is intended to serve three basic purposes: as a primer in microcomputer image analysis theory and techniques, as a guide to the use of IMAGE{copyright}, a public domain microcomputer program for image analysis, and as a stimulus to encourage programmers to develop microcomputer software suited for scientific use. Topics discussed include the principles of image processing and analysis, use of standard video for input and display, spatial measurement techniques, and the future of microcomputer image analysis. A complete reference guide that lists the commands for IMAGE is provided. IMAGE includes capabilities for digitization, input and output of images, hardware display lookup table control, editing, edge detection, histogram calculation, measurement along lines and curves, measurement of areas, examination of intensity values, output of analytical results, conversion between raster and vector formats, and region movement and rescaling. The control structure of IMAGE emphasizes efficiency, precision of measurement, and scientific utility. 18 refs., 18 figs., 2 tabs.

  12. Collection and Analysis of Crowd Data with Aerial, Rooftop, and Ground Views

    DTIC Science & Technology

    2014-11-10

    ...collected these datasets using different aircraft. The Erista 8 HL OctaCopter is a heavy-lift aerial platform capable of carrying high-resolution cinema-grade cameras. ... The Blackmagic Production Camera is another high-resolution, cinema-grade camera capable of capturing 4K video at 30 frames per second, used for crowd counting with 4K cameras.

  13. Contact-free heart rate measurement using multiple video data

    NASA Astrophysics Data System (ADS)

    Hung, Pang-Chan; Lee, Kual-Zheng; Tsai, Luo-Wei

    2013-10-01

    In this paper, we propose a contact-free heart rate measurement method that analyzes sequential images from multiple video streams. In the proposed method, skin-like pixels are first detected in each video stream to extract color features. These color features are synchronized and analyzed by independent component analysis. A representative component is finally selected among the independent component candidates to measure the heart rate, achieving under 2% deviation on average compared with a pulse oximeter in a controlled environment. The advantages of the proposed method include: 1) it uses low-cost, highly accessible camera devices; 2) it eases users' discomfort through contact-free measurement; and 3) it achieves a low error rate and high stability by integrating multiple video streams.

  14. Still-to-video face recognition in unconstrained environments

    NASA Astrophysics Data System (ADS)

    Wang, Haoyu; Liu, Changsong; Ding, Xiaoqing

    2015-02-01

    Face images from video sequences captured in unconstrained environments usually contain several kinds of variations, e.g. pose, facial expression, illumination, image resolution and occlusion. Motion blur and compression artifacts also deteriorate recognition performance. Besides, in various practical systems such as law enforcement, video surveillance and e-passport identification, only a single still image per person is enrolled as the gallery set. Many existing methods may fail to work due to variations in face appearance and the limited number of available gallery samples. In this paper, we propose a novel approach for still-to-video face recognition in unconstrained environments. By assuming that faces from still images and video frames share the same identity space, a regularized least squares regression method is utilized to tackle the multi-modality problem. Regularization terms based on heuristic assumptions are employed to avoid overfitting. In order to deal with the single-image-per-person problem, we exploit face variations learned from training sets to synthesize virtual samples for gallery samples. We adopt a learning algorithm combining both affine/convex hull-based approaches and regularizations to match image sets. Experimental results on a real-world dataset consisting of unconstrained video sequences demonstrate that our method outperforms the state-of-the-art methods impressively.
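
    As a hedged sketch of the regularized least squares idea, the snippet below codes a probe frame over the gallery with a ridge penalty and picks the identity with the smallest reconstruction residual; this is a collaborative-representation-style simplification, not the authors' full method with synthesized virtual samples and hull-based set matching.

      import numpy as np

      def rls_identify(gallery, probe, lam=0.1):
          """Regularised least squares coding of a probe over the gallery.

          gallery : (d, n) matrix, one column per enrolled still image
          probe   : (d,) feature vector from a video frame
          Returns the index of the identity with the smallest class-wise
          reconstruction residual.
          """
          G = gallery
          coef = np.linalg.solve(G.T @ G + lam * np.eye(G.shape[1]), G.T @ probe)
          residuals = [np.linalg.norm(probe - coef[i] * G[:, i])
                       for i in range(G.shape[1])]
          return int(np.argmin(residuals))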

  15. Quantitative Spatial and Temporal Analysis of Fluorescein Angiography Dynamics in the Eye

    PubMed Central

    Hui, Flora; Nguyen, Christine T. O.; Bedggood, Phillip A.; He, Zheng; Fish, Rebecca L.; Gurrell, Rachel; Vingrys, Algis J.; Bui, Bang V.

    2014-01-01

    Purpose We describe a novel approach to analyzing fluorescein angiography to investigate fluorescein flow dynamics in the rat posterior retina, as well as to identify abnormal areas following laser photocoagulation. Methods Experiments were undertaken in adult Long Evans rats. Using a rodent retinal camera, videos were acquired at 30 frames per second for 30 seconds following intravenous introduction of sodium fluorescein in a group of control animals (n = 14). Videos were image-registered and analyzed using principal component analysis across all pixels in the field. This returns fluorescence intensity profiles from which we extract the half-rise (time to 50% brightness), the half-fall (time for 50% decay), and the offset (plateau level of fluorescence). We applied this analysis to video fluorescein angiography data collected 30 minutes following laser photocoagulation in a separate group of rats (n = 7). Results Pixel-by-pixel analysis of video angiography clearly delineates differences in the temporal profiles of arteries, veins and capillaries in the posterior retina. We find no difference in half-rise, half-fall or offset amongst the four quadrants (inferior, nasal, superior, temporal). We also found little difference with eccentricity. By expressing the parameters at each pixel as a function of the number of standard deviations from the average of the entire field, we could clearly identify the spatial extent of the laser injury. Conclusions This simple registration and analysis provides a way to monitor the size of vascular injury, to highlight areas of subtle vascular leakage and to quantify vascular dynamics not possible using current fluorescein angiography approaches. This can be applied in both laboratory and clinical settings for in vivo dynamic fluorescent imaging of vasculature. PMID:25365578
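
    Extracting the reported timing parameters from one pixel's registered intensity profile might look like the following sketch; the plateau window and threshold-crossing conventions are assumptions, not the authors' exact definitions.

      import numpy as np

      def profile_timing(trace, fps, plateau_frac=0.1):
          """Half-rise / half-fall times and plateau offset of one pixel's
          fluorescence intensity profile."""
          t = np.arange(len(trace)) / fps
          peak = int(np.argmax(trace))
          base = float(trace[:peak + 1].min())
          # Plateau ("offset"): mean of the final fraction of the recording
          offset = float(trace[-max(1, int(len(trace) * plateau_frac)):].mean())
          half_rise_level = base + 0.5 * (trace[peak] - base)
          half_fall_level = offset + 0.5 * (trace[peak] - offset)
          half_rise = t[np.argmax(trace[:peak + 1] >= half_rise_level)]
          half_fall = t[peak + np.argmax(trace[peak:] <= half_fall_level)]
          return half_rise, half_fall, offset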

  16. Processing Ocean Images to Detect Large Drift Nets

    NASA Technical Reports Server (NTRS)

    Veenstra, Tim

    2009-01-01

    A computer program processes the digitized outputs of a set of downward-looking video cameras aboard an aircraft flying over the ocean. The purpose served by this software is to facilitate the detection of large drift nets that have been lost, abandoned, or jettisoned. The development of this software and of the associated imaging hardware is part of a larger effort to develop means of detecting and removing large drift nets before they cause further environmental damage to the ocean and to shores on which they sometimes impinge. The software is capable of near-realtime processing of as many as three video feeds at a rate of 30 frames per second. After a user sets the parameters of an adjustable algorithm, the software analyzes each video stream, detects any anomaly, issues a command to point a high-resolution camera toward the location of the anomaly, and, once the camera has been so aimed, issues a command to trigger the camera shutter. The resulting high-resolution image is digitized, and the resulting data are automatically uploaded to the operator's computer for analysis.

  17. Human visual system-based smoking event detection

    NASA Astrophysics Data System (ADS)

    Odetallah, Amjad D.; Agaian, Sos S.

    2012-06-01

    Human action (e.g. smoking, eating, and phoning) analysis is an important task in various application domains like video surveillance, video retrieval, human-computer interaction systems, and so on. Smoke detection is a crucial task in many video surveillance applications and could have a great impact on raising the level of safety in urban areas, public parks, airplanes, hospitals, schools and others. The detection task is challenging since there is no prior knowledge about the object's shape, texture and color. In addition, its visual features change under different lighting and weather conditions. This paper presents a new scheme for a system that detects human smoking events, or small smoke, in a sequence of images. In the developed system, motion detection and background subtraction are combined with motion-region saving, skin-based image segmentation, and smoke-based image segmentation to capture potential smoke regions, which are further analyzed to decide on the occurrence of smoking events. Experimental results show the effectiveness of the proposed approach. The developed method is capable of detecting small smoking events of uncertain actions with various cigarette sizes, colors, and shapes.
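
    The motion-detection and background-subtraction front end could be sketched with OpenCV as below; MOG2 and the morphological clean-up are assumed stand-ins for the authors' exact scheme, and the thresholds are illustrative.

      import cv2

      def motion_regions(video_path, min_area=200):
          """Per-frame candidate motion regions via background subtraction."""
          cap = cv2.VideoCapture(video_path)
          bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25,
                                                  detectShadows=False)
          kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
          boxes_per_frame = []
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              mask = bg.apply(frame)
              # Remove speckle noise before extracting connected regions
              mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
              contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                             cv2.CHAIN_APPROX_SIMPLE)
              boxes_per_frame.append([cv2.boundingRect(c) for c in contours
                                      if cv2.contourArea(c) >= min_area])
          cap.release()
          return boxes_per_frame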

  18. Automated Video Quality Assessment for Deep-Sea Video

    NASA Astrophysics Data System (ADS)

    Pirenne, B.; Hoeberechts, M.; Kalmbach, A.; Sadhu, T.; Branzan Albu, A.; Glotin, H.; Jeffries, M. A.; Bui, A. O. V.

    2015-12-01

    Video provides a rich source of data for geophysical analysis, often supplying detailed information about the environment when other instruments may not. This is especially true of deep-sea environments, where direct visual observations cannot be made. As computer vision techniques improve and volumes of video data increase, automated video analysis is emerging as a practical alternative to labor-intensive manual analysis. Automated techniques can be much more sensitive to video quality than their manual counterparts, so performing quality assessment before doing full analysis is critical to producing valid results. Ocean Networks Canada (ONC), an initiative of the University of Victoria, operates cabled ocean observatories that supply continuous power and Internet connectivity to a broad suite of subsea instruments from the coast to the deep sea, including video and still cameras. This network of ocean observatories has produced almost 20,000 hours of video (about 38 hours are recorded each day) and an additional 8,000 hours of logs from remotely operated vehicle (ROV) dives. We begin by surveying some ways in which deep-sea video poses challenges for automated analysis, including: (1) non-uniform lighting: single, directional light sources produce uneven luminance distributions and shadows, and remotely operated lighting equipment is also susceptible to technical failures; (2) particulate noise: turbidity and marine snow are often present in underwater video, and particles in the water column can have sharper focus and higher contrast than the objects of interest due to their proximity to the light source, and can also influence the camera's autofocus and auto white-balance routines; (3) color distortion (low contrast): the rate of absorption of light in water varies by wavelength, and is higher overall than in air, altering apparent colors and lowering the contrast of objects at a distance. We also describe measures under development at ONC for detecting and mitigating these effects. These steps include filtering out unusable data, color and luminance balancing, and choosing the most appropriate image descriptors. We apply these techniques to generate automated quality assessment of video data and illustrate their utility with an example application where we perform vision-based substrate classification.
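
    Two of the simplest screens implied above, failed or uneven lighting and low contrast, can be sketched as per-frame statistics; the thresholds below are illustrative assumptions, not ONC's operational values.

      import numpy as np

      def frame_quality_flags(gray, dark_frac=0.5, min_contrast=20.0):
          """Cheap quality screens for an 8-bit grayscale deep-sea frame.

          Flags frames that are mostly dark (failed/uneven lighting) or
          low-contrast (turbidity, colour attenuation)."""
          too_dark = (gray < 30).mean() > dark_frac   # fraction of dark pixels
          rms_contrast = float(gray.std())            # global RMS contrast
          return {"too_dark": bool(too_dark),
                  "low_contrast": rms_contrast < min_contrast,
                  "rms_contrast": rms_contrast}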

  19. Examining the effect of task on viewing behavior in videos using saliency maps

    NASA Astrophysics Data System (ADS)

    Alers, Hani; Redi, Judith A.; Heynderickx, Ingrid

    2012-03-01

    Research has shown that when viewing still images, people will look at these images in a different manner if instructed to evaluate their quality. They will tend to focus less on the main features of the image and, instead, scan the entire image area looking for clues for its level of quality. It is questionable, however, whether this finding can be extended to videos considering their dynamic nature. One can argue that when watching a video the viewer will always focus on the dynamically changing features of the video regardless of the given task. To test whether this is true, an experiment was conducted where half of the participants viewed videos with the task of quality evaluation while the other half were simply told to watch the videos as if they were watching a movie on TV or a video downloaded from the internet. The videos contained content which was degraded with compression artifacts over a wide range of quality. An eye tracking device was used to record the viewing behavior in both conditions. By comparing the behavior during each task, it was possible to observe a systematic difference in the viewing behavior which seemed to correlate to the quality of the videos.

  20. Research on compression performance of ultrahigh-definition videos

    NASA Astrophysics Data System (ADS)

    Li, Xiangqun; He, Xiaohai; Qing, Linbo; Tao, Qingchuan; Wu, Di

    2017-11-01

    With the popularization of high-definition (HD) images and videos (1920×1080 pixels and above), there are now even 4K (3840×2160) television signals and 8K (8192×4320) ultrahigh-definition videos. The demand for HD images and videos is increasing continuously, along with the data volume, and storage and transmission cannot be properly addressed merely by expanding hard disk capacity and upgrading transmission devices. Based on full use of the high-efficiency video coding (HEVC) standard, super-resolution reconstruction technology, and the correlation between intra- and interprediction, we first put forward a "division-compensation"-based strategy to further improve the compression performance of a single image and frame I. Then, using this idea together with the HEVC encoder and decoder, a video compression coding framework is designed, with HEVC used inside the framework. Last, with super-resolution reconstruction technology, the reconstructed video quality is further improved. Experiments show that the performance of the proposed compression method for a single image (frame I) and for video sequences is superior to that of HEVC in a low bit rate environment.

  1. Deep Sea Gazing: Making Ship-Based Research Aboard RV Falkor Relevant and Accessible

    NASA Astrophysics Data System (ADS)

    Wiener, C.; Zykov, V.; Miller, A.; Pace, L. J.; Ferrini, V. L.; Friedman, A.

    2016-02-01

    Schmidt Ocean Institute (SOI) is a private, non-profit operating foundation established to advance the understanding of the world's oceans through technological advancement, intelligent observation, and open sharing of information. Our research vessel Falkor provides ship time to selected scientists and supports a wide range of scientific functions, including ROV operations with live streaming capabilities. Since 2013, SOI has live streamed 55 ROV dives in high definition and recorded them onto YouTube. This has totaled over 327 hours of video, which received 1,450,461 views in 2014. SOI is one of the only research programs that makes its entire dive series available online, creating a rich collection of video data sets. In doing this, we provide an opportunity for scientists to make new discoveries in the video data that may have been missed earlier. These data sets are also available to students, allowing them to engage with real data in the classroom. SOI's video collection is also being used in a newly developed video management system, Ocean Video Lab. Telepresence-enabled research is an important component of Falkor cruises, as exemplified by several that were conducted in 2015. This presentation will share a few case studies, including an image tagging citizen science project conducted through the Squidle interface in partnership with the Australian Centre for Field Robotics. Using real-time image data collected in the Timor Sea, numerous shore-based citizens created seafloor image tags that could be used by machine learning algorithms on Falkor's high performance computer (HPC) to accomplish habitat characterization. With the use of the HPC system, real-time robot tracking, image tagging, and other outreach connections were made possible, allowing scientists on board to engage with the public and build their knowledge base. The above examples will be used to demonstrate the benefits of remote data analysis and participatory engagement in science-based telepresence.

  2. Self-Image--Alien Image: A Bilateral Video Project.

    ERIC Educational Resources Information Center

    Kracsay, Susanne

    1995-01-01

    Describes a project in which Austrian and Hungarian students learned how people see each other by creating video pictures and letters of their neighbors (alien images) that were returned with corrections (self-images). Discussion includes student critiques, impressions, and misconceptions. (AEF)

  3. High-Definition Television (HDTV) Images for Earth Observations and Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Robinson, Julie A.; Holland, S. Douglas; Runco, Susan K.; Pitts, David E.; Whitehead, Victor S.; Andrefouet, Serge M.

    2000-01-01

    As part of Detailed Test Objective 700-17A, astronauts acquired Earth observation images from orbit using a high-definition television (HDTV) camcorder. Here we provide a summary of qualitative findings following completion of tests during missions STS (Space Transportation System)-93 and STS-99. We compared HDTV still images to images taken using payload bay video cameras, a Hasselblad film camera, and an electronic still camera. We also evaluated the potential for motion video observations of changes in sunlight and the use of multi-aspect viewing to image aerosols. Spatial resolution and color quality are far superior in HDTV images compared to National Television Systems Committee (NTSC) video images. Thus, HDTV provides the first viable option for video-based remote sensing observations of Earth from orbit. Although under ideal conditions HDTV images have less spatial resolution than medium-format film cameras, such as the Hasselblad, under some conditions on orbit the HDTV images acquired compared favorably with the Hasselblad. Of particular note was the quality of color reproduction in the HDTV images. HDTV and electronic still camera (ESC) images were not compared with matched fields of view, and so spatial resolution could not be compared for the two image types. However, the color reproduction of the HDTV stills was truer than the colors in the ESC images. As HDTV becomes the operational video standard for Space Shuttle and Space Station, it has great potential as a source of Earth-observation data. Planning for the conversion from NTSC to HDTV video standards should include planning for Earth data archiving and distribution.

  4. 2D-pattern matching image and video compression: theory, algorithms, and experiments.

    PubMed

    Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth

    2002-01-01

    In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.
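
    The basic primitive, locating an approximate 2-D match of a block within a search window under a maximum distortion bound, can be sketched naively as below; the real scheme uses k-d trees and adaptive distortion levels for speed, so this brute-force version is purely illustrative.

      import numpy as np

      def best_approximate_match(window, block, max_distortion=10.0):
          """Brute-force approximate 2-D pattern match.

          Slides `block` over `window` and returns the offset with the lowest
          mean absolute error, provided it is within `max_distortion`;
          otherwise returns (None, None). Arrays are float grayscale.
          """
          bh, bw = block.shape
          wh, ww = window.shape
          best, best_err = None, np.inf
          for i in range(wh - bh + 1):
              for j in range(ww - bw + 1):
                  err = np.abs(window[i:i + bh, j:j + bw] - block).mean()
                  if err < best_err:
                      best, best_err = (i, j), err
          return (best, best_err) if best_err <= max_distortion else (None, None)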

  5. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, where an object is in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from the images of multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  6. An ASIC-chip for stereoscopic depth analysis in video-real-time based on visual cortical cell behavior.

    PubMed

    Wörgötter, F

    1999-10-01

    In a stereoscopic system, both eyes or cameras have a slightly different view. As a consequence, small variations between the projected images exist ("disparities"), which are spatially evaluated in order to retrieve depth information. We show that two related algorithmic versions can be designed which recover disparity. Both approaches are based on the comparison of filter outputs from filtering the left and the right image. The difference of the phase components between left and right filter responses encodes the disparity. One approach uses regular Gabor filters and computes the spatial phase differences in a conventional way, as described already in 1988 by Sanger. Novel to this approach, however, is that we formulate it in a way which is fully compatible with neural operations in the visual cortex. The second approach uses the apparently paradoxical similarity between the analysis of visual disparities and the determination of the azimuth of a sound source. Animals determine the direction of a sound from the temporal delay between the left and right ear signals. Similarly, in our second approach we transpose the spatially defined problem of disparity analysis into the temporal domain and utilize two resonators, implemented in the form of causal (electronic) filters, to determine the disparity as local temporal phase differences between the left and right filter responses. This approach permits video real-time analysis of stereo image sequences (see movies at http://www.neurop.ruhr-uni-bochum.de/Real-Time-Stereo), and an FPGA-based PC board has been developed which performs stereo analysis at full PAL resolution in video real-time. An ASIC chip will be available in March 2000.
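
    A one-dimensional sketch of the first approach follows: disparity is recovered as the local phase difference between left and right Gabor filter responses. The filter parameters are illustrative, and the estimate is only valid for shifts within about half the filter wavelength.

      import numpy as np

      def phase_disparity(left_row, right_row, wavelength=8.0, sigma=4.0):
          """Disparity along one scanline from Gabor phase differences."""
          n = int(4 * sigma) | 1                     # odd filter length
          x = np.arange(n) - n // 2
          gabor = (np.exp(-x**2 / (2 * sigma**2))
                   * np.exp(2j * np.pi * x / wavelength))
          rl = np.convolve(left_row, gabor, mode='same')
          rr = np.convolve(right_row, gabor, mode='same')
          dphi = np.angle(rl * np.conj(rr))          # wrapped phase difference
          return dphi * wavelength / (2 * np.pi)     # shift in pixels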

  7. SMART USE OF COMPUTER-AIDED SPERM ANALYSIS (CASA) TO CHARACTERIZE SPERM MOTION

    EPA Science Inventory

    Computer-aided sperm analysis (CASA) has evolved over the past fifteen years to provide an objective, practical means of measuring and characterizing the velocity and parttern of sperm motion. CASA instruments use video frame-grabber boards to capture multiple images of spermato...

  8. Applying Image Matching to Video Analysis

    DTIC Science & Technology

    2010-09-01

    ...The image groups, classified by the background scene, are the flag, the kitchen, the telephone, the bookshelf, the title screen, and two maps. Scene image counts: Kitchen 136; Telephone 3; Bookshelf 81; Title Screen 10; Map 1 24; Map 2 16. ... This implementation of a Bloom filter uses two arbitrary... with the Bookshelf images. This scene is a much closer shot than the Kitchen scene, so the host occupies much of the background. Algorithms for face...

  9. Heterogeneity image patch index and its application to consumer video summarization.

    PubMed

    Dang, Chinh T; Radha, Hayder

    2014-06-01

    Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min-max based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
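
    A hedged illustration of an entropy-based patch-heterogeneity measure appears below; it averages per-patch histogram entropies over a frame and is not the authors' exact HIP definition.

      import numpy as np

      def mean_patch_entropy(gray, patch=16, bins=32):
          """Average Shannon entropy of intensity histograms over
          non-overlapping patches of an 8-bit grayscale frame."""
          h, w = gray.shape
          entropies = []
          for i in range(0, h - patch + 1, patch):
              for j in range(0, w - patch + 1, patch):
                  hist, _ = np.histogram(gray[i:i + patch, j:j + patch],
                                         bins=bins, range=(0, 256))
                  p = hist[hist > 0] / hist.sum()
                  entropies.append(float(-(p * np.log2(p)).sum()))
          return float(np.mean(entropies))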

  10. Video stereo-laparoscopy system

    NASA Astrophysics Data System (ADS)

    Xiang, Yang; Hu, Jiasheng; Jiang, Huilin

    2006-01-01

    Minimally invasive surgery (MIS) has contributed significantly to patient care by reducing the morbidity associated with more invasive procedures. MIS procedures have become standard treatment for gallbladder disease and some abdominal malignancies. The imaging system has played a major role in the evolving field of MIS. The image needs good resolution and large magnification, and especially needs to provide a depth cue while remaining flicker-free and suitably bright. A video stereo-laparoscopy system can meet these demands of the doctors. This paper introduces a 3D video laparoscope with the following characteristics: field frequency 100 Hz, depth space 150 mm, resolution 10 lp/mm. The working principle of the system is introduced in detail, and the optical system and time-division stereo-display system are described briefly. The system's focusing lens images onto the CCD chip, converting the optical signal into a video signal, which is digitized by the image processing system and displayed as polarized images on the monitor screen through liquid crystal shutters. Doctors wearing polarized glasses can watch a flicker-free 3D image of the tissue or organ. The 3D video laparoscope system has been applied in the MIS field and praised by doctors. In contrast to the traditional 2D video laparoscopy system, it has merits such as reducing the time of surgery, reducing problems during surgery, and reducing training time.

  11. A new colorimetrically-calibrated automated video-imaging protocol for day-night fish counting at the OBSEA coastal cabled observatory.

    PubMed

    del Río, Joaquín; Aguzzi, Jacopo; Costa, Corrado; Menesatti, Paolo; Sbragaglia, Valerio; Nogueras, Marc; Sarda, Francesc; Manuèl, Antoni

    2013-10-30

    Field measurements of the swimming activity rhythms of fishes are scant due to the difficulty of counting individuals at a high frequency over a long period of time. Cabled observatory video monitoring allows such sampling at a high frequency over unlimited periods of time. Unfortunately, automation of the extraction of biological information (i.e., animals' visual counts per unit of time) is still a major bottleneck. In this study, we describe a new automated video-imaging protocol for the 24-h continuous counting of fishes in colorimetrically calibrated time-lapse photographic outputs, taken by a shallow water (20 m depth) cabled video-platform, the OBSEA. The spectral reflectance value for each patch was measured between 400 and 700 nm and then converted into standard RGB, used as a reference for all subsequent calibrations. All the images were acquired within a standardized Region Of Interest (ROI), represented by a 2 × 2 m methacrylate panel endowed with a 9-colour calibration chart, and calibrated using the recently implemented "3D Thin-Plate Spline" warping approach in order to numerically define colour by its coordinates in n-dimensional space. That operation was repeated on a subset of 500 images used as a training set, manually selected because they were acquired under optimum visibility conditions. All images plus those of the training set were ordered together through Principal Component Analysis, allowing the selection of 614 images (67.6%) out of a total of 908, corresponding to 18 days (at 30 min frequency). The Roberts operator (used in image processing and computer vision for edge detection) was used to highlight regions of high spatial colour gradient corresponding to fishes' bodies. Time series of manual and automated counts were compared for efficiency evaluation. Periodogram and waveform analysis outputs provided very similar results, although the quantified parameters relating to the strength of the respective rhythms differed. Results indicate that automation efficiency is limited by optimum visibility conditions. Data sets from manual counting show larger day-night fluctuations than those derived from automation. This comparison indicates that the automation protocol underestimates fish numbers but is nevertheless suitable for the study of community activity rhythms.

  12. A New Colorimetrically-Calibrated Automated Video-Imaging Protocol for Day-Night Fish Counting at the OBSEA Coastal Cabled Observatory

    PubMed Central

    del Río, Joaquín; Aguzzi, Jacopo; Costa, Corrado; Menesatti, Paolo; Sbragaglia, Valerio; Nogueras, Marc; Sarda, Francesc; Manuèl, Antoni

    2013-01-01

    Field measurements of the swimming activity rhythms of fishes are scant due to the difficulty of counting individuals at a high frequency over a long period of time. Cabled observatory video monitoring allows such sampling at a high frequency over unlimited periods of time. Unfortunately, automation of the extraction of biological information (i.e., animals' visual counts per unit of time) is still a major bottleneck. In this study, we describe a new automated video-imaging protocol for the 24-h continuous counting of fishes in colorimetrically calibrated time-lapse photographic outputs, taken by a shallow water (20 m depth) cabled video-platform, the OBSEA. The spectral reflectance value for each patch was measured between 400 and 700 nm and then converted into standard RGB, used as a reference for all subsequent calibrations. All the images were acquired within a standardized Region Of Interest (ROI), represented by a 2 × 2 m methacrylate panel endowed with a 9-colour calibration chart, and calibrated using the recently implemented "3D Thin-Plate Spline" warping approach in order to numerically define colour by its coordinates in n-dimensional space. That operation was repeated on a subset of 500 images used as a training set, manually selected because they were acquired under optimum visibility conditions. All images plus those of the training set were ordered together through Principal Component Analysis, allowing the selection of 614 images (67.6%) out of a total of 908, corresponding to 18 days (at 30 min frequency). The Roberts operator (used in image processing and computer vision for edge detection) was used to highlight regions of high spatial colour gradient corresponding to fishes' bodies. Time series of manual and automated counts were compared for efficiency evaluation. Periodogram and waveform analysis outputs provided very similar results, although the quantified parameters relating to the strength of the respective rhythms differed. Results indicate that automation efficiency is limited by optimum visibility conditions. Data sets from manual counting show larger day-night fluctuations than those derived from automation. This comparison indicates that the automation protocol underestimates fish numbers but is nevertheless suitable for the study of community activity rhythms. PMID:24177726
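
    The Roberts operator used in the protocol above is a 2x2 cross-gradient; a minimal implementation:

      import numpy as np

      def roberts_magnitude(gray):
          """Roberts cross gradient magnitude of a float grayscale image
          (output is one pixel smaller in each dimension)."""
          gx = gray[:-1, :-1] - gray[1:, 1:]
          gy = gray[:-1, 1:] - gray[1:, :-1]
          return np.hypot(gx, gy)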

  13. Multi-scale approaches for high-speed imaging and analysis of large neural populations

    PubMed Central

    Ahrens, Misha B.; Yuste, Rafael; Peterka, Darcy S.; Paninski, Liam

    2017-01-01

    Progress in modern neuroscience critically depends on our ability to observe the activity of large neuronal populations with cellular spatial and high temporal resolution. However, two bottlenecks constrain efforts towards fast imaging of large populations. First, the resulting large video data is challenging to analyze. Second, there is an explicit tradeoff between imaging speed, signal-to-noise, and field of view: with current recording technology we cannot image very large neuronal populations with simultaneously high spatial and temporal resolution. Here we describe multi-scale approaches for alleviating both of these bottlenecks. First, we show that spatial and temporal decimation techniques based on simple local averaging provide order-of-magnitude speedups in spatiotemporally demixing calcium video data into estimates of single-cell neural activity. Second, once the shapes of individual neurons have been identified at fine scale (e.g., after an initial phase of conventional imaging with standard temporal and spatial resolution), we find that the spatial/temporal resolution tradeoff shifts dramatically: after demixing we can accurately recover denoised fluorescence traces and deconvolved neural activity of each individual neuron from coarse scale data that has been spatially decimated by an order of magnitude. This offers a cheap method for compressing this large video data, and also implies that it is possible to either speed up imaging significantly, or to “zoom out” by a corresponding factor to image order-of-magnitude larger neuronal populations with minimal loss in accuracy or temporal resolution. PMID:28771570
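
    The decimation step described, simple local averaging in space and time, amounts to block-averaging the video tensor; a minimal sketch:

      import numpy as np

      def decimate_video(video, st=4, sx=4):
          """Block-average a (time, height, width) video by factors st
          (temporal) and sx (spatial); trailing remainders are cropped."""
          t, h, w = video.shape
          t2, h2, w2 = t // st, h // sx, w // sx
          v = video[:t2 * st, :h2 * sx, :w2 * sx].astype(float)
          return v.reshape(t2, st, h2, sx, w2, sx).mean(axis=(1, 3, 5))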

  14. Context indexing of digital cardiac ultrasound records in PACS

    NASA Astrophysics Data System (ADS)

    Lobodzinski, S. Suave; Meszaros, Georg N.

    1998-07-01

    Recent wide adoption of the DICOM 3.0 standard by ultrasound equipment vendors created a need for practical clinical implementations of cardiac imaging study visualization, management and archiving. DICOM 3.0 defines only a logical and physical format for exchanging image data (still images, video, patient and study demographics). All DICOM-compliant imaging studies must presently be archived on a 650 MB recordable compact disk. This is a severe limitation for ultrasound applications, where studies 3 to 10 minutes long are common practice. In addition, DICOM digital echocardiography objects require physiological signal indexing, content segmentation and characterization. Since DICOM 3.0 is an interchange standard only, it does not define how to store composite video objects in a database. The goal of this research was therefore to address the issues of efficient storage, retrieval and management of DICOM-compliant cardiac video studies in a distributed PACS environment. Our Web-based implementation has the advantage of accommodating both DICOM-defined entity-relation modules (equipment data, patient data, video format, etc.) in standard relational database tables and digital indexed video with its attributes in an object-relational database. The object-relational data model facilitates content indexing of full-motion cardiac imaging studies through bi-directional hyperlink generation that ties searchable video attributes and related objects to individual video frames in the temporal domain. Benefits realized from the use of bi-directionally hyperlinked data models in an object-relational database include: (1) real-time video indexing during image acquisition, (2) random access and frame-accurate instant playback of previously recorded full-motion imaging data, and (3) time savings from faster and more accurate access to data through multiple navigation mechanisms such as multidimensional queries on an index, queries on a hyperlink attribute, free search and browsing.

  15. The Portable Dynamic Fundus Instrument: Uses in telemedicine and research

    NASA Technical Reports Server (NTRS)

    Hunter, Norwood; Caputo, Michael; Billica, Roger; Taylor, Gerald; Gibson, C. Robert; Manuel, F. Keith; Mader, Thomas; Meehan, Richard

    1994-01-01

    For years ophthalmic photographs have been used to track the progression of many ocular diseases such as macular degeneration and glaucoma as well as the ocular manifestations of diabetes, hypertension, and hypoxia. In 1987 a project was initiated at the Johnson Space Center (JSC) to develop a means of monitoring retinal vascular caliber and intracranial pressure during space flight. To conduct telemedicine during space flight operations, retinal images would require real-time transmissions from space. Film-based images would not be useful during in-flight operations. Video technology is beneficial in flight because the images may be acquired, recorded, and transmitted to the ground for rapid computer digital image processing and analysis. The computer analysis techniques developed for this project detected vessel caliber changes as small as 3 percent. In the field of telemedicine, the Portable Dynamic Fundus Instrument demonstrates the concept and utility of a small, self-contained video funduscope. It was used to record retinal images during the Gulf War and to transmit retinal images from the Space Shuttle Columbia during STS-50. There are plans to utilize this device to provide a mobile ophthalmic screening service in rural Texas. In the fall of 1993 a medical team in Boulder, Colorado, will transmit real-time images of the retina during remote consultation and diagnosis. The research applications of this device include the capability of operating in remote locations or small, confined test areas. There has been interest shown utilizing retinal imaging during high-G centrifuge tests, high-altitude chamber tests, and aircraft flight tests. A new design plan has been developed to incorporate the video instrumentation into face-mounted goggle. This design would eliminate head restraint devices, thus allowing full maneuverability to the subjects. Further development of software programs will broaden the application of the Portable Dynamic Fundus Instrument in telemedicine and medical research.

  16. Complex Event Processing for Content-Based Text, Image, and Video Retrieval

    DTIC Science & Technology

    2016-06-01

    ARL-TR-7705, June 2016, US Army Research Laboratory: Complex Event Processing for Content-Based Text, Image, and Video Retrieval. Cited reference fragments include: Feldman R, Sanger J. The Text Mining Handbook: Advanced Approaches in Analyzing Unstructured Data. New York (NY): Wiley-Interscience; 2000.

  17. Analytical Tools for Cloudscope Ice Measurement

    NASA Technical Reports Server (NTRS)

    Arnott, W. Patrick

    1998-01-01

    The cloudscope is a ground or aircraft instrument for viewing ice crystals impacted on a sapphire window. It is essentially a simple optical microscope with an attached compact CCD video camera whose output is recorded on a Hi-8 mm video cassette recorder equipped with digital time and date recording capability. In aircraft operation the window is at a stagnation point of the flow, so adiabatic compression heats the window to sublimate the ice crystals so that later impacting crystals can be imaged as well. A film heater is used for ground-based operation to provide sublimation, and it can also be used to provide extra heat for aircraft operation. The compact video camera can be focused manually by the operator, and a beam splitter-miniature bulb combination provides illumination for night operation. Several shutter speeds are available to accommodate daytime illumination conditions with direct sunlight. The video images can be used directly to qualitatively assess the crystal content of cirrus clouds and contrails. Quantitative size spectra are obtained with the tools described in this report. Selected portions of the video images are digitized using a PCI-bus frame grabber to form a short movie segment or stack using NIH (National Institutes of Health) Image software with custom macros developed at DRI. The stack can be Fourier-transform filtered with custom, easy-to-design filters to reduce the most objectionable video artifacts. Particle quantification of each slice of the stack is performed using digital image analysis. Data recorded for each particle include particle number and centroid, frame number in the stack, particle area, perimeter, equivalent ellipse maximum and minimum radii, ellipse angle, and pixel number. Each valid particle in the stack is stamped with a unique number. This output can be used to obtain a semiquantitative appreciation of the crystal content. The particle information becomes the raw input for a subsequent program (FORTRAN) that synthesizes each slice and separates the new from the sublimating particles. The new particle information is used to generate quantitative particle concentration, area, and mass size spectra along with total concentration, solar extinction coefficient, and ice water content. This program directly creates output in HTML format for viewing with a web browser.
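
    The per-particle quantities recorded above (centroid, area, perimeter, equivalent-ellipse radii, angle) map directly onto standard region measurements. A hedged sketch using scikit-image, a modern stand-in for the NIH Image macros the report actually used:

```python
from skimage.measure import label, regionprops

def measure_particles(binary_slice):
    """Measure each segmented particle in one thresholded slice of the stack."""
    rows = []
    for p in regionprops(label(binary_slice)):
        rows.append({
            "particle": p.label,             # unique stamp per valid particle
            "centroid": p.centroid,
            "area": p.area,
            "perimeter": p.perimeter,
            "ellipse_max_radius": p.major_axis_length / 2,
            "ellipse_min_radius": p.minor_axis_length / 2,
            "ellipse_angle": p.orientation,
        })
    return rows
```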

  18. State of the art in video system performance

    NASA Technical Reports Server (NTRS)

    Lewis, Michael J.

    1990-01-01

    The closed-circuit television (CCTV) system onboard the Space Shuttle comprises the following components: camera, video signal switching and routing unit (VSU), and Space Shuttle video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state of the art in video technology and data storage systems, a survey was conducted of High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of state-of-the-art solid state cameras and image sensors, video recording systems, data transmission devices, and data storage systems versus users' requirements is shown graphically.

  19. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    NASA Astrophysics Data System (ADS)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with a low frame rate, stand out because they are smaller than videos yet still maintain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes above the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes above ice suffers from diverse image quality and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, which led to new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, the first solution that achieves a satisfactory detection rate, using facial features and a skin colour model. To harness all the features in the scene, we further developed another system using multiple types of local descriptors along with a Bag-of-Visual-Words framework. In addition, an investigation of a new contour feature for detecting obscene content is presented.

  20. Quantifying cell mono-layer cultures by video imaging.

    PubMed

    Miller, K S; Hook, L A

    1996-04-01

    A method is described in which the relative number of adherent cells in multi-well tissue-culture plates is assayed by staining the cells with Giemsa and capturing the image of the stained cells with a video camera and charge-coupled device. The resultant image is quantified using the associated video imaging software. The method is shown to be sensitive and reproducible and should be useful for studies where quantifying relative cell numbers and/or proliferation in vitro is required.
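
    The measurement itself reduces to estimating the stained area per well. A minimal sketch, assuming a greyscale image of one well in which Giemsa-stained cells are darker than the background; Otsu thresholding is a stand-in for whatever rule the original software applied:

```python
from skimage.filters import threshold_otsu

def stained_fraction(gray_well):
    """Fraction of the imaged well covered by stained (dark) cells."""
    mask = gray_well < threshold_otsu(gray_well)
    return mask.mean()

# Relative cell numbers across wells are then compared as ratios of their
# stained fractions, assuming comparable staining and illumination.
```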

  1. Facial Attractiveness Ratings from Video-Clips and Static Images Tell the Same Story

    PubMed Central

    Rhodes, Gillian; Lie, Hanne C.; Thevaraja, Nishta; Taylor, Libby; Iredell, Natasha; Curran, Christine; Tan, Shi Qin Claire; Carnemolla, Pia; Simmons, Leigh W.

    2011-01-01

    Most of what we know about what makes a face attractive and why we have the preferences we do is based on attractiveness ratings of static images of faces, usually photographs. However, several reports that such ratings fail to correlate significantly with ratings made to dynamic video clips, which provide richer samples of appearance, challenge the validity of this literature. Here, we tested the validity of attractiveness ratings made to static images, using a substantial sample of male faces. We found that these ratings agreed very strongly with ratings made to videos of these men, despite the presence of much more information in the videos (multiple views, neutral and smiling expressions and speech-related movements). Not surprisingly, given this high agreement, the components of video-attractiveness were also very similar to those reported previously for static-attractiveness. Specifically, averageness, symmetry and masculinity were all significant components of attractiveness rated from videos. Finally, regression analyses yielded very similar effects of attractiveness on success in obtaining sexual partners, whether attractiveness was rated from videos or static images. These results validate the widespread use of attractiveness ratings made to static images in evolutionary and social psychological research. We speculate that this validity may stem from our tendency to make rapid and robust judgements of attractiveness. PMID:22096491

  2. Video and LAN solutions for a digital OR: the Varese experience

    NASA Astrophysics Data System (ADS)

    Nocco, Umberto; Cocozza, Eugenio; Sivo, Monica; Peta, Giancarlo

    2007-03-01

    Purpose: build 20 ORs equipped with independent video acquisition and broadcasting systems and powerful LAN connectivity. Methods: a digital PC-controlled video matrix has been installed in each OR. The LAN connectivity has been developed to guarantee data delivery into the OR and high-speed connectivity to a server and to broadcasting devices. Video signals are broadcast within the OR. Fixed inputs and five additional video inputs have been placed in the OR. Images can be stored locally on a high-capacity HDD and a DVD recorder. Images can also be stored in a central archive for future retrieval and reference. Ethernet plugs have been placed within the OR to acquire images and data from the Hospital LAN; the OR is connected to the server/archive using a dedicated optical fiber. Results: 20 independent digital ORs have been built. Each OR is "self contained" and images can be digitally managed and broadcast. Security issues concerning both image visualization and electrical safety have been addressed, and each OR is fully integrated in the Hospital LAN. Conclusions: the digital ORs were fully implemented; they fulfill surgeons' needs in terms of video acquisition and distribution and guarantee high-quality video for every kind of surgery in a major hospital.

  3. Real-Time Digital Bright Field Technology for Rapid Antibiotic Susceptibility Testing.

    PubMed

    Canali, Chiara; Spillum, Erik; Valvik, Martin; Agersnap, Niels; Olesen, Tom

    2018-01-01

    Optical scanning through bacterial samples and image-based analysis may provide a robust method for bacterial identification, fast estimation of growth rates and their modulation due to the presence of antimicrobial agents. Here, we describe an automated digital, time-lapse, bright field imaging system (oCelloScope, BioSense Solutions ApS, Farum, Denmark) for rapid and higher-throughput antibiotic susceptibility testing (AST) of up to 96 bacteria-antibiotic combinations at a time. The imaging system consists of a digital camera, an illumination unit and a lens whose optical axis is tilted 6.25° relative to the horizontal plane of the stage. Such tilting grants more freedom of operation at both high and low concentrations of microorganisms. When considering a bacterial suspension in a microwell, the oCelloScope acquires a sequence of 6.25°-tilted images to form an image Z-stack. The stack contains the best-focus image, as well as the adjacent out-of-focus images (which contain progressively more out-of-focus bacteria the further they lie from the best-focus position). The acquisition process is repeated over time, so that the time-lapse sequence of best-focus images is used to generate a video. The setting of the experiment, image analysis and generation of time-lapse videos can be performed through dedicated software (UniExplorer, BioSense Solutions ApS). The acquired images can be processed for online and offline quantification of several morphological parameters, microbial growth, and inhibition over time.
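
    Selecting the best-focus image from each tilted Z-stack is the core step of the acquisition loop. A hedged sketch using the variance-of-Laplacian sharpness score, a common stand-in for whatever focus metric the instrument software actually applies:

```python
import numpy as np
import cv2

def best_focus(zstack):
    """Return the sharpest image of a (Z, H, W) stack by maximising the
    variance of the Laplacian, a standard focus measure."""
    scores = [cv2.Laplacian(img, cv2.CV_64F).var() for img in zstack]
    return zstack[int(np.argmax(scores))]

# Repeating this per time point yields the sequence of best-focus frames
# from which the time-lapse AST video is assembled.
```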

  4. A simple method for panretinal imaging with the slit lamp.

    PubMed

    Gellrich, Marcus-Matthias

    2016-12-01

    Slit lamp biomicroscopy of the retina with a convex lens is a key procedure in clinical practice. The methods presented enable ophthalmologists to adequately image large and peripheral parts of the fundus using a video slit lamp and freely available stitching software. A routine examination of the fundus with a slit lamp and a +90 D lens is recorded as a video film. Later, sufficiently sharp still images are identified in the video sequence. These still images are imported into a freely available image-processing program (Hugin, for stitching mosaics together digitally) and corresponding points are marked on adjacent still images with some overlap. Using the digital stitching program Hugin, panoramic overviews of the retina can be built which can extend to the equator. This makes it possible to image diseases involving the whole retina or its periphery by performing a structured fundus examination with a video slit lamp. Similar images with a video slit lamp based on a fundus examination through a hand-held non-contact lens have not been demonstrated before. The methods presented enable ophthalmologists without high-end imaging equipment to monitor pathological fundus findings. The suggested procedure might even be interesting for retinological departments if peripheral findings are to be documented, which might be difficult with fundus cameras.
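
    Picking "sufficiently sharp" stills out of the examination video can be automated before the Hugin step. A minimal OpenCV sketch; the Tenengrad threshold is an assumption to be tuned per camera, not a value from the paper:

```python
import cv2

def sharp_frames(video_path, thresh=100.0):
    """Keep frames whose mean squared Sobel gradient (Tenengrad) is high."""
    cap = cv2.VideoCapture(video_path)
    keep = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
        if (gx ** 2 + gy ** 2).mean() > thresh:
            keep.append(frame)
    cap.release()
    return keep  # candidates for mosaic stitching in Hugin
```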

  5. Plant Chlorophyll Content Imager with Reference Detection Signals

    NASA Technical Reports Server (NTRS)

    Spiering, Bruce A. (Inventor); Carter, Gregory A. (Inventor)

    2000-01-01

    A portable plant chlorophyll imaging system is described which collects light reflected from a target plant and separates the collected light into two different wavelength bands. These wavelength bands, or channels, are described as having center wavelengths of 700 nm and 840 nm. The light collected in these two channels is processed using synchronized video cameras. A controller provided in the system compares the level of light of video images reflected from a target plant with a reference level of light from a source illuminating the plant. The percentages of reflection in the two separate wavelength bands from a target plant are compared to provide a ratio video image which indicates a relative level of plant chlorophyll content and physiological stress. Multiple display modes are described for viewing the video images.
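
    The ratio image at the heart of the instrument is a one-line pixel operation once both channels are reference-corrected. A minimal NumPy sketch; the interpretation in the comment reflects general red-edge behaviour and is an assumption, not the patent's calibrated values:

```python
import numpy as np

def chlorophyll_ratio(r700, r840, eps=1e-6):
    """Pixel-wise ratio of the 700 nm band to the 840 nm band, both given
    as reference-corrected reflectance images of the same scene."""
    return r700.astype(float) / (r840.astype(float) + eps)

# Reflectance near 700 nm tends to rise as chlorophyll declines, while the
# 840 nm band stays comparatively stable, so higher ratios generally flag
# lower chlorophyll content and greater physiological stress.
```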

  6. Scene Analysis: Non-Linear Spatial Filtering for Automatic Target Detection.

    DTIC Science & Technology

    1982-12-01

    In this thesis, a method for two-dimensional pattern recognition was developed and tested. The method included a global search scheme for candidate ... purpose was to develop a base of image processing software for the AFIT Digital Signal Processing Laboratory NOVA-ECLIPSE minicomputer system, for ...

  7. Integrating TV/digital data spectrograph system

    NASA Technical Reports Server (NTRS)

    Duncan, B. J.; Fay, T. D.; Miller, E. R.; Wamsteker, W.; Brown, R. M.; Neely, P. L.

    1975-01-01

    A 25-mm vidicon camera was previously modified to allow operation in an integration mode for low-light-level astronomical work. The camera was then mated to a low-dispersion spectrograph for obtaining spectral information in the 400 to 750 nm range. A high speed digital video image system was utilized to digitize the analog video signal, place the information directly into computer-type memory, and record data on digital magnetic tape for permanent storage and subsequent analysis.

  8. A framework for the recognition of high-level surgical tasks from video images for cataract surgeries

    PubMed Central

    Lalys, Florent; Riffaud, Laurent; Bouget, David; Jannin, Pierre

    2012-01-01

    The need for better integration of the new generation of Computer-Assisted-Surgical (CAS) systems has been recently emphasized. One necessity to achieve this objective is to retrieve data from the Operating Room (OR) with different sensors, then to derive models from these data. Recently, the use of videos from cameras in the OR has demonstrated its efficiency. In this paper, we propose a framework to assist in the development of systems for the automatic recognition of high-level surgical tasks using microscope video analysis. We validated its use on cataract procedures. The idea is to combine state-of-the-art computer vision techniques with time series analysis. The first step of the framework consisted in the definition of several visual cues for extracting semantic information, therefore characterizing each frame of the video. Five different image-based classifiers were therefore implemented. A step of pupil segmentation was also applied for dedicated visual cue detection. Time series classification algorithms were then applied to model time-varying data. Dynamic Time Warping (DTW) and Hidden Markov Models (HMM) were tested. This association combined the advantages of all methods for a better understanding of the problem. The framework was finally validated through various studies. Six binary visual cues were chosen along with 12 phases to detect, obtaining accuracies of 94%. PMID:22203700
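
    Of the two time-series classifiers tested, DTW is compact enough to sketch in full. A minimal dynamic-programming implementation for 1-D sequences; the framework's visual-cue vectors would use a vector distance instead of abs():

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping between two sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])      # local match cost
            D[i, j] = cost + min(D[i - 1, j],    # insertion
                                 D[i, j - 1],    # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```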

  9. Stereoscopic video analysis of Anopheles gambiae behavior in the field: challenges and opportunities

    USDA-ARS?s Scientific Manuscript database

    Advances in our ability to localize and track individual swarming mosquitoes in the field via stereoscopic image analysis have enabled us to test long standing ideas about individual male behavior and directly observe coupling. These studies further our fundamental understanding of the reproductive ...

  10. Applied learning-based color tone mapping for face recognition in video surveillance system

    NASA Astrophysics Data System (ADS)

    Yew, Chuu Tian; Suandi, Shahrel Azmin

    2012-04-01

    In this paper, we present an applied learning-based color tone mapping technique for video surveillance system. This technique can be applied onto both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appeared in the surveillance images, and remap the color or intensity of the input image so that the color or intensity statistics match those in the training dataset. It is well known that the difference in commercial surveillance cameras models, and signal processing chipsets used by different manufacturers will cause the color and intensity of the images to differ from one another, thus creating additional challenges for face recognition in video surveillance system. Using Multi-Class Support Vector Machines as the classifier on a publicly available video surveillance camera database, namely SCface database, this approach is validated and compared to the results of using holistic approach on grayscale images. The results show that this technique is suitable to improve the color or intensity quality of video surveillance system for face recognition.
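
    When the training statistics are represented by a single reference image, the remapping step can be approximated with plain histogram matching. A hedged stand-in for the learning-based mapping the paper describes, using scikit-image:

```python
from skimage.exposure import match_histograms

def tone_map(surveillance_img, reference_img):
    """Remap a surveillance frame's per-channel colour statistics towards
    those of a photorealistic reference image of the candidate."""
    return match_histograms(surveillance_img, reference_img, channel_axis=-1)
```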

  11. Research of real-time video processing system based on 6678 multi-core DSP

    NASA Astrophysics Data System (ADS)

    Li, Xiangzhen; Xie, Xiaodan; Yin, Xiaoqiang

    2017-10-01

    In the information age, video processing is developing rapidly in the direction of intelligence, and complex algorithms pose a serious challenge to processor performance. In this article, an FPGA + TMS320C6678 architecture integrates image defogging, image fusion, image stabilization and enhancement into one system with good real-time behaviour and superior performance. It overcomes the defects of traditional video processing systems, such as simple functionality and single-purpose products, and solves problems in video applications such as security monitoring and video surveillance, giving full play to the effectiveness of video monitoring and improving economic benefits for enterprises.

  12. Video conference quality assessment based on cooperative sensing of video and audio

    NASA Astrophysics Data System (ADS)

    Wang, Junxi; Chen, Jialin; Tian, Xin; Zhou, Cheng; Zhou, Zheng; Ye, Lu

    2015-12-01

    This paper presents a method for video conference quality assessment based on cooperative sensing of video and audio. In this method, a proposed video quality evaluation method is used to assess video frame quality. Each video frame is divided into a noise image and a filtered image by bilateral filtering. This is similar to a characteristic of human vision, which can also be seen as low-pass filtering. The audio frames are evaluated by the PEAQ algorithm. The two results are integrated to evaluate the overall video conference quality. A video conference database was built to test the performance of the proposed method. The objective results correlate well with the mean opinion score (MOS), so we conclude that the proposed method is effective in assessing video conference quality.
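
    The frame decomposition described above is easy to sketch with OpenCV; the bilateral filter parameters (9, 75, 75) are illustrative assumptions, not the paper's settings:

```python
import numpy as np
import cv2

def split_frame(frame):
    """Split a frame into a bilateral-filtered base image and a signed
    residual 'noise' image."""
    base = cv2.bilateralFilter(frame, 9, 75, 75)
    noise = frame.astype(np.int16) - base.astype(np.int16)
    return base, noise

# Quality scores computed separately on 'base' and 'noise' would then be
# fused with the PEAQ audio score to rate the whole conference session.
```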

  13. Reliability of ultrasound grading traditional score and new global OMERACT-EULAR score system (GLOESS): results from an inter- and intra-reading exercise by rheumatologists.

    PubMed

    Ventura-Ríos, Lucio; Hernández-Díaz, Cristina; Ferrusquia-Toríz, Diana; Cruz-Arenas, Esteban; Rodríguez-Henríquez, Pedro; Alvarez Del Castillo, Ana Laura; Campaña-Parra, Alfredo; Canul, Efrén; Guerrero Yeo, Gerardo; Mendoza-Ruiz, Juan Jorge; Pérez Cristóbal, Mario; Sicsik, Sandra; Silva Luna, Karina

    2017-12-01

    This study aims to test the reliability of ultrasound for grading synovitis in static and video images, evaluating grayscale and power Doppler (PD) separately and combined. Thirteen trained rheumatologist ultrasonographers participated in two separate rounds, reading 42 images, 15 static and 27 videos, of the 7-joint count [wrist, 2nd and 3rd metacarpophalangeal (MCP), 2nd and 3rd interphalangeal (IPP), 2nd and 5th metatarsophalangeal (MTP) joints]. The images were from six patients with rheumatoid arthritis and were acquired by one ultrasonographer. Synovitis was defined according to OMERACT. The scoring systems in grayscale and PD separately, and combined (GLOESS-Global OMERACT-EULAR Score System), were reviewed before the exercise. Intra- and inter-reading reliability was calculated with weighted Cohen's kappa, interpreted according to Landis and Koch. Kappa values for inter-reading were good to excellent. The lowest kappa was for GLOESS in static images, and the highest was for the same scoring in videos (k 0.59 and 0.85, respectively). Excellent values were obtained for static PD in the 5th MTP joint and for PD video in the 2nd MTP joint. Results for GLOESS in general were good to moderate. Poor agreement was observed in the 3rd MCP and 3rd IPP in all kinds of images. Intra-reading agreement was greater in grayscale and GLOESS for static images than for videos (k 0.86 vs. 0.77 and k 0.86 vs. 0.71, respectively), but for PD it was greater in videos than in static images (k 1.0 vs. 0.79). The reliability of synovitis scoring through static images and videos is in general good to moderate when using grayscale and PD separately or combined.
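
    The agreement statistic used throughout is the weighted Cohen's kappa. A minimal sketch with scikit-learn; the grade vectors below are mock data, not the study's readings:

```python
from sklearn.metrics import cohen_kappa_score

# Synovitis grades (0-3) assigned by two readers to the same images (mock).
reader_1 = [0, 1, 2, 3, 1, 0, 2, 2, 1, 3]
reader_2 = [0, 1, 2, 2, 1, 0, 3, 2, 1, 3]

kappa = cohen_kappa_score(reader_1, reader_2, weights="linear")
print(f"weighted kappa = {kappa:.2f}")  # interpreted on the Landis-Koch scale
```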

  14. Evaluation of experimental UAV video change detection

    NASA Astrophysics Data System (ADS)

    Bartelsen, J.; Saur, G.; Teutsch, C.

    2016-10-01

    During the last ten years, the availability of images acquired from unmanned aerial vehicles (UAVs) has been continuously increasing due to the improvements and economic success of flight and sensor systems. From our point of view, reliable and automatic image-based change detection may contribute to overcoming several challenging problems in military reconnaissance, civil security, and disaster management. Changes within a scene can be caused by functional activities, e.g., footprints or skid marks, excavations, or humidity penetration; these might be recognizable in aerial images, but are almost always overlooked when change detection is executed manually. Depending on the circumstances, these kinds of changes may be an indication of sabotage, terroristic activity, or threatening natural disasters. Although image-based change detection is possible from both ground and aerial perspectives, in this paper we primarily address the latter. We have applied an extended approach to change detection as described by Saur and Kruger [1] and Saur et al. [2], and have built upon the ideas of Saur and Bartelsen [3]. The commercial simulation environment Virtual Battle Space 3 (VBS3) is used to simulate aerial "before" and "after" image acquisition concerning flight path, weather conditions and objects within the scene, and to obtain synthetic videos. Video frames which depict the same part of the scene, including "before" and "after" changes and not necessarily taken from the same perspective, are registered pixel-wise against each other by a photogrammetric concept based on a homography. The pixel-wise registration is used to apply an automatic difference analysis, which, to a limited extent, is able to suppress typical errors caused by imprecise frame registration, sensor noise, vegetation and especially parallax effects. The primary concern of this paper is to rigorously evaluate the possibilities and limitations of our current approach for image-based change detection with respect to the flight path, viewpoint change and parametrization. Hence, based on synthetic "before" and "after" videos of a simulated scene, we estimated the precision and recall of automatically detected changes. In addition, based on our approach, we illustrate the results of change detection in short but real video sequences. Future work will improve the photogrammetric approach for frame registration, and extensive real video material suitable for change detection will be acquired.
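
    The homography-based registration underlying the difference analysis can be sketched with standard OpenCV building blocks; this ORB/RANSAC pipeline is a generic stand-in for the photogrammetric concept of the cited approach, not the authors' implementation:

```python
import numpy as np
import cv2

def register_and_diff(before, after):
    """Warp 'after' onto 'before' via an ORB-matched homography, then
    return the pixel-wise absolute difference image."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(before, None)
    k2, d2 = orb.detectAndCompute(after, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    warped = cv2.warpPerspective(after, H, before.shape[1::-1])
    return cv2.absdiff(before, warped)  # input to the change-detection step
```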

  15. From image captioning to video summary using deep recurrent networks and unsupervised segmentation

    NASA Astrophysics Data System (ADS)

    Morosanu, Bogdan-Andrei; Lemnaru, Camelia

    2018-04-01

    Automatic captioning systems based on recurrent neural networks have been tremendously successful at providing realistic natural language captions for complex and varied image data. We explore methods for adapting existing models trained on large image caption data sets to a similar problem, that of summarising videos using natural language descriptions and frame selection. These architectures create internal high-level representations of the input image that can be used to define probability distributions and distance metrics on these distributions. Specifically, we interpret each hidden unit inside a layer of the caption model as representing the un-normalised log probability of some unknown image feature of interest for the caption generation process. We can then apply well-understood statistical divergence measures to express the difference between images and create an unsupervised segmentation of video frames, classifying consecutive images of low divergence as belonging to the same context, and those of high divergence as belonging to different contexts. To provide a final summary of the video, we output a group of selected frames and a text description accompanying them, allowing a user to perform a quick exploration of large unlabeled video databases.
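
    The divergence-based segmentation can be made concrete in a few lines. A minimal sketch, assuming per-frame hidden-unit vectors have already been extracted from the caption model; the symmetric KL form and the threshold are illustrative choices:

```python
import numpy as np

def softmax(h):
    e = np.exp(h - h.max())
    return e / e.sum()

def context_boundaries(hidden_states, thresh=0.5):
    """Cut the video where the symmetric KL divergence between consecutive
    frames' implied feature distributions is high."""
    cuts = []
    for t in range(1, len(hidden_states)):
        p, q = softmax(hidden_states[t - 1]), softmax(hidden_states[t])
        if np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)) > thresh:
            cuts.append(t)
    return cuts  # frames between cuts share one context/caption
```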

  16. Benefit from NASA

    NASA Image and Video Library

    1999-06-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects, producing clearer images of moving objects; it also smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can also be used in defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  17. Remote driving with reduced bandwidth communication

    NASA Technical Reports Server (NTRS)

    Depiero, Frederick W.; Noell, Timothy E.; Gee, Timothy F.

    1993-01-01

    Oak Ridge National Laboratory has developed a real-time video transmission system for low-bandwidth remote operations. The system supports both continuous transmission of video for remote driving and progressive transmission of still images. Inherent in the system design is a spatiotemporal limitation of the effects of channel errors. The average data rate of the system is 64,000 bits/s, a compression of approximately 1000:1 for the black-and-white National Television System Committee (NTSC) video. The image quality of the transmissions is maintained at a level that supports teleoperation of a high mobility multipurpose wheeled vehicle at speeds up to 15 mph on a moguled dirt track. Video compression is achieved by using Laplacian image pyramids and a combination of classical techniques. Certain subbands of the image pyramid are transmitted by using interframe differencing with a periodic refresh to aid in bandwidth reduction. Images are also foveated to concentrate image detail in a steerable region. The system supports dynamic video quality adjustments between frame rate, image detail, and foveation rate. A typical configuration used during driving has a frame rate of 4 Hz, a compression per frame of 125:1, and a resulting latency of less than 1 s.
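
    The subband structure that makes selective transmission possible comes from a Laplacian pyramid. A minimal OpenCV sketch of its construction; the interframe differencing, refresh scheduling and foveation are separate steps not shown here:

```python
import numpy as np
import cv2

def laplacian_pyramid(img, levels=4):
    """Each level holds the detail lost by one pyrDown/pyrUp round trip;
    the final entry is the coarse residual."""
    current = img.astype(np.float32)
    pyramid = []
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=current.shape[1::-1])
        pyramid.append(current - up)   # signed detail subband
        current = down
    pyramid.append(current)            # coarse residual
    return pyramid
```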

  18. Positive Association of Video Game Playing with Left Frontal Cortical Thickness in Adolescents

    PubMed Central

    Kühn, Simone; Lorenz, Robert; Banaschewski, Tobias; Barker, Gareth J.; Büchel, Christian; Conrod, Patricia J.; Flor, Herta; Garavan, Hugh; Ittermann, Bernd; Loth, Eva; Mann, Karl; Nees, Frauke; Artiges, Eric; Paus, Tomas; Rietschel, Marcella; Smolka, Michael N.; Ströhle, Andreas; Walaszek, Bernadetta; Schumann, Gunter; Heinz, Andreas; Gallinat, Jürgen

    2014-01-01

    Playing video games is a common recreational activity of adolescents. Recent research has associated frequent video game playing with improvements in cognitive functions. Improvements in cognition have been related to grey matter changes in the prefrontal cortex. However, a fine-grained analysis of human brain structure in relation to video gaming has been lacking. In magnetic resonance imaging scans of 152 14-year-old adolescents, FreeSurfer was used to estimate cortical thickness. Cortical thickness across the whole cortical surface was correlated with self-reported duration of video gaming (hours per week). A robust positive association between cortical thickness and video gaming duration was observed in the left dorsolateral prefrontal cortex (DLPFC) and left frontal eye fields (FEFs). No regions showed cortical thinning in association with video gaming frequency. The DLPFC is the core correlate of executive control and strategic planning, which in turn are essential cognitive domains for successful video gaming. The FEFs are a key region involved in visuo-motor integration important for programming and execution of eye movements and allocation of visuo-spatial attention, processes engaged extensively in video games. The results may represent the biological basis of previously reported cognitive improvements due to video game play. Whether these results represent a priori characteristics or consequences of video gaming should be studied in future longitudinal investigations. PMID:24633348

  19. Imaging fall Chinook salmon redds in the Columbia River with a dual-frequency identification sonar

    USGS Publications Warehouse

    Tiffan, K.F.; Rondorf, D.W.; Skalicky, J.J.

    2004-01-01

    We tested the efficacy of a dual-frequency identification sonar (DIDSON) for imaging and enumeration of fall Chinook salmon Oncorhynchus tshawytscha redds in a spawning area below Bonneville Dam on the Columbia River. The DIDSON uses sound to form near-video-quality images and has the advantages of imaging in zero-visibility water and possessing a greater detection range and field of view than underwater video cameras. We suspected that the large size and distinct morphology of a fall Chinook salmon redd would facilitate acoustic imaging if the DIDSON was towed near the river bottom so as to cast an acoustic shadow from the tailspill over the redd pocket. We tested this idea by observing 22 different redds with an underwater video camera, spatially referencing their locations, and then navigating to them while imaging them with the DIDSON. All 22 redds were successfully imaged with the DIDSON. We subsequently conducted redd searches along transects to compare the number of redds imaged by the DIDSON with the number observed using an underwater video camera. We counted 117 redds with the DIDSON and 81 redds with the underwater video camera. Only one of the redds observed with the underwater video camera was not also documented by the DIDSON. In spite of the DIDSON's high cost, it may serve as a useful tool for enumerating fall Chinook salmon redds in conditions that are not conducive to underwater videography.

  20. Reconstructing Interlaced High-Dynamic-Range Video Using Joint Learning.

    PubMed

    Inchang Choi; Seung-Hwan Baek; Kim, Min H

    2017-11-01

    For extending the dynamic range of video, it is a common practice to capture multiple frames sequentially with different exposures and combine them to extend the dynamic range of each video frame. However, this approach results in typical ghosting artifacts due to fast and complex motion in nature. As an alternative, video imaging with interlaced exposures has been introduced to extend the dynamic range. However, the interlaced approach has been hindered by jaggy artifacts and sensor noise, leading to concerns over image quality. In this paper, we propose a data-driven approach for jointly solving two specific problems of deinterlacing and denoising that arise in interlaced video imaging with different exposures. First, we solve the deinterlacing problem using joint dictionary learning via sparse coding. Since partial information of detail in differently exposed rows is often available via interlacing, we make use of the information to reconstruct details of the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low-/high-exposure rows, and also adopt multiscale homography flow to temporal sequences for denoising. We anticipate that the proposed method will allow for concurrent capture of higher dynamic range video frames without suffering from ghosting artifacts. We demonstrate the advantages of our interlaced video imaging compared with the state-of-the-art high-dynamic-range video methods.

  1. Positive effect on patient experience of video information given prior to cardiovascular magnetic resonance imaging: A clinical trial.

    PubMed

    Ahlander, Britt-Marie; Engvall, Jan; Maret, Eva; Ericsson, Elisabeth

    2018-03-01

    To evaluate the effect of video information given before cardiovascular magnetic resonance imaging on patient anxiety and to compare patient experiences of cardiovascular magnetic resonance imaging versus myocardial perfusion scintigraphy. To evaluate whether additional information has an impact on motion artefacts. Cardiovascular magnetic resonance imaging and myocardial perfusion scintigraphy are technically advanced methods for the evaluation of heart diseases. Although cardiovascular magnetic resonance imaging is considered to be painless, patients may experience anxiety due to the closed environment. A prospective randomised intervention study, not registered. The sample (n = 148) consisted of 97 patients referred for cardiovascular magnetic resonance imaging, randomised to receive either video information in addition to standard text information (CMR-video/n = 49) or standard text information alone (CMR-standard/n = 48). A third group undergoing myocardial perfusion scintigraphy (n = 51) was compared with the cardiovascular magnetic resonance imaging-standard group. Anxiety was evaluated before, immediately after the procedure and 1 week later. Five questionnaires were used: the Cardiac Anxiety Questionnaire, State-Trait Anxiety Inventory, Hospital Anxiety and Depression scale, MRI Fear Survey Schedule and the MRI-Anxiety Questionnaire. Motion artefacts were evaluated by three observers, blinded to the information given. Data were collected between April 2015 and April 2016. The study followed the CONSORT guidelines. The CMR-video group scored lower (better) than the cardiovascular magnetic resonance imaging-standard group in the factor Relaxation (p = .039) but not in the factor Anxiety. Anxiety levels were lower during scintigraphic examinations compared to the CMR-standard group (p < .001). No difference was found regarding motion artefacts between CMR-video and CMR-standard. Patients' ability to relax during cardiovascular magnetic resonance imaging increased by adding video information prior to the exam, which is important in relation to perceived quality in nursing. No effect was seen on motion artefacts. Video information prior to examinations can be an easy and time-effective method to help patients cooperate in imaging procedures. © 2017 John Wiley & Sons Ltd.

  2. Video System Highlights Hydrogen Fires

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Gleman, Stuart M.; Moerk, John S.

    1992-01-01

    Video system combines images from visible spectrum and from three bands in infrared spectrum to produce color-coded display in which hydrogen fires distinguished from other sources of heat. Includes linear array of 64 discrete lead selenide mid-infrared detectors operating at room temperature. Images overlaid on black and white image of same scene from standard commercial video camera. In final image, hydrogen fires appear red; carbon-based fires, blue; and other hot objects, mainly green and combinations of green and red. Where no thermal source present, image remains in black and white. System enables high degree of discrimination between hydrogen flames and other thermal emitters.
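
    The colour coding lends itself to a simple per-pixel composite. A hedged NumPy sketch, assuming the three IR source maps have already been detected, registered to the visible frame, and normalised to 0-1; the max-overlay rule is an illustrative choice:

```python
import numpy as np

def fire_display(visible, h2_band, carbon_band, other_band):
    """Overlay colour-coded thermal sources on a grayscale visible image:
    hydrogen -> red, carbon-based fires -> blue, other hot objects -> green."""
    rgb = np.repeat(visible[..., None].astype(float), 3, axis=-1)
    rgb[..., 0] = np.maximum(rgb[..., 0], 255.0 * h2_band)      # red
    rgb[..., 1] = np.maximum(rgb[..., 1], 255.0 * other_band)   # green
    rgb[..., 2] = np.maximum(rgb[..., 2], 255.0 * carbon_band)  # blue
    return rgb.astype(np.uint8)  # stays black-and-white where nothing is hot
```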

  3. Development of laryngeal video stroboscope with laser marking module for dynamic glottis measurement.

    PubMed

    Kuo, Chung-Feng Jeffrey; Wang, Hsing-Won; Hsiao, Shang-Wun; Peng, Kai-Ching; Chou, Ying-Liang; Lai, Chun-Yu; Hsu, Chien-Tung Max

    2014-01-01

    Physicians clinically use the laryngeal video stroboscope as an auxiliary instrument to examine glottal diseases, reading vocal fold images and voice quality for diagnosis. As the position of the vocal folds varies from person to person, the proportion of the vocal fold size presented in the vocal fold image differs, making it impossible to directly estimate relevant glottal physiological parameters, such as the length, area, perimeter, and opening angle of the glottis. Hence, this study designs an innovative laser projection marking module for the laryngeal video stroboscope, which projects laser beams onto the glottal plane in order to provide reference parameters for image scaling conversion. Copyright © 2013 Elsevier Ltd. All rights reserved.
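
    Once the laser marks are visible in the image, the scaling conversion is a single ratio. A minimal sketch, where spacing_mm (the known beam separation on the glottal plane) is an assumed calibration constant:

```python
import math

def mm_per_pixel(spot_a, spot_b, spacing_mm):
    """Image scale from two laser marks of known physical separation."""
    return spacing_mm / math.dist(spot_a, spot_b)

# Any pixel measurement on the same plane (glottal length, perimeter, ...)
# converts to millimetres by multiplying with this factor.
```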

  4. Analysis of video-recorded images to determine linear and angular dimensions in the growing horse.

    PubMed

    Hunt, W F; Thomas, V G; Stiefel, W

    1999-09-01

    Studies of growth and conformation require statistical methods that are not applicable to subjective conformation standards used by breeders and trainers. A new system was developed to provide an objective approach for both science and industry, based on analysis of video images to measure aspects of conformation that were represented by angles or lengths. A studio crush was developed in which video images of horses of different sizes were taken after bone protuberances, located by palpation, were marked with white paper stickers. Screen pixel coordinates of calibration marks, bone markers and points on horse outlines were digitised from captured images and corrected for aspect ratio and 'fish-eye' lens effects. Calculations from the corrected coordinates produced linear dimensions and angular dimensions useful for comparison of horses for conformation and experimental purposes. The precision achieved by the method in determining linear and angular dimensions was examined through systematically determining variance for isolated steps of the procedure. Angles of the front limbs viewed from in front were determined with a standard deviation of 2-5 degrees and effects of viewing angle were detectable statistically. The height of the rump and wither were determined with precision closely related to the limitations encountered in locating a point on a screen, which was greater for markers applied to the skin than for points at the edge of the image. Parameters determined from markers applied to the skin were, however, more variable (because their relation to bone position was affected by movement), but still provided a means by which a number of aspects of size and conformation can be determined objectively for many horses during growth. Sufficient precision was achieved to detect statistically relatively small effects on calculated parameters of camera height position.
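
    From the corrected coordinates, each angular dimension is the angle subtended at a middle marker. A minimal NumPy sketch; the example marker coordinates are hypothetical:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at marker b (degrees) formed by markers a and c, using
    aspect-ratio- and lens-corrected pixel coordinates."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cos_t = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# e.g. a fetlock angle from hoof, fetlock and knee markers (mock points):
# joint_angle((102, 480), (110, 430), (108, 350))
```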

  5. Exemplar-Based Image and Video Stylization Using Fully Convolutional Semantic Features.

    PubMed

    Zhu, Feida; Yan, Zhicheng; Bu, Jiajun; Yu, Yizhou

    2017-05-10

    Color and tone stylization in images and videos strives to enhance unique themes with artistic color and tone adjustments. It has a broad range of applications, from professional image postprocessing to photo sharing over social networks. Mainstream photo enhancement software, such as Adobe Lightroom and Instagram, provides users with predefined styles, which are often hand-crafted through a trial-and-error process. Such photo adjustment tools lack a semantic understanding of image contents, and the resulting global color transform limits the range of artistic styles they can represent. On the other hand, stylistic enhancement needs to apply distinct adjustments to various semantic regions. Such an ability enables a broader range of visual styles. In this paper, we first propose a novel deep learning architecture for exemplar-based image stylization, which learns local enhancement styles from image pairs. Our deep learning architecture consists of fully convolutional networks (FCNs) for automatic semantics-aware feature extraction and fully connected neural layers for adjustment prediction. Image stylization can be efficiently accomplished with a single forward pass through our deep network. To extend our deep network from image stylization to video stylization, we exploit temporal superpixels (TSPs) to facilitate the transfer of artistic styles from image exemplars to videos. Experiments on a number of datasets for image stylization as well as a diverse set of video clips demonstrate the effectiveness of our deep learning architecture.

  6. PAVECHECK : integrating deflection and GPR for network condition surveys.

    DOT National Transportation Integrated Search

    2009-01-01

    The PAVECHECK data integration and analysis system was developed to merge Falling Weight : Deflectometer (FWD) and Ground Penetrating Radar (GPR) data together with digital video images of : surface conditions. In this study Global Positioning System...

  7. A cross-species socio-emotional behaviour development revealed by a multivariate analysis.

    PubMed

    Koshiba, Mamiko; Senoo, Aya; Mimura, Koki; Shirakawa, Yuka; Karino, Genta; Obara, Saya; Ozawa, Shinpei; Sekihara, Hitomi; Fukushima, Yuta; Ueda, Toyotoshi; Kishino, Hirohisa; Tanaka, Toshihisa; Ishibashi, Hidetoshi; Yamanouchi, Hideo; Yui, Kunio; Nakamura, Shun

    2013-01-01

    Recent progress in affective neuroscience and social neurobiology has been propelled by neuro-imaging technology and epigenetic approaches in the neurobiology of animal behaviour. However, quantitative measurement of socio-emotional development remains lacking, though sensory-motor development has been extensively studied in terms of digitised imaging analysis. Here, we developed a method for socio-emotional behaviour measurement that is based on video recordings under well-defined social contexts, using animal models with varied social sensory interaction during development. The behaviour features digitised from the video recordings were visualised in a multivariate statistical space using principal component analysis. The clustering of the behaviour parameters suggested the existence of species- and stage-specific as well as cross-species behaviour modules. These modules were used to characterise the behaviour of children with or without autism spectrum disorders (ASDs). We found that socio-emotional behaviour is highly dependent on social context and that the cross-species behaviour modules may predict the neurobiological basis of ASDs.
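
    The multivariate visualisation step is standard PCA over the digitised behaviour features. A hedged sketch with scikit-learn, on mock data standing in for the recorded feature matrix:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows = subject-session observations, columns = digitised behaviour
# features (locomotion, posture, vocalisation counts, ...); values are mock.
features = np.random.rand(60, 12)

scores = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(features))
# Each observation now sits in a low-dimensional behaviour space in which
# clusters suggest behaviour modules shared across species or stages.
```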

  8. Video Analytics for Indexing, Summarization and Searching of Video Archives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trease, Harold E.; Trease, Lynn L.

    This paper will be submitted to the proceedings of the Eleventh IASTED International Conference on Signal and Image Processing. Given a video or video archive, how does one effectively and quickly summarize, classify, and search the information contained within the data? This paper addresses these issues by describing a process for the automated generation of a table-of-contents and keyword, topic-based index tables that can be used to catalogue, summarize, and search large amounts of video data. Having the ability to index and search the information contained within the videos, beyond just metadata tags, provides a mechanism to extract and identify "useful" content from image and video data.

  9. A quality assessment of 3D video analysis for full scale rockfall experiments

    NASA Astrophysics Data System (ADS)

    Volkwein, A.; Glover, J.; Bourrier, F.; Gerber, W.

    2012-04-01

    The main goal of full-scale rockfall experiments is to retrieve the 3D trajectory of a boulder along the slope. Such trajectories can then be used to calibrate rockfall simulation models. This contribution presents the application of video analysis techniques to capture boulder velocity in free-fall full-scale rockfall experiments along a rock face with an inclination of about 50 degrees. Different scaling methodologies have been evaluated. They differ mainly in how the scaling factors between the movie frames and reality are determined. For this purpose, scale bars and targets with known dimensions were distributed in advance along the slope. The scaling approaches are briefly described as follows: (i) The image raster is scaled to the distant fixed scale bar, then recalibrated to the plane of the passing boulder by taking the measured position of the nearest impact as the distance to the camera; the distances between the camera, scale bar, and passing boulder are surveyed. (ii) The image raster is scaled using the four targets (identified using the frontal video) nearest to the trajectory to be analyzed, taking the average of their scaling factors as the final scaling factor. (iii) The image raster is scaled using the four nearest targets, with the scaling factor for one trajectory calculated by balancing the mean scaling factors of the two nearest and the two farthest targets in relation to their mean distance to the analyzed trajectory. (iv) The same as the previous method, but with scaling factors varying along the trajectory. It was shown that a direct measurement of the scaling target and the nearest impact zone is the most accurate. If a constant plane is assumed, lateral deviations of the boulder from the fall line are not accounted for, consequently adding error to the analysis. Thus a combination of scaling methods (i) and (iv) is considered to give the best results. For best results regarding the rough lateral positioning along the slope, the frontal video must also be scaled. The error in scaling the video images can be evaluated by additionally comparing the vertical component of the trajectory over time with the theoretical polynomial trend expected under gravity. The different tracking techniques used to plot the position of the boulder's center of gravity all generated positional data with minimal error, acceptable for trajectory analysis. However, when calculating instantaneous velocities, this error is amplified to an unacceptable level; a regression analysis of the data is helpful to optimise the trajectory and velocity estimates.
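
    The gravity check described above amounts to fitting a parabola to the scaled vertical positions during free fall. A minimal NumPy sketch:

```python
import numpy as np

def estimated_g(t, z):
    """Fit z(t) = a*t^2 + b*t + c to scaled vertical positions (metres) of a
    free-falling boulder; |2a| should approach 9.81 m/s^2 if the
    image-to-metre scaling is unbiased."""
    a, _, _ = np.polyfit(t, z, 2)
    return 2.0 * abs(a)

# A systematic deviation from 9.81 m/s^2 points to a biased scaling factor.
```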

  10. MPEG-4 ASP SoC receiver with novel image enhancement techniques for DAB networks

    NASA Astrophysics Data System (ADS)

    Barreto, D.; Quintana, A.; García, L.; Callicó, G. M.; Núñez, A.

    2007-05-01

    This paper presents a system for real-time video reception in low-power mobile devices using Digital Audio Broadcast (DAB) technology for transmission. A demo receiver terminal is designed on an FPGA platform using the Advanced Simple Profile (ASP) MPEG-4 standard for video decoding. In order to meet the demanding DAB requirements, the bandwidth of the encoded sequence must be drastically reduced. To this end, prior to the MPEG-4 coding stage, a pre-processing stage is performed. It consists, first, of a segmentation phase according to motion and texture, based on Principal Component Analysis (PCA) of the input video sequence, and second, of a down-sampling phase which depends on the segmentation results. As a result of the segmentation task, a set of texture and motion maps is obtained. These motion and texture maps are also included in the bit-stream as user-data side information and are therefore known to the receiver. For all bit-rates, the whole encoder/decoder system proposed in this paper exhibits higher image visual quality than the alternative encoding/decoding method, assuming equal image sizes. A complete analysis of both techniques has also been performed to provide the optimum motion and texture maps for the global system, which has been finally validated on a variety of video sequences. Additionally, an optimal HW/SW partition for the MPEG-4 decoder has been studied and implemented on a Programmable Logic Device with an embedded ARM9 processor. Simulation results show that a throughput of 15 QCIF frames per second can be achieved with a low-area and low-power implementation.

  11. Artificial Intelligence Techniques for Automatic Screening of Amblyogenic Factors

    PubMed Central

    Van Eenwyk, Jonathan; Agah, Arvin; Giangiacomo, Joseph; Cibis, Gerhard

    2008-01-01

    Purpose To develop a low-cost automated video system to effectively screen children aged 6 months to 6 years for amblyogenic factors. Methods In 1994 one of the authors (G.C.) described video vision development assessment, a digitizable analog video-based system combining Brückner pupil red reflex imaging and eccentric photorefraction to screen young children for amblyogenic factors. The images were analyzed manually with this system. We automated the capture of digital video frames and pupil images and applied computer vision and artificial intelligence to analyze and interpret results. The artificial intelligence systems were evaluated by a tenfold testing method. Results The best system was the decision tree learning approach, which had an accuracy of 77%, compared to the “gold standard” specialist examination with a “refer/do not refer” decision. Criteria for referral were strabismus, including microtropia, and refractive errors and anisometropia considered to be amblyogenic. Eighty-two percent of strabismic individuals were correctly identified. High refractive errors were also correctly identified and referred 90% of the time, as well as significant anisometropia. The program was less correct in identifying more moderate refractive errors, below +5 and less than −7. Conclusions Although we are pursuing a variety of avenues to improve the accuracy of the automated analysis, the program in its present form provides acceptable cost benefits for detecting ambylogenic factors in children aged 6 months to 6 years. PMID:19277222

  12. Artificial intelligence techniques for automatic screening of amblyogenic factors.

    PubMed

    Van Eenwyk, Jonathan; Agah, Arvin; Giangiacomo, Joseph; Cibis, Gerhard

    2008-01-01

    To develop a low-cost automated video system to effectively screen children aged 6 months to 6 years for amblyogenic factors. In 1994 one of the authors (G.C.) described video vision development assessment, a digitizable analog video-based system combining Brückner pupil red reflex imaging and eccentric photorefraction to screen young children for amblyogenic factors. The images were analyzed manually with this system. We automated the capture of digital video frames and pupil images and applied computer vision and artificial intelligence to analyze and interpret results. The artificial intelligence systems were evaluated by a tenfold testing method. The best system was the decision tree learning approach, which had an accuracy of 77%, compared to the "gold standard" specialist examination with a "refer/do not refer" decision. Criteria for referral were strabismus, including microtropia, and refractive errors and anisometropia considered to be amblyogenic. Eighty-two percent of strabismic individuals were correctly identified. High refractive errors were also correctly identified and referred 90% of the time, as was significant anisometropia. The program was less correct in identifying more moderate refractive errors, below +5 and less than -7. Although we are pursuing a variety of avenues to improve the accuracy of the automated analysis, the program in its present form provides acceptable cost benefits for detecting amblyogenic factors in children aged 6 months to 6 years.

  13. Optimizing a neural network for detection of moving vehicles in video

    NASA Astrophysics Data System (ADS)

    Fischer, Noëlle M.; Kruithof, Maarten C.; Bouma, Henri

    2017-10-01

    In the field of security and defense, it is extremely important to reliably detect moving objects, such as cars, ships, drones and missiles. Detection and analysis of moving objects by cameras near borders could help reduce illicit trading, drug trafficking, irregular border crossing, trafficking in human beings and smuggling. Many recent benchmarks have shown that convolutional neural networks perform well in the detection of objects in images. Most deep-learning research effort focuses on classification or detection in single images. However, the detection of dynamic changes (e.g., moving objects, actions and events) in streaming video is extremely relevant for surveillance and forensic applications. In this paper, we combine an end-to-end feedforward neural network for static detection with a recurrent Long Short-Term Memory (LSTM) network for multi-frame analysis. We present a practical guide with special attention to the selection of the optimizer and batch size. The end-to-end network is able to localize and recognize the vehicles in video from traffic cameras. We show an efficient way to collect relevant in-domain data for training with minimal manual labor. Our results show that the combination with LSTM improves performance for the detection of moving vehicles.
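
    The combination can be pictured as per-frame CNN features feeding an LSTM whose final state is classified. The PyTorch sketch below shows the wiring only; every layer size is an assumption, not a value from the paper.

      import torch
      import torch.nn as nn

      class FrameCNN(nn.Module):
          def __init__(self, feat_dim=64):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                  nn.Linear(32, feat_dim))

          def forward(self, x):
              return self.net(x)

      class CNNLSTMDetector(nn.Module):
          def __init__(self, feat_dim=64, hidden=128, n_classes=2):
              super().__init__()
              self.cnn = FrameCNN(feat_dim)
              self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
              self.head = nn.Linear(hidden, n_classes)   # vehicle vs. background

          def forward(self, clips):                      # clips: (B, T, 3, H, W)
              B, T = clips.shape[:2]
              feats = self.cnn(clips.flatten(0, 1)).view(B, T, -1)
              out, _ = self.lstm(feats)                  # per-frame hidden states
              return self.head(out[:, -1])               # classify the last state

      logits = CNNLSTMDetector()(torch.randn(2, 8, 3, 64, 64))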

  14. Extracting 3d Semantic Information from Video Surveillance System Using Deep Learning

    NASA Astrophysics Data System (ADS)

    Zhang, J. S.; Cao, J.; Mao, B.; Shen, D. Q.

    2018-04-01

    At present, intelligent video analysis technology is widely used in many fields. Object tracking is an important part of intelligent video surveillance, but traditional target tracking based on the pixel coordinate system of the image suffers from some unavoidable problems: pixel-based tracking cannot reflect the real position of a target, and it is difficult to track objects across scenes. Building on an analysis of Zhengyou Zhang's camera calibration method, this paper presents a target tracking method based on the target's spatial coordinate system, obtained by converting the target's 2-D image coordinates into 3-D coordinates. The experimental results show that our method recovers the real position changes of targets well and can accurately obtain the trajectory of a target in space.
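
    A hedged sketch of the conversion step: once the intrinsics K and pose (R, t) are known from a Zhang-style calibration, an image point can be back-projected onto an assumed ground plane Z = 0 to recover a world position. The numbers below are illustrative, not calibrated values.

      import numpy as np

      K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
      R = np.eye(3)                     # world-to-camera rotation
      t = np.array([0., 0., 5.])        # world-to-camera translation

      def pixel_to_ground(u, v):
          ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray (camera)
          ray_world = R.T @ ray_cam                           # same ray, world frame
          origin = -R.T @ t                                   # camera centre in world
          s = -origin[2] / ray_world[2]                       # intersect plane Z = 0
          return origin + s * ray_world

      print(pixel_to_ground(320, 300))  # world (X, Y, 0) of one tracked pixel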

  15. Retinal slit lamp video mosaicking.

    PubMed

    De Zanet, Sandro; Rudolph, Tobias; Richa, Rogerio; Tappeiner, Christoph; Sznitman, Raphael

    2016-06-01

    To this day, the slit lamp remains the first tool used by an ophthalmologist to examine patients' eyes. Imaging of the retina, however, poses a variety of problems: a shallow depth of focus, reflections from the optical system, a small field of view and non-uniform illumination. As a result, the use of slit lamp images for documentation and analysis purposes remains extremely challenging for ophthalmologists due to large image artifacts. For this reason, we propose an automatic retinal slit lamp video mosaicking method, which enlarges the field of view and reduces the amount of noise and reflections, thus enhancing image quality. Our method is composed of three parts: (i) viable content segmentation, (ii) global registration and (iii) image blending. Frame content is segmented using gradient boosting with custom pixel-wise features. Speeded-up robust features are used for finding pair-wise translations between frames, with robust random sample consensus estimation and graph-based simultaneous localization and mapping for global bundle adjustment. Foreground-aware blending based on feathering merges video frames into comprehensive mosaics. Foreground is segmented successfully with an area under the receiver operating characteristic curve of 0.9557. Mosaicking results were compared with state-of-the-art methods and rated by ophthalmologists, showing a strong preference for the large field of view provided by our method. The proposed method for global registration of slit lamp images of the retina into comprehensive mosaics improves over state-of-the-art methods and is preferred qualitatively.
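
    The pair-wise registration step might look like the sketch below; SURF requires OpenCV's non-free build, so ORB stands in here, and the robust translation is estimated with RANSAC via cv2.estimateAffinePartial2D rather than the paper's graph-based bundle adjustment.

      import cv2
      import numpy as np

      def pairwise_translation(frame_a, frame_b):
          orb = cv2.ORB_create(1000)
          ka, da = orb.detectAndCompute(frame_a, None)
          kb, db = orb.detectAndCompute(frame_b, None)
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = sorted(matcher.match(da, db), key=lambda m: m.distance)[:200]
          pa = np.float32([ka[m.queryIdx].pt for m in matches])
          pb = np.float32([kb[m.trainIdx].pt for m in matches])
          M, _ = cv2.estimateAffinePartial2D(pa, pb, method=cv2.RANSAC)
          return M[:, 2]                # (dx, dy) shift between the two frames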

  16. Droplet morphometry and velocimetry (DMV): a video processing software for time-resolved, label-free tracking of droplet parameters.

    PubMed

    Basu, Amar S

    2013-05-21

    Emerging assays in droplet microfluidics require the measurement of parameters such as drop size, velocity, trajectory, shape deformation, fluorescence intensity, and others. While micro particle image velocimetry (μPIV) and related techniques are suitable for measuring flow using tracer particles, no tool exists for tracking droplets at the granularity of a single entity. This paper presents droplet morphometry and velocimetry (DMV), a digital video processing software for time-resolved droplet analysis. Droplets are identified through a series of image processing steps which operate on transparent, translucent, fluorescent, or opaque droplets. The steps include background image generation, background subtraction, edge detection, small object removal, morphological close and fill, and shape discrimination. A frame correlation step then links droplets spanning multiple frames via a nearest-neighbor search with user-defined matching criteria. Each step can be individually tuned for maximum compatibility. For each droplet found, DMV provides a time history of 20 different parameters, including trajectory, velocity, area, dimensions, shape deformation, orientation, nearest-neighbor spacing, and pixel statistics. The data can be reported via scatter plots, histograms, and tables at the granularity of individual droplets or by statistics accrued over the population. We present several case studies from industry and academic labs, including the measurement of 1) size distributions and flow perturbations in a drop generator, 2) size distributions and mixing rates in drop splitting/merging devices, 3) efficiency of single cell encapsulation devices, 4) position tracking in electrowetting operations, 5) chemical concentrations in a serial drop dilutor, 6) drop sorting efficiency of a tensiophoresis device, 7) plug length and orientation of nonspherical plugs in a serpentine channel, and 8) high-throughput tracking of >250 drops in a reinjection system. Performance metrics show that the highest accuracy and precision are obtained when the video resolution is >300 pixels per drop. Analysis time increases proportionally with video resolution. The current version of the software provides throughputs of 2-30 fps, suggesting the potential for real-time analysis.
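
    A condensed sketch of the detection-plus-linking chain described above (background subtraction, edge detection, morphological close, small-object removal, then nearest-neighbor frame correlation); every threshold is a placeholder that a user would tune per video.

      import cv2
      import numpy as np

      def detect_droplets(frame_gray, background, min_area=30):
          diff = cv2.absdiff(frame_gray, background)       # background subtraction
          edges = cv2.Canny(diff, 50, 150)                 # edge detection
          closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE,
                                    np.ones((5, 5), np.uint8))
          n, labels, stats, centroids = cv2.connectedComponentsWithStats(closed)
          return [tuple(centroids[i]) for i in range(1, n)  # 0 = background label
                  if stats[i, cv2.CC_STAT_AREA] > min_area]

      def link(prev_pts, pts, max_dist=25.0):
          pairs = []                                       # frame-to-frame matching
          for j, c in enumerate(pts):
              d = [np.hypot(c[0] - p[0], c[1] - p[1]) for p in prev_pts]
              if d and min(d) < max_dist:
                  pairs.append((int(np.argmin(d)), j))
          return pairs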

  17. Different source image fusion based on FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Xiao; Piao, Yan

    2016-03-01

    Video image fusion uses technical means to make video streams obtained from different image sensors complement each other, yielding video that is rich in information and well suited to the human visual system. Infrared cameras can penetrate harsh environments such as smoke, fog and low-light conditions, but they capture poor image detail and their output does not suit the human visual system. Visible-light imaging, by contrast, can deliver detailed, high-resolution images appropriate for human vision, but it is easily affected by the external environment. Fusion algorithms for infrared and visible video are complex and computationally demanding, occupy considerable memory and require high clock rates; most implementations are therefore in software (e.g., C or C++), and hardware-platform implementations are rare. In this paper, based on the imaging characteristics of infrared and visible-light images, software and hardware are combined: the registration parameters are obtained in software using MATLAB, and the gray-level weighted-average fusion method is implemented on the hardware platform. The fused image effectively increases the amount of information acquired from the scene.
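
    The fusion rule itself is a per-pixel multiply-accumulate, which is what makes it attractive for an FPGA; a software reference of the gray-level weighted average (assuming pre-registered, equal-size frames) is only a few lines.

      import numpy as np

      def fuse(ir, vis, w_ir=0.5):
          # ir, vis: registered 8-bit grayscale frames of identical shape
          fused = w_ir * ir.astype(np.float32) + (1.0 - w_ir) * vis.astype(np.float32)
          return np.clip(fused, 0, 255).astype(np.uint8)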

  18. Optimized static and video EEG rapid serial visual presentation (RSVP) paradigm based on motion surprise computation

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Huber, David J.; Bhattacharyya, Rajan

    2017-05-01

    In this paper, we describe an algorithm and system for optimizing search and detection performance for "items of interest" (IOI) in large images and videos. The system employs the Rapid Serial Visual Presentation (RSVP) EEG paradigm together with surprise algorithms that incorporate motion processing to determine whether static or video RSVP should be used. It works by first computing a motion surprise map on image sub-regions (chips) of incoming sensor video data and then using those surprise maps to label each chip as either "static" or "moving". This label tells the system whether to use a static or video RSVP presentation and decoding algorithm, in order to optimize EEG-based detection of IOI in each chip. Using this method, we are able to demonstrate classification of a series of image regions from video with an Az value of 1, indicating perfect classification, over a range of display frequencies and video speeds.
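
    As a loose stand-in for the surprise computation (the abstract does not spell out the model), a chip can be labelled "moving" when its temporal activity is an outlier relative to the rest of the scene, as in this z-score sketch.

      import numpy as np

      def label_chips(frames, chip=64, z_thresh=2.0):
          # frames: (T, H, W) grayscale video; chips are non-overlapping squares
          motion = np.abs(np.diff(frames.astype(np.float32), axis=0)).mean(axis=0)
          coords = [(y, x) for y in range(0, frames.shape[1] - chip + 1, chip)
                           for x in range(0, frames.shape[2] - chip + 1, chip)]
          scores = np.array([motion[y:y + chip, x:x + chip].mean()
                             for y, x in coords])
          z = (scores - scores.mean()) / (scores.std() + 1e-9)
          # "moving" chips go to the video RSVP pipeline, the rest to static RSVP
          return {c: "moving" if zi > z_thresh else "static"
                  for c, zi in zip(coords, z)}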

  19. Deep Learning Localizes and Identifies Polyps in Real Time with 96% Accuracy in Screening Colonoscopy.

    PubMed

    Urban, Gregor; Tripathi, Priyam; Alkayali, Talal; Mittal, Mohit; Jalali, Farid; Karnes, William; Baldi, Pierre

    2018-06-18

    The benefit of colonoscopy for colorectal cancer prevention depends on the adenoma detection rate (ADR). The ADR should reflect adenoma prevalence rate, estimated to be greater than 50% among the screening-age population. Yet the rate of adenoma detection by colonoscopists varies from 7% to 53%. It is estimated that every 1% increase in ADR reduces the risk of interval colorectal cancers by 3-6%. New strategies are needed to increase the ADR during colonoscopy. We tested the ability of computer-assisted image analysis, with convolutional neural networks (a deep learning model for image analysis), to improve polyp detection, a surrogate of ADR. We designed and trained deep convolutional neural networks (CNN) to detect polyps using a diverse and representative set of 8641 hand labeled images from screening colonoscopies collected from over 2000 patients. We tested the models on 20 colonoscopy videos with a total duration of 5 hours. Expert colonoscopists were asked to identify all polyps in 9 de-identified colonoscopy videos, selected from archived video studies, either with or without benefit of the CNN overlay. Their findings were compared with those of the CNN, using CNN-assisted expert review as the reference. When tested on manually labeled images, the CNN identified polyps with an area under the receiver operating characteristic curve (ROC-AUC) of 0.991 and an accuracy of 96.4%. In the analysis of colonoscopy videos in which 28 polyps were removed, 4 expert reviewers identified 8 additional polyps without CNN assistance that had not been removed and identified an additional 17 polyps with CNN assistance (45 in total). All polyps removed and identified by expert review were detected by the CNN. The CNN had a false-positive rate of 7%. In a set of 8641 colonoscopy images containing 4088 unique polyps the CNN identified polyps with a cross-validation accuracy of 96.4% and ROC-AUC value of 0.991. The CNN system can detect and localize polyps well within real-time constraints using an ordinary desktop machine with a contemporary graphics processing unit. This system could increase ADR and reduce interval colorectal cancers but requires validation in large multicenter trials. Copyright © 2018 AGA Institute. Published by Elsevier Inc. All rights reserved.

  20. Pyroclast Tracking Velocimetry: A particle tracking velocimetry-based tool for the study of Strombolian explosive eruptions

    NASA Astrophysics Data System (ADS)

    Gaudin, Damien; Moroni, Monica; Taddeucci, Jacopo; Scarlato, Piergiorgio; Shindler, Luca

    2014-07-01

    Image-based techniques enable high-resolution observation of the pyroclasts ejected during Strombolian explosions and support inferences on the dynamics of volcanic activity. However, data extraction from high-resolution videos is time consuming and operator dependent, while automatic analysis is often challenging due to the highly variable quality of images collected in the field. Here we present a new set of algorithms to automatically analyze image sequences of explosive eruptions: the pyroclast tracking velocimetry (PyTV) toolbox. First, significant preprocessing is used to remove the image background and detect the pyroclasts. Then, pyroclast tracking is achieved with a new particle tracking velocimetry algorithm, featuring an original velocity predictor based on the optical flow equation. Finally, postprocessing corrects the systematic errors of the measurements. Four high-speed videos of Strombolian explosions from the Yasur and Stromboli volcanoes, representing various observation conditions, have been used to test the efficiency of PyTV against manual analysis. In all cases, >10^6 pyroclasts have been successfully detected and tracked by PyTV, with a precision of 1 m/s for the velocity and 20% for the size of the pyroclasts. In each video, more than 1000 tracks are several meters long, enabling us to study pyroclast properties and trajectories. Compared to manual tracking, 3 to 100 times more pyroclasts are analyzed. PyTV, by providing time-constrained information, links the physical properties and motion of individual pyroclasts. It is a powerful tool for the study of explosive volcanic activity, as well as an ideal complement to other geological and geophysical volcano observation systems.
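
    The tracking step can be sketched as follows: a dense optical-flow field serves as the velocity predictor, and each tracked pyroclast is matched to the detection nearest its predicted position in the next frame. Detection and the post-processing corrections are assumed to happen elsewhere.

      import cv2
      import numpy as np

      def track_step(img0, img1, particles0, detections1, max_dist=15.0):
          flow = cv2.calcOpticalFlowFarneback(img0, img1, None,
                                              0.5, 3, 21, 3, 5, 1.2, 0)
          matches = []
          for i, (x, y) in enumerate(particles0):
              dx, dy = flow[int(y), int(x)]           # predicted displacement
              pred = np.array([x + dx, y + dy])
              d = np.linalg.norm(np.asarray(detections1) - pred, axis=1)
              if d.min() < max_dist:                  # accept nearest detection
                  matches.append((i, int(d.argmin())))
          return matches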

  1. Video Skimming and Characterization through the Combination of Image and Language Understanding Techniques

    NASA Technical Reports Server (NTRS)

    Smith, Michael A.; Kanade, Takeo

    1997-01-01

    Digital video is rapidly becoming important for education, entertainment, and a host of multimedia applications. With the size of the video collections growing to thousands of hours, technology is needed to effectively browse segments in a short time without losing the content of the video. We propose a method to extract the significant audio and video information and create a "skim" video which represents a very short synopsis of the original. The goal of this work is to show the utility of integrating language and image understanding techniques for video skimming by extraction of significant information, such as specific objects, audio keywords and relevant video structure. The resulting skim video is much shorter, where compaction is as high as 20:1, and yet retains the essential content of the original segment.

  2. Detection of low-amplitude in vivo intrinsic signals from an optical imager of retinal function

    NASA Astrophysics Data System (ADS)

    Barriga, Eduardo S.; T'so, Dan; Pattichis, Marios; Kwon, Young; Kardon, Randy; Abramoff, Michael; Soliz, Peter

    2006-02-01

    In the early stages of some retinal diseases, such as glaucoma, loss of retinal activity may be difficult to detect with today's clinical instruments. Many of today's instruments focus on detecting changes in anatomical structures, such as the nerve fiber layer. Our device, which is based on a modified fundus camera, seeks to detect changes in optical signals that reflect functional changes in the retina. The functional imager uses a patterned stimulus at a wavelength of 535 nm. An intrinsic functional signal is collected at a near-infrared wavelength. Measured changes in reflectance in response to the visual stimulus are on the order of 0.1% to 1% of the total reflected intensity, which makes the functional signal difficult to detect by standard methods because it is masked by other physiological signals and by imaging system noise. In this paper, we analyze the video sequences from a set of 60 experiments in cats with different patterned stimuli. Using a set of statistical techniques known as Independent Component Analysis (ICA), we estimate the signals present in the videos. Through controlled simulation experiments, we quantify the limits of signal strength required to detect the physiological signal of interest. The results of the analysis show that, in principle, signal levels of 0.1% (-30 dB) can be detected. The study found that in 86% of the animal experiments the effects of the patterned stimuli on the retina could be detected and extracted. The analysis of the different responses extracted from the videos can give insight into the functional processes present during stimulation of the retina.
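
    The ICA step might look like the following sketch, in which each pixel's time course is one column of the data matrix and FastICA (one common ICA variant, not necessarily the one used in the study) separates stimulus-locked components from noise.

      import numpy as np
      from sklearn.decomposition import FastICA

      def unmix(video, n_components=5):
          # video: (T, H, W) reflectance sequence; rows of X are time samples
          T, H, W = video.shape
          X = video.reshape(T, -1)
          ica = FastICA(n_components=n_components, random_state=0)
          sources = ica.fit_transform(X)                    # (T, k) time courses
          maps = ica.mixing_.T.reshape(n_components, H, W)  # matching spatial maps
          return sources, maps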

  3. Multi-modal highlight generation for sports videos using an information-theoretic excitability measure

    NASA Astrophysics Data System (ADS)

    Hasan, Taufiq; Bořil, Hynek; Sangwan, Abhijeet; Hansen, John H. L.

    2013-12-01

    The ability to detect and organize `hot spots' representing areas of excitement within video streams is a challenging research problem when techniques rely exclusively on video content. A generic method for sports video highlight selection is presented in this study which leverages both video/image structure and audio/speech properties. Processing begins by partitioning the video into small segments and extracting several multi-modal features from each segment. Excitability is computed based on the likelihood of the segmental features residing in certain regions of their joint probability density function space which are considered both exciting and rare. The proposed measure is used to rank order the partitioned segments to compress the overall video sequence and produce a contiguous set of highlights. Experiments are performed on baseball videos, using signal processing advancements for excitement assessment in the commentators' speech, audio energy, slow-motion replay, scene-cut density, and motion activity as features. A detailed analysis of the correlation between user excitability and various speech production parameters is conducted, and an effective scheme is designed to estimate the excitement level of the commentator's speech from the sports videos. Subjective evaluation of excitability and ranking of video segments demonstrate a higher correlation with the proposed measure compared to well-established techniques, indicating the effectiveness of the overall approach.
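
    One way to picture the excitability measure: fit the joint density of the segment features, then favour segments that are both rare (low density) and intense. The sketch below is a loose stand-in using a Gaussian KDE and a mean-intensity proxy, not the paper's model.

      import numpy as np
      from scipy.stats import gaussian_kde

      def rank_segments(features):
          # features: (n_segments, n_features), assumed standardized so that
          # larger values mean "more exciting" (audio energy, scene cuts, ...)
          kde = gaussian_kde(features.T)
          rarity = -np.log(kde(features.T) + 1e-12)    # rare regions score high
          intensity = features.mean(axis=1)            # crude "exciting" proxy
          return np.argsort(rarity * intensity)[::-1]  # best highlights first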

  4. Performance evaluation of a two detector camera for real-time video.

    PubMed

    Lochocki, Benjamin; Gambín-Regadera, Adrián; Artal, Pablo

    2016-12-20

    Single-pixel imaging can be the preferred method over traditional 2D-array imaging in spectral ranges where conventional cameras are not available. However, when it comes to real-time video imaging, single-pixel imaging cannot compete with the framerates of conventional cameras, especially when high-resolution images are desired. Here we evaluate the performance of an imaging approach using two detectors simultaneously. First, we present theoretical results on how low SNR affects final image quality, followed by experimentally determined results. The obtained video framerates were doubled compared to state-of-the-art systems, ranging from 22 Hz at a 32×32 resolution to 0.75 Hz at a 128×128 resolution. Additionally, the two-detector imaging technique enables the acquisition of images with a resolution of 256×256 in less than 3 s.
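
    Conceptually, each displayed pattern splits the light between the two detectors, so a single projection measures both the pattern and its complement. A toy numpy reconstruction under that assumption (random binary masks, least-squares inversion) might read:

      import numpy as np

      rng = np.random.default_rng(1)
      n = 16 * 16                                     # 16x16 toy scene
      scene = rng.random(n)
      patterns = rng.choice([0.0, 1.0], size=(n, n))  # binary masks
      d1 = patterns @ scene                           # detector 1: passed light
      d2 = (1.0 - patterns) @ scene                   # detector 2: rejected light
      meas = d1 - d2                                  # = (2 * patterns - 1) @ scene
      recon, *_ = np.linalg.lstsq(2.0 * patterns - 1.0, meas, rcond=None)
      print(np.allclose(recon, scene))                # differential recovery works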

  5. Advanced IR System For Supersonic Boundary Layer Transition Flight Experiment

    NASA Technical Reports Server (NTRS)

    Banks, Daniel W.

    2008-01-01

    Infrared thermography is a preferred method for investigating boundary-layer transition in flight: (a) it is global and non-intrusive; (b) it can also be used to visualize and characterize other fluid-mechanic phenomena such as shock impingement and separation. The F-15-based system was updated with a new camera and digital video recorder to support high-Reynolds-number transition tests. Digital recording improves image quality and analysis capability, allows accurate quantitative (temperature) measurements, and permits greater enhancement through image processing, enabling the analysis of smaller-scale phenomena.

  6. Expert-novice differences in brain function of field hockey players.

    PubMed

    Wimshurst, Z L; Sowden, P T; Wright, M

    2016-02-19

    The aims of this study were to use functional magnetic resonance imaging to examine the neural bases for perceptual-cognitive superiority in a hockey anticipation task. Thirty participants (15 hockey players, 15 non-hockey players) lay in an MRI scanner while performing a video-based task in which they predicted the direction of an oncoming shot in either a hockey or a badminton scenario. Video clips were temporally occluded either 160 ms before the shot was made or 60 ms after the ball/shuttle left the stick/racquet. Behavioral data showed a significant hockey expertise×video-type interaction in which hockey experts were superior to novices with hockey clips but there were no significant differences with badminton clips. The imaging data, on the other hand, showed a significant main effect of hockey expertise and of video type (hockey vs. badminton), but the expertise×video-type interaction did not survive either a whole-brain or a small-volume correction for multiple comparisons. Further analysis of the expertise main effect revealed that when watching hockey clips, experts showed greater activation in the rostral inferior parietal lobule, which has been associated with an action observation network, and greater activation than novices in Brodmann areas 17 and 18 and the middle frontal gyrus when watching badminton videos. The results provide partial support both for domain-specific and domain-general expertise effects in an action anticipation task. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.
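
    In outline, the uncalibrated-pair case reduces to feature matching plus a RANSAC homography. The sketch below substitutes ORB for SURF (which needs OpenCV's non-free build) and a naive overwrite for the paper's boundary resampling blend.

      import cv2
      import numpy as np

      def stitch(img_a, img_b):
          orb = cv2.ORB_create(2000)
          ka, da = orb.detectAndCompute(img_a, None)
          kb, db = orb.detectAndCompute(img_b, None)
          matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(da, db)
          matches = sorted(matches, key=lambda m: m.distance)[:300]
          src = np.float32([kb[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([ka[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
          h, w = img_a.shape[:2]
          canvas = cv2.warpPerspective(img_b, H, (w * 2, h))  # img_b into img_a frame
          canvas[:, :w] = img_a                               # naive overwrite blend
          return canvas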

  8. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2011-12-01

    This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.

  9. Eyelid contour detection and tracking for startle research related eye-blink measurements from high-speed video records.

    PubMed

    Bernard, Florian; Deuter, Christian Eric; Gemmar, Peter; Schachinger, Hartmut

    2013-10-01

    Using the positions of the eyelids is an effective and contact-free way to measure startle-induced eye-blinks, which play an important role in human psychophysiological research. To the best of our knowledge, no conveniently usable methods exist for psychophysiological researchers to efficiently detect and track exact eyelid contours in image sequences captured at high speed. In this publication a semi-automatic, model-based eyelid contour detection and tracking algorithm for the analysis of high-speed video recordings from an eye tracker is presented. Because a large number of images had been acquired prior to method development, it was important that our technique be able to deal with images recorded without any special parametrisation of the eye tracker. The method entails pupil detection and specular reflection removal, and makes use of dynamic model adaptation. In a proof-of-concept study we achieved a correct detection rate of 90.6%. With this approach, we provide a feasible method to accurately assess eye-blinks from high-speed video recordings. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
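
    The pupil-detection step that anchors the eyelid model can be sketched with plain dark-region thresholding and a minimum enclosing circle; the threshold is an assumption, and OpenCV 4's two-value findContours signature is assumed.

      import cv2
      import numpy as np

      def find_pupil(gray, dark_thresh=40):
          _, mask = cv2.threshold(gray, dark_thresh, 255, cv2.THRESH_BINARY_INV)
          mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          if not contours:
              return None
          biggest = max(contours, key=cv2.contourArea)  # assume pupil = largest blob
          (cx, cy), radius = cv2.minEnclosingCircle(biggest)
          return (cx, cy), radius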

  10. Meteor44 Video Meteor Photometry

    NASA Technical Reports Server (NTRS)

    Swift, Wesley R.; Suggs, Robert M.; Cooke, William J.

    2004-01-01

    Meteor44 is a software system developed at MSFC for the calibration and analysis of video meteor data. The dynamic range of the (8-bit) video data is extended by approximately 4 magnitudes for both meteors and stellar images using saturation compensation. Camera- and lens-specific saturation compensation coefficients are derived from artificial variable star laboratory measurements. Saturation compensation significantly increases the number of meteors with measured intensity and improves the estimation of meteoroid mass distribution. Astrometry is automated to determine each image's plate coefficient using appropriate star catalogs. The images are simultaneously intensity calibrated from the contained stars to determine the photon sensitivity and the saturation level referenced above the atmosphere. The camera's spectral response is used to compensate for stellar color index and typical meteor spectra in order to report meteor light curves in traditional visual magnitude units. Recent efforts include improved camera calibration procedures, long focal length "streak" meteor photometry and two-station track determination. Meteor44 has been used to analyze data from the 2001, 2002 and 2003 MSFC Leonid observational campaigns as well as several lesser showers. The software is interactive and can be demonstrated using data from recent Leonid campaigns.

  11. Accessibility of online self-management support websites for people with osteoarthritis: A text content analysis.

    PubMed

    Chapman, Lara; Brooks, Charlotte; Lawson, Jem; Russell, Cynthia; Adams, Jo

    2017-01-01

    Objectives This study assessed accessibility of online self-management support webpages for people with osteoarthritis by considering readability of text and inclusion of images and videos. Methods Eight key search terms developed and agreed with patient and public involvement representatives were entered into the Google search engine. Webpages from the first page of Google search results were identified. Readability of webpage text was assessed using two standardised readability indexes, and the number of images and videos included on each webpage was recorded. Results Forty-nine webpages met the inclusion criteria and were assessed. Only five of the webpages met the recommended reading level for health education literature. Almost half (44.9%) of webpages did not include any informative images to support written information. A minority of the webpages (6.12%) included relevant videos. Discussion Information provided on health webpages aiming to support patients to self-manage osteoarthritis may not be read, understood or used effectively by many people accessing it. Recommendations include using accessible language in health information, supplementing written information with visual resources and reviewing content and readability in collaboration with patient and public involvement representatives.
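
    The abstract does not name the two readability indexes used; as one concrete example of this class of measure, the Flesch-Kincaid grade level can be computed as below (with a crude vowel-group syllable counter).

      import re

      def syllables(word):
          return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

      def fk_grade(text):
          sentences = max(1, len(re.findall(r"[.!?]+", text)))
          words = re.findall(r"[A-Za-z']+", text)
          syl = sum(syllables(w) for w in words)
          # standard Flesch-Kincaid grade-level formula
          return 0.39 * len(words) / sentences + 11.8 * syl / len(words) - 15.59

      print(round(fk_grade("Osteoarthritis is a joint disease. "
                           "Gentle exercise can ease the pain."), 1))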

  12. Common and Innovative Visuals: A sparsity modeling framework for video.

    PubMed

    Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder

    2014-05-02

    Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depicts the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework by CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.
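
    A deliberately simplified decomposition in the spirit of CIV, with the paper's joint compressed-sensing estimation replaced by a median common frame and thresholded residuals kept sparse:

      import numpy as np

      def common_and_innovations(frames, keep=0.1):
          # frames: (T, H, W) frames from a single scene/segment
          common = np.median(frames, axis=0)         # shared visual content
          innov = frames - common                    # per-frame dynamic residual
          cutoff = np.quantile(np.abs(innov), 1.0 - keep)
          innov[np.abs(innov) < cutoff] = 0.0        # enforce sparsity
          return common, innov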

  13. A discriminative structural similarity measure and its application to video-volume registration for endoscope three-dimensional motion tracking.

    PubMed

    Luo, Xiongbiao; Mori, Kensaku

    2014-06-01

    Endoscope 3-D motion tracking, which seeks to synchronize pre- and intra-operative images in endoscopic interventions, is usually performed as video-volume registration that optimizes the similarity between endoscopic video and pre-operative images. The tracking performance, in turn, depends significantly on whether a similarity measure can successfully characterize the difference between video sequences and volume-rendered images driven by pre-operative images. The paper proposes a discriminative structural similarity measure, which uses the degradation of structural information and takes image correlation or structure, luminance, and contrast into consideration, to boost video-volume registration. When applied to endoscope tracking, the proposed similarity measure was demonstrated to be more accurate and robust than several available similarity measures, e.g., local normalized cross correlation, normalized mutual information, modified mean square error, and normalized sum of squared differences. Based on clinical data evaluation, the tracking error was reduced significantly, from at least 14.6 mm to 4.5 mm, and processing was accelerated to more than 30 frames per second using a graphics processing unit.
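
    The measure builds on the familiar luminance/contrast/structure decomposition. A global-statistics sketch in that spirit (essentially the classic SSIM terms, not the authors' discriminative weighting) looks like this:

      import numpy as np

      def lcs_similarity(x, y, L=255.0):
          x, y = x.astype(np.float64), y.astype(np.float64)
          C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
          C3 = C2 / 2.0
          mx, my = x.mean(), y.mean()
          sx, sy = x.std(), y.std()
          sxy = ((x - mx) * (y - my)).mean()
          lum = (2 * mx * my + C1) / (mx ** 2 + my ** 2 + C1)   # luminance
          con = (2 * sx * sy + C2) / (sx ** 2 + sy ** 2 + C2)   # contrast
          stru = (sxy + C3) / (sx * sy + C3)                    # structure
          return lum * con * stru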

  14. Low-power coprocessor for Haar-like feature extraction with pixel-based pipelined architecture

    NASA Astrophysics Data System (ADS)

    Luo, Aiwen; An, Fengwei; Fujita, Yuki; Zhang, Xiangyu; Chen, Lei; Jürgen Mattausch, Hans

    2017-04-01

    Intelligent analysis of image and video data requires image-feature extraction as an important processing capability for machine-vision realization. A coprocessor with pixel-based pipeline (CFEPP) architecture is developed for real-time Haar-like cell-based feature extraction. Synchronization with the image sensor's pixel frequency and immediate use of each input pixel for the feature-construction process avoid dependence on memory-intensive conventional strategies like integral-image construction or frame buffers. One 180 nm CMOS prototype can extract the 1680-dimensional Haar-like feature vectors applied in the speeded up robust features (SURF) scheme using an on-chip memory of only 96 kb (kilobit). Additionally, a low power dissipation of only 43.45 mW at a 1.8 V supply voltage is achieved during VGA video processing at 120 MHz frequency with more than 325 fps. The Haar-like feature-extraction coprocessor is further evaluated in the practical application of vehicle recognition, achieving the expected high accuracy, which is comparable to previous work.
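
    For reference, a two-rectangle Haar-like feature is just a difference of box sums, which an integral image makes constant-time per feature. The software sketch below shows the arithmetic; the coprocessor's point is to produce the same values in a pixel-synchronous pipeline without storing whole frames.

      import numpy as np

      def integral(img):
          return img.astype(np.int64).cumsum(0).cumsum(1)

      def box_sum(ii, y0, x0, y1, x1):       # inclusive corners
          s = ii[y1, x1]
          if y0 > 0: s -= ii[y0 - 1, x1]
          if x0 > 0: s -= ii[y1, x0 - 1]
          if y0 > 0 and x0 > 0: s += ii[y0 - 1, x0 - 1]
          return s

      def haar_horizontal(img, y, x, h, w):  # left half minus right half
          ii = integral(img)
          left = box_sum(ii, y, x, y + h - 1, x + w // 2 - 1)
          right = box_sum(ii, y, x + w // 2, y + h - 1, x + w - 1)
          return left - right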

  15. Forensic hash for multimedia information

    NASA Astrophysics Data System (ADS)

    Lu, Wenjun; Varna, Avinash L.; Wu, Min

    2010-01-01

    Digital multimedia such as images and videos are prevalent on today's internet and cause significant social impact, which can be evidenced by the proliferation of social networking sites with user generated contents. Due to the ease of generating and modifying images and videos, it is critical to establish trustworthiness for online multimedia information. In this paper, we propose novel approaches to perform multimedia forensics using compact side information to reconstruct the processing history of a document. We refer to this as FASHION, standing for Forensic hASH for informatION assurance. Based on the Radon transform and scale space theory, the proposed forensic hash is compact and can effectively estimate the parameters of geometric transforms and detect local tampering that an image may have undergone. Forensic hash is designed to answer a broader range of questions regarding the processing history of multimedia data than the simple binary decision from traditional robust image hashing, and also offers more efficient and accurate forensic analysis than multimedia forensic techniques that do not use any side information.

  16. Moving object detection in top-view aerial videos improved by image stacking

    NASA Astrophysics Data System (ADS)

    Teutsch, Michael; Krüger, Wolfgang; Beyerer, Jürgen

    2017-08-01

    Image stacking is a well-known method that is used to improve the quality of images in video data. A set of consecutive images is aligned by applying image registration and warping. In the resulting image stack, each pixel has redundant information about its intensity value. This redundant information can be used to suppress image noise, resharpen blurry images, or even enhance the spatial image resolution as done in super-resolution. Small moving objects in the videos usually get blurred or distorted by image stacking and thus need to be handled explicitly. We use image stacking in an innovative way: image registration is applied to small moving objects only, and image warping blurs the stationary background that surrounds the moving objects. Our video data are coming from a small fixed-wing unmanned aerial vehicle (UAV) that acquires top-view gray-value images of urban scenes. Moving objects are mainly cars but also other vehicles such as motorcycles. The resulting images, after applying our proposed image stacking approach, are used to improve baseline algorithms for vehicle detection and segmentation. We improve precision and recall by up to 0.011, which corresponds to a reduction of the number of false positive and false negative detections by more than 3 per second. Furthermore, we show how our proposed image stacking approach can be implemented efficiently.
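
    Conventional stacking, the building block being repurposed here, aligns consecutive frames and averages them. A minimal OpenCV version using ECC translation alignment might read (float32 grayscale frames assumed):

      import cv2
      import numpy as np

      def stack(frames):
          ref = frames[0]
          acc = ref.copy()
          for f in frames[1:]:
              warp = np.eye(2, 3, dtype=np.float32)
              _, warp = cv2.findTransformECC(ref, f, warp, cv2.MOTION_TRANSLATION)
              acc += cv2.warpAffine(f, warp, (ref.shape[1], ref.shape[0]),
                                    flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
          return acc / len(frames)   # redundant pixels suppress noise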

  17. Intelligent keyframe extraction for video printing

    NASA Astrophysics Data System (ADS)

    Zhang, Tong

    2004-10-01

    Nowadays most digital cameras have the functionality of taking short video clips, with the length of video ranging from several seconds to a couple of minutes. The purpose of this research is to develop an algorithm which extracts an optimal set of keyframes from each short video clip so that the user could obtain proper video frames to print out. In current video printing systems, keyframes are normally obtained by evenly sampling the video clip over time. Such an approach, however, may not reflect highlights or regions of interest in the video. Keyframes derived in this way may also be improper for video printing in terms of either content or image quality. In this paper, we present an intelligent keyframe extraction approach to derive an improved keyframe set by performing semantic analysis of the video content. For a video clip, a number of video and audio features are analyzed to first generate a candidate keyframe set. These features include accumulative color histogram and color layout differences, camera motion estimation, moving object tracking, face detection and audio event detection. Then, the candidate keyframes are clustered and evaluated to obtain a final keyframe set. The objective is to automatically generate a limited number of keyframes to show different views of the scene; to show different people and their actions in the scene; and to tell the story in the video shot. Moreover, frame extraction for video printing, which is a rather subjective problem, is considered in this work for the first time, and a semi-automatic approach is proposed.
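
    A much-simplified sketch of the two stages (candidate nomination by colour-histogram change, then clustering down to a final set) is given below; the audio, face and motion cues from the paper are omitted.

      import cv2
      import numpy as np

      def keyframes(frames, n_final=5):
          # frames: list of BGR frames; candidates = largest histogram jumps
          hists = np.float32([cv2.calcHist([f], [0, 1, 2], None, [8, 8, 8],
                                           [0, 256] * 3).flatten() for f in frames])
          hists /= hists.sum(axis=1, keepdims=True) + 1e-9
          diffs = np.linalg.norm(np.diff(hists, axis=0), axis=1)
          cand = np.argsort(diffs)[-4 * n_final:] + 1
          crit = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
          _, labels, _ = cv2.kmeans(hists[cand], n_final, None, crit, 3,
                                    cv2.KMEANS_PP_CENTERS)
          picks = []
          for k in range(n_final):               # one representative per cluster
              idx = np.where(labels.ravel() == k)[0]
              if idx.size:
                  picks.append(int(cand[idx[0]]))
          return sorted(picks)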

  18. Improving Video Based Heart Rate Monitoring.

    PubMed

    Lin, Jian; Rozado, David; Duenser, Andreas

    2015-01-01

    Non-contact measurements of cardiac pulse can provide robust measurement of heart rate (HR) without the annoyance of attaching electrodes to the body. In this paper we explore a novel and reliable method to carry out video-based HR estimation and propose various performance improvements over existing approaches. The investigated method uses Independent Component Analysis (ICA) to detect the underlying HR signal from the mixed source signal present in the RGB channels of the image. The original ICA algorithm was implemented and several modifications were explored in order to determine which could be optimal for accurate HR estimation. Using statistical analysis, we compared the cardiac pulse rate estimates of the different methods, computed from the recorded videos, against a commercially available oximeter. We found that some of these methods are quite effective and efficient in improving the accuracy and latency of the system. We have made the code of our algorithms openly available to the scientific community so that other researchers can explore how to integrate video-based HR monitoring into novel health technology applications. We conclude by noting that recent advances in video-based HR monitoring permit computers to be aware of a user's psychophysiological status in real time.
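
    The core recovery chain (channel means, then ICA, then a spectral peak) fits in a few lines. This sketch assumes a (T, 3) array of per-frame RGB means sampled at rate fs and uses FastICA as the ICA implementation.

      import numpy as np
      from sklearn.decomposition import FastICA

      def estimate_hr(rgb, fs=30.0):
          X = (rgb - rgb.mean(0)) / (rgb.std(0) + 1e-9)   # standardize channels
          S = FastICA(n_components=3, random_state=0).fit_transform(X)
          freqs = np.fft.rfftfreq(len(S), d=1.0 / fs)
          band = (freqs > 0.75) & (freqs < 4.0)           # plausible 45-240 bpm
          spectra = np.abs(np.fft.rfft(S, axis=0))
          best = spectra[band].max(axis=0).argmax()       # most pulse-like source
          return 60.0 * freqs[band][spectra[band, best].argmax()]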

  19. Indexed Captioned Searchable Videos: A Learning Companion for STEM Coursework

    NASA Astrophysics Data System (ADS)

    Tuna, Tayfun; Subhlok, Jaspal; Barker, Lecia; Shah, Shishir; Johnson, Olin; Hovey, Christopher

    2017-02-01

    Videos of classroom lectures have proven to be a popular and versatile learning resource. A key shortcoming of the lecture video format is the difficulty of accessing the content of interest hidden in a video. This work meets this challenge with an advanced video framework featuring topical indexing, search, and captioning (ICS videos). Standard optical character recognition (OCR) technology was enhanced with image transformations for extraction of text from video frames to support indexing and search. The images and text on video frames are analyzed to divide lecture videos into topical segments. The ICS video player integrates indexing, search, and captioning in video playback, providing instant access to the content of interest. This video framework has been used by more than 70 courses in a variety of STEM disciplines and assessed by more than 4000 students. Results from the surveys demonstrate the value of the videos as a learning resource and the role played by videos in a student's learning process. Survey results also establish the value of indexing and search features in a video platform for education. This paper reports on the development and evaluation of the ICS video framework and over 5 years of usage experience in several STEM courses.

  20. Dynamic strain distribution of FRP plate under blast loading

    NASA Astrophysics Data System (ADS)

    Saburi, T.; Yoshida, M.; Kubota, S.

    2017-02-01

    The dynamic strain distribution of a fiber-reinforced plastic (FRP) plate under blast loading was investigated using a Digital Image Correlation (DIC) image analysis method. The test FRP plates were mounted parallel to each other on a steel frame. 50 g of composition C4 explosive was used as the blast loading source and set in the center of the FRP plates. The dynamic behavior of the FRP plate under blast loading was observed by two high-speed video cameras. The two high-speed video image sequences were used to analyze the three-dimensional strain distribution of the FRP by means of the DIC method. A point strain profile extracted from the analyzed strain distribution data was compared with a strain profile observed directly using a strain gauge, and it was shown that the strain profile obtained by the DIC method under blast loading is quantitatively accurate.

  1. Estimating populations of nesting brant using aerial videography

    USGS Publications Warehouse

    Anthony, R. Michael; Anderson, W.H.; Sedinger, J.S.; McDonald, L.L.

    1995-01-01

    We mounted a video camcorder in a single-engine aircraft to estimate nesting density along 10-m wide strip transects in black brant colonies on the Yukon Delta National Wildlife Refuge, Alaska during 1990-1992. A global positioning system (GPS) receiver was connected to the video recorder and a laptop computer to locate transects and annotate video tape with time and latitude-longitude at 1-second intervals. About 4-5 hours of flight time were required to record the 30-40 minutes of video tape needed to survey large (>5,000 nests in >10 km2) colonies. We conducted ground searches along transects to locate and identify nests for determining detection rates of nests in video images. Counts of nests from video transects were correlated with actual numbers of nests. Resolution of images was sufficient to detect 81% of known nests (with and without incubating females). Of these, 68% were correctly identified as brant nests. The most common misidentification of known nests was failure of viewers to see the nest that the detected bird was incubating. Unattended nests with exposed eggs, down-covered nests, and nesting brant, cackling Canada geese, and emperor geese were identified in video images. Flushing of incubating geese by survey aircraft was not significant. About 10% of known nests were unoccupied in video images compared to 16% unoccupied nests observed from tower blinds during periods without aircraft disturbance.

  2. Fast image interpolation for motion estimation using graphics hardware

    NASA Astrophysics Data System (ADS)

    Kelly, Francis; Kokaram, Anil

    2004-05-01

    Motion estimation and compensation is the key to high-quality video coding. Block-matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel-accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation, due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full-search block-matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
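
    For reference, the arithmetic being offloaded is ordinary bilinear interpolation: GPUs map it onto texture-filtering hardware, but a software version is just a weighted average of the four neighbouring pixels.

      import numpy as np

      def bilinear(img, x, y):
          x0, y0 = int(np.floor(x)), int(np.floor(y))
          x1 = min(x0 + 1, img.shape[1] - 1)
          y1 = min(y0 + 1, img.shape[0] - 1)
          ax, ay = x - x0, y - y0
          top = (1 - ax) * img[y0, x0] + ax * img[y0, x1]
          bot = (1 - ax) * img[y1, x0] + ax * img[y1, x1]
          return (1 - ay) * top + ay * bot   # half-pixel sample for block matching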

  3. Optical tracking of embryonic vertebrates behavioural responses using automated time-resolved video-microscopy system

    NASA Astrophysics Data System (ADS)

    Walpitagama, Milanga; Kaslin, Jan; Nugegoda, Dayanthi; Wlodkowic, Donald

    2016-12-01

    The fish embryo toxicity (FET) biotest performed on embryos of zebrafish (Danio rerio) has gained significant popularity as a rapid and inexpensive alternative approach in chemical hazard and risk assessment. The FET was designed to evaluate acute toxicity in embryonic stages of fish exposed to the test chemical. The current standard, however, like most traditional methods for evaluating aquatic toxicity, provides little understanding of the effects of environmentally relevant concentrations of chemical stressors. We postulate that significant environmental effects such as altered motor functions, physiological alterations reflected in heart rate, and effects on development and reproduction can occur at sub-lethal concentrations well below the LC10. Behavioral studies can therefore provide a valuable integrative link between physiological and ecological effects. Despite the advantages of behavioral analysis, the development of behavioral toxicity biotests is greatly hampered by the lack of dedicated laboratory automation, in particular user-friendly and automated video microscopy systems. In this work we present a proof-of-concept development of an optical system capable of tracking the behavioral responses of embryonic vertebrates using automated and vastly miniaturized time-resolved video microscopy. We employed miniaturized CMOS cameras to perform high-definition video recording and analysis of the earliest vertebrate behavioral responses. The main objective was to develop biocompatible embryo positioning structures suitable for high-throughput imaging, together with video capture and video analysis algorithms. This system should support the development of sub-lethal and behavioral markers for accelerated environmental monitoring.

  4. Using computer-based video analysis in the study of fidgety movements.

    PubMed

    Adde, Lars; Helbostad, Jorunn L; Jensenius, Alexander Refsum; Taraldsen, Gunnar; Støen, Ragnhild

    2009-09-01

    Absence of fidgety movements (FM) in high-risk infants is a strong marker for later cerebral palsy (CP). FMs can be classified by the General Movement Assessment (GMA), based on Gestalt perception of the infant's movement pattern. More objective movement analysis may be provided by computer-based technology. The aim of this study was to explore the feasibility of a computer-based video analysis of infants' spontaneous movements in classifying non-fidgety versus fidgety movements. GMA was performed on video material from the fidgety period in 82 term and preterm infants at low and high risk of developing CP. The same videos were analysed using the developed software, called General Movement Toolbox (GMT), with visualisation of the infant's movements for qualitative analyses. Variables derived from the calculation of the displacement of pixels from one video frame to the next were used for quantitative analyses. Visual representations from GMT showed easily recognisable patterns of FMs. Of the eight quantitative variables derived, the variability in the displacement of a spatial centre of active pixels in the image had the highest sensitivity (81.5%) and specificity (70.0%) in classifying FMs. By setting triage thresholds at 90% sensitivity and specificity for FM, the need for further referral was reduced by 70%. Video recordings can be used for qualitative and quantitative analyses of FMs as provided by GMT. GMT is easy to implement in clinical practice, and may provide assistance in detecting infants without FMs.
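
    The variable reported above as most discriminative can be approximated in a few lines: frame differencing marks "active" pixels, and the score is the variability of their spatial centroid over time. The difference threshold is an arbitrary assumption.

      import numpy as np

      def centroid_variability(frames, thresh=15):
          centroids = []
          for a, b in zip(frames[:-1], frames[1:]):
              active = np.abs(b.astype(np.int16) - a.astype(np.int16)) > thresh
              ys, xs = np.nonzero(active)
              if xs.size:
                  centroids.append((xs.mean(), ys.mean()))
          c = np.asarray(centroids)
          return float(c.std(axis=0).mean()) if len(c) else 0.0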

  5. The study of surgical image quality evaluation system by subjective quality factor method

    NASA Astrophysics Data System (ADS)

    Zhang, Jian J.; Xuan, Jason R.; Yang, Xirong; Yu, Honggang; Koullick, Edouard

    2016-03-01

    The GreenLightTM procedure is an effective and economical treatment for benign prostatic hyperplasia (BPH); almost a million patients have been treated with GreenLightTM worldwide. During the surgical procedure, the surgeon or physician relies on the video monitoring system to survey and confirm the surgical progress. Several obstructions can greatly affect the image quality of the monitoring video, such as laser glare from tissue and body fluid, air bubbles and debris generated by tissue evaporation, and bleeding. In order to improve the physician's visual experience of a laser surgical procedure, the system performance parameters related to image quality need to be well defined. However, image quality is the integrated set of perceptions of the overall degree of excellence of an image; in other words, it is the perceptually weighted combination of significant attributes (contrast, graininess, ...) of an image considered in its marketplace or application. There is therefore no standard definition of overall image or video quality, especially for the no-reference case (without a standard chart as reference). In this study, the Subjective Quality Factor (SQF) and acutance are used for no-reference image quality evaluation. Basic image quality parameters, such as sharpness, color accuracy, size of obstruction and transmission of obstruction, are used as sub-parameters to define the rating scale for image quality evaluation and comparison. Sample image groups were evaluated by human observers according to the rating scale. Surveys of physician groups were also conducted with lab-generated sample videos. The study shows that human subjective perception is a trustworthy way of evaluating image quality. A more systematic investigation of the relationship between video quality and the image quality of each frame will be conducted as a future study.

  6. Pre-processing SAR image stream to facilitate compression for transport on bandwidth-limited-link

    DOEpatents

    Rush, Bobby G.; Riley, Robert

    2015-09-29

    Pre-processing is applied to a raw VideoSAR (or similar near-video rate) product to transform the image frame sequence into a product that resembles more closely the type of product for which conventional video codecs are designed, while sufficiently maintaining utility and visual quality of the product delivered by the codec.

  7. Data compression techniques applied to high resolution high frame rate video technology

    NASA Technical Reports Server (NTRS)

    Hartz, William G.; Alexovich, Robert E.; Neustadter, Marc S.

    1989-01-01

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of methods of video data compression, described in the open literature, was conducted. The survey examines compression methods employing digital computing. The results of the survey are presented. They include a description of each method and assessment of image degradation and video data parameters. An assessment is made of present and near term future technology for implementation of video data compression in high speed imaging system. Results of the assessment are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementation of video data compression, are presented. Case studies of three microgravity experiments are presented and specific compression techniques and implementations are recommended.

  8. Software-codec-based full motion video conferencing on the PC using visual pattern image sequence coding

    NASA Astrophysics Data System (ADS)

    Barnett, Barry S.; Bovik, Alan C.

    1995-04-01

    This paper presents a real-time full-motion video conferencing system based on the Visual Pattern Image Sequence Coding (VPISC) software codec. The prototype system hardware comprises two personal computers, two camcorders, two frame grabbers, and an Ethernet connection. The prototype system software has a simple structure. It runs under the Disk Operating System, and includes a user interface, a video I/O interface, an event-driven network interface, and a free-running or frame-synchronous video codec that also acts as the controller for the video and network interfaces. Two video coders have been tested in this system. Simple implementations of Visual Pattern Image Coding and VPISC have both proven to support full-motion video conferencing with good visual quality. Future work will concentrate on expanding this prototype to support the motion-compensated version of VPISC, as well as encompassing point-to-point modem I/O and multiple network protocols. The application will be ported to multiple hardware platforms and operating systems. The motivation for developing this prototype system is to demonstrate the practicality of software-based real-time video codecs. Furthermore, software video codecs are not only cheaper, but are also more flexible system solutions, because they enable different computer platforms to exchange encoded video information without requiring on-board protocol-compatible video codec hardware. Software-based solutions enable true low-cost video conferencing that fits the `open systems' model of interoperability that is so important for building portable hardware and software applications.

  9. Tissue-plastinated vs. celloidin-embedded large serial sections in video, analog and digital photographic on-screen reproduction: a preliminary step to exact virtual 3D modelling, exemplified in the normal midface and cleft-lip and palate.

    PubMed

    Landes, Constantin A; Weichert, Frank; Geis, Philipp; Wernstedt, Katrin; Wilde, Anja; Fritsch, Helga; Wagner, Mathias

    2005-08-01

    This study analyses tissue-plastinated vs. celloidin-embedded large serial sections, their inherent artefacts and their suitability for common video, analog or digital photographic on-screen reproduction. Subsequent virtual 3D microanatomical reconstruction will increase our knowledge of normal and pathological microanatomy for cleft-lip-palate (clp) reconstructive surgery. Of 18 fetal (six clp, 12 control) specimens, six randomized specimens (two clp) were BiodurE12-plastinated, sawn, burnished 90 microm thick transversely (five) or frontally (one), stained with azureII/methylene blue, and counterstained with basic fuchsin (TP-AMF). The twelve remaining specimens (four clp) were celloidin-embedded, microtome-sectioned 75 microm thick transversely (ten) or frontally (two), and stained with haematoxylin-eosin (CE-HE). Computed planimetry gauged artefacts; structure differentiation was compared by light microscopy on video, analog and digital photography. Total artefact was 0.9% (TP-AMF) and 2.1% (CE-HE); TP-AMF showed higher colour contrast, gamut and luminance, and CE-HE more red contrast, saturation and hue (P < 0.4). All (100%) structures of interest were discerned by light microscopy, 83% on video, 76% on analog photography and 98% on digital photography. Computed image analysis assessed the greatest colour contrast, gamut, luminance and saturation on video; the most detailed, colour-balanced and sharpest images were obtained with digital photography (P < 0.02). TP-AMF retained spatial oversight and covered the entire area of interest; it should be combined, in different specimens, with CE-HE, which enables more refined muscle-fibre reproduction. Digital photography is preferred for on-screen analysis.

  10. Non-contact cardiac pulse rate estimation based on web-camera

    NASA Astrophysics Data System (ADS)

    Wang, Yingzhi; Han, Tailin

    2015-12-01

    In this paper, we introduce a new methodology for non-contact cardiac pulse rate estimation based on imaging photoplethysmography (iPPG) and blind source separation. This novel approach can be applied to color video recordings of the human face and is based on automatic face tracking along with blind source separation of the RGB color channels. The channel data extracted from the video are first pre-processed, e.g. by normalization and sphering. The cardiac pulse rate is then estimated by spectrum analysis of the source signals recovered by Independent Component Analysis (ICA) using the JADE algorithm. With Bland-Altman and correlation analysis, we compared the cardiac pulse rate extracted from videos recorded by a basic webcam to a commercial pulse oximetry sensor and achieved high accuracy and correlation. The root mean square error of the estimated results is 2.06 bpm, which indicates that the algorithm can realize non-contact measurement of cardiac pulse rate.
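
    As a rough illustration of the pipeline described above, the following Python sketch recovers a pulse rate from per-frame mean RGB values of a tracked face region. All names and parameter choices are illustrative rather than taken from the paper, and scikit-learn's FastICA stands in for the JADE algorithm:

        import numpy as np
        from sklearn.decomposition import FastICA

        def pulse_rate_bpm(rgb_traces, fps):
            """rgb_traces: (n_frames, 3) array of mean R, G, B over the face ROI per frame."""
            x = (rgb_traces - rgb_traces.mean(0)) / rgb_traces.std(0)   # normalization
            sources = FastICA(n_components=3, random_state=0).fit_transform(x)
            freqs = np.fft.rfftfreq(x.shape[0], d=1.0 / fps)
            band = (freqs >= 0.75) & (freqs <= 4.0)                     # 45-240 bpm band
            best_bpm, best_power = 0.0, 0.0
            for s in sources.T:                     # keep the component with the
                power = np.abs(np.fft.rfft(s))      # strongest in-band spectral peak
                if power[band].max() > best_power:
                    best_power = power[band].max()
                    best_bpm = 60.0 * freqs[band][power[band].argmax()]
            return best_bpm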

  11. Violence-related content in video game may lead to functional connectivity changes in brain networks as revealed by fMRI-ICA in young men.

    PubMed

    Zvyagintsev, M; Klasen, M; Weber, R; Sarkheil, P; Esposito, F; Mathiak, K A; Schwenzer, M; Mathiak, K

    2016-04-21

    In violent video games, players engage in virtual aggressive behaviors. Exposure to virtual aggressive behavior induces short-term changes in players' behavior. In a previous study, a violence-related version of the racing game "Carmageddon TDR2000" increased aggressive affects, cognitions, and behaviors compared to its non-violence-related version. This study investigates the differences in neural network activity during the playing of both versions of the video game. Functional magnetic resonance imaging (fMRI) recorded ongoing brain activity of 18 young men playing the violence-related and the non-violence-related version of the video game Carmageddon. Image time series were decomposed into functional connectivity (FC) patterns using independent component analysis (ICA), and template matching yielded a mapping to established functional brain networks. The FC patterns revealed a decrease in connectivity within six brain networks during the violence-related compared to the non-violence-related condition: three sensory-motor networks, the reward network, the default mode network (DMN), and the right-lateralized frontoparietal network. Playing violent racing games may thus change functional brain connectivity, in particular in the reward network and the DMN, even after controlling for event frequency. These changes may underlie the short-term increase of aggressive affects, cognitions, and behaviors observed after playing violent video games. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. Preliminary clinical evaluation of automated analysis of the sublingual microcirculation in the assessment of patients with septic shock: Comparison of automated versus semi-automated software.

    PubMed

    Sharawy, Nivin; Mukhtar, Ahmed; Islam, Sufia; Mahrous, Reham; Mohamed, Hassan; Ali, Mohamed; Hakeem, Amr A; Hossny, Osama; Refaa, Amera; Saka, Ahmed; Cerny, Vladimir; Whynot, Sara; George, Ronald B; Lehmann, Christian

    2017-01-01

    The outcome of patients in septic shock has been shown to be related to changes within the microcirculation. Modern imaging technologies are available to generate high-resolution video recordings of the microcirculation in humans. However, evaluation of the microcirculation is not yet implemented in the routine clinical monitoring of critically ill patients. This is mainly due to the large amount of time and user interaction required by current video analysis software. The aim of this study was to validate a newly developed automated method (CCTools®) for microcirculatory analysis of sublingual capillary perfusion in septic patients in comparison to standard semi-automated software (AVA3®). 204 videos from 47 patients were recorded using incident dark field (IDF) imaging. Total vessel density (TVD), proportion of perfused vessels (PPV), perfused vessel density (PVD), microvascular flow index (MFI) and heterogeneity index (HI) were measured using AVA3® and CCTools®. Significant differences between the numeric results obtained by the two software packages were observed, although the values for TVD, PVD and MFI were statistically related. The automated software succeeded in showing septic-shock-induced microcirculation alterations in near real time. However, we found wide limits of agreement between AVA3® and CCTools® values due to several technical factors that should be considered in future studies.
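
    For reference, the standard sublingual perfusion indices named above can be computed from per-vessel measurements roughly as follows. This is a minimal sketch assuming each detected vessel is summarised by its length and an ordinal flow score (0 = absent, 1 = intermittent, 2 = sluggish, 3 = continuous); the two packages' internal definitions may differ in detail:

        def microcirculation_indices(vessels, image_area_mm2):
            """vessels: list of dicts with 'length_mm' and 'flow' (0-3) per vessel."""
            total_len = sum(v["length_mm"] for v in vessels)
            perf_len = sum(v["length_mm"] for v in vessels if v["flow"] >= 2)
            tvd = total_len / image_area_mm2                 # total vessel density, mm/mm^2
            ppv = 100.0 * perf_len / total_len if total_len else 0.0   # % perfused
            pvd = perf_len / image_area_mm2                  # perfused vessel density
            return {"TVD": tvd, "PPV": ppv, "PVD": pvd}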

  13. Spectrally And Temporally Resolved Low-Light Level Video Microscopy

    NASA Astrophysics Data System (ADS)

    Wampler, John E.; Furukawa, Ruth; Fechheimer, Marcus

    1989-12-01

    The IDG low-light video microscope system was designed to aid studies of localization of subcellular luminescence sources and stimulus/response coupling in single living cells using luminescent probes. Much of the motivation for the design of this instrument system came from the pioneering efforts of Dr. Reynolds (Reynolds, Q. Rev. Biophys. 5, 295-347; Reynolds and Taylor, Bioscience 30, 586-592), who showed the value of intensified video camera systems for detection and localization of fluorescence and bioluminescence signals from biological tissues. Our instrument system has essentially two roles: 1) localization and quantitation of very weak bioluminescence signals, and 2) quantitation of intracellular environmental characteristics such as pH and calcium ion concentrations using fluorescent and bioluminescent probes. The instrument system exhibits an operating range of over one million fold, allowing visualization and enhancement of quantum-limited images with quantum-limited response, spectral analysis of fluorescence signals, and transmitted light imaging. The computer control of the system implements rapid switching between light regimes, spatially resolved spectral scanning, and digital data processing for spectral shape analysis and for detailed analysis of the statistical distribution of single cell measurements. The system design and software algorithms used by the system are summarized. These design criteria are illustrated with examples taken from studies of bioluminescence, applications of bioluminescence to study developmental processes and gene expression in single living cells, and applications of fluorescent probes to study stimulus/response coupling in living cells.

  14. USB video image controller used in CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Zhang, Wenxuan; Wang, Yuxia; Fan, Hong

    2002-09-01

    The CMOS process is a mainstream VLSI technique with high integration. SE402 is a multifunction microcontroller that integrates image data I/O ports, clock control, exposure control and digital signal processing into one chip, reducing the chip count and board area. This paper focuses on the USB video image controller used with a CMOS image sensor and presents its application in a digital still camera.

  15. More About The Video Event Trigger

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1996-01-01

    Report presents additional information about system described in "Video Event Trigger" (LEW-15076). Digital electronic system processes video-image data to generate trigger signal when image shows significant change, such as motion, or appearance, disappearance, change in color or brightness, or dilation of object. Potential uses include monitoring of hallways, parking lots, and other areas during hours when they are supposed to be unoccupied, looking for fires, tracking airplanes or other moving objects, identification of missing or defective parts on production lines, and video recording of automobile crash tests.
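
    A minimal software analogue of such a trigger (our sketch, not the NASA hardware design) fires when more than a set fraction of pixels changes appreciably between consecutive frames:

        import numpy as np

        def event_trigger(prev_gray, curr_gray, pixel_delta=25, area_frac=0.01):
            """Return True when the frame pair shows a significant change."""
            diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
            return (diff > pixel_delta).mean() > area_frac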

  16. Database for the collection and analysis of clinical data and images of neoplasms of the sinonasal tract.

    PubMed

    Trimarchi, Matteo; Lund, Valerie J; Nicolai, Piero; Pini, Massimiliano; Senna, Massimo; Howard, David J

    2004-04-01

    The Neoplasms of the Sinonasal Tract software package (NSNT v 1.0) implements a complete visual database for patients with sinonasal neoplasia, facilitating standardization of data and statistical analysis. The software, which is compatible with the Macintosh and Windows platforms, provides a multiuser application with a dedicated server (on Windows NT or 2000, or Macintosh OS 9 or X, with a network of clients) together with web access, if required. The system hardware consists of an Apple Power Macintosh G4 500 MHz computer with PCI bus, 256 MB of RAM and a 60 GB hard disk, or any IBM-compatible computer with a Pentium 2 processor. Image acquisition may be performed with different frame-grabber cards for analog or digital video input of different standards (PAL, SECAM, or NTSC) and levels of quality (VHS, S-VHS, Betacam, Mini DV, DV). The visual database is based on 4th Dimension by 4D Inc, and video compression is performed in real-time MPEG format. Six sections have been developed: demographics, symptoms, extent of disease, radiology, treatment, and follow-up. Acquisition of data includes computed tomography and magnetic resonance imaging, histology, and endoscopy images, allowing sequential comparison. Statistical analysis integral to the program provides Kaplan-Meier survival curves. The development of a dedicated, user-friendly database for sinonasal neoplasia facilitates a multicenter network and has obvious clinical and research benefits.

  17. A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer

    NASA Astrophysics Data System (ADS)

    Luckman, Adrian J.; Allinson, Nigel M.

    1989-03-01

    A low cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn Risc Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system comprises real-time video digitising hardware which interfaces directly to the Archimedes memory, and software to provide an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen and program control is directed mostly by pop-up menus.

  18. Computer-assisted image analysis to quantify daily growth rates of broiler chickens.

    PubMed

    De Wet, L; Vranken, E; Chedad, A; Aerts, J M; Ceunen, J; Berckmans, D

    2003-09-01

    1. The objective was to investigate the possibility of detecting daily body weight changes of broiler chickens with computer-assisted image analysis. 2. The experiment included 50 broiler chickens reared under commercial conditions. Ten out of the 50 chickens were randomly selected and video recorded (upper view) 18 times during the 42-d growing period. The number of surface and periphery pixels from the images was used to derive a relationship between body dimension and live weight. 3. The relative error in weight estimation, expressed in terms of the standard deviation of the residuals from the image surface data, was 10%, while it was found to be 15% for the image periphery data. 4. Image-processing systems could be developed to assist the farmer in making important management and marketing decisions.
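
    The derived relationship can be illustrated by a least-squares fit of live weight against the two pixel counts; this sketch uses assumed variable names and a linear model form, which the abstract does not specify:

        import numpy as np

        def fit_weight_model(surface_px, periphery_px, weights_g):
            """Fit weight ~ c0 + c1*surface + c2*periphery; returns the coefficients."""
            s = np.asarray(surface_px, float)
            p = np.asarray(periphery_px, float)
            X = np.column_stack([np.ones_like(s), s, p])
            coeffs, *_ = np.linalg.lstsq(X, np.asarray(weights_g, float), rcond=None)
            return coeffs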

  19. Electronic magnification and perceived contrast of video

    PubMed Central

    Haun, Andrew; Woods, Russell L; Peli, Eli

    2012-01-01

    Electronic magnification of an image results in a decrease in its perceived contrast. The decrease in perceived contrast could be due to a perceived blur or to limited sampling of the range of contrasts in the original image. We measured the effect on perceived contrast of magnification in two contexts: either a small video was enlarged to fill a larger area, or a portion of a larger video was enlarged to fill the same area as the original. Subjects attenuated the source video contrast to match the perceived contrast of the magnified videos, with the effect increasing with magnification and decreasing with viewing distance. These effects are consistent with expectations based on both the contrast statistics of natural images and the contrast sensitivity of the human visual system. We demonstrate that local regions within videos usually have lower physical contrast than the whole, and that this difference accounts for a minor part of the perceived differences. Instead, visibility of ‘missing content’ (blur) in a video is misinterpreted as a decrease in contrast. We detail how the effects of magnification on perceived contrast can be measured while avoiding confounding factors. PMID:23483111

  20. A professional and cost effective digital video editing and image storage system for the operating room.

    PubMed

    Scollato, A; Perrini, P; Benedetto, N; Di Lorenzo, N

    2007-06-01

    We propose an easy-to-construct digital video editing system ideal for producing video documentation and still images. A digital video editing system applicable to many video sources in the operating room is described in detail. The proposed system has proved easy to use and permits one to obtain videography quickly and easily. By mixing different streams of video input from all the devices in use in the operating room, the application of filters and effects produces a final, professional end-product. Recording on a DVD provides an inexpensive, portable and easy-to-use medium on which to store, re-edit, or tape at a later time. From stored videography it is easy to extract high-quality still images useful for teaching, presentations and publications. In conclusion, digital videography and still photography can easily be recorded by the proposed system, producing high-quality video recordings. The use of FireWire ports provides good compatibility with next-generation hardware and software. The high standard of quality makes the proposed system one of the lowest priced products available today.

  1. Enhance Video Film using Retinex method

    NASA Astrophysics Data System (ADS)

    Awad, Rasha; Al-Zuky, Ali A.; Al-Saleh, Anwar H.; Mohamad, Haidar J.

    2018-05-01

    An enhancement technique is used to improve the quality of the studied video. Statistics such as the mean and standard deviation are used as criteria in this paper, applied to each video clip after dividing it into 80 images. The studied filming environments have different light intensities (315, 566, and 644 lux); these different environments approximate real outdoor filming conditions. The outputs of the suggested algorithm are compared with the results before applying it. The method is applied in two ways: first, to the full video clip, to obtain the enhanced film directly; second, to every individual image, after which the enhanced images are recombined into the enhanced film. This paper shows that the enhancement technique yields a good-quality video film based on a statistical criterion, and its use is recommended in different applications.
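
    A minimal sketch of the statistical criterion described above, assuming greyscale frames (the paper's exact formulation is not given in the abstract):

        import numpy as np

        def clip_statistics(frames):
            """frames: iterable of 2D numpy arrays; returns clip-level mean and std."""
            means = [float(f.mean()) for f in frames]   # brightness per image
            stds = [float(f.std()) for f in frames]     # contrast per image
            return np.mean(means), np.mean(stds)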

  2. Lidar-Incorporated Traffic Sign Detection from Video Log Images of Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Li, Y.; Fan, J.; Huang, Y.; Chen, Z.

    2016-06-01

    Mobile Mapping System (MMS) simultaneously collects Lidar points and video log images of a scenario with a laser profiler and digital camera. Besides the textural details of the video log images, it also captures the 3D geometric shape of the point cloud. It is widely used in many transportation agencies to survey the street view and roadside transportation infrastructure, such as traffic signs and guardrails. Although much literature on traffic sign detection is available, it focuses on either the Lidar or the imagery data of traffic signs. Based on the well-calibrated extrinsic parameters of MMS, 3D Lidar points are, for the first time, incorporated into 2D video log images to enhance the detection of traffic signs both physically and visually. Based on the local elevation, the 3D pavement area is first located. Within a certain distance and height of the pavement, points of overhead and roadside traffic signs can be obtained according to the setup specification of traffic signs in different transportation agencies. The 3D candidate planes of traffic signs are then fitted using RANSAC plane-fitting of those points. By projecting the candidate planes onto the image, Regions of Interest (ROIs) of traffic signs are found physically with the geometric constraints between laser profiling and camera imaging. Random forest learning of the visual color and shape features of traffic signs is adopted to validate the sign ROIs from the video log images. The sequential occurrence of a traffic sign among consecutive video log images is defined by the geometric constraint of the imaging geometry and GPS movement. Candidate ROIs are predicted in this temporal context to double-check the salient traffic sign among video log images. The proposed algorithm is tested on a diverse set of scenarios on the interstate highway G-4 near Beijing, China under varying lighting conditions and occlusions. Experimental results show that the proposed algorithm enhances the rate of detecting traffic signs with the incorporation of the 3D planar constraint from their Lidar points. It is promising for the robust and large-scale survey of most transportation infrastructure with the application of MMS.
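
    The RANSAC plane-fitting step can be sketched generically as follows; the iteration count and inlier tolerance are illustrative, not the authors' settings:

        import numpy as np

        def ransac_plane(points, n_iter=200, tol=0.05, seed=0):
            """points: (N, 3) Lidar points; returns (unit normal, d) with n.p + d = 0."""
            rng = np.random.default_rng(seed)
            best_count, best_model = 0, None
            for _ in range(n_iter):
                sample = points[rng.choice(len(points), 3, replace=False)]
                normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
                norm = np.linalg.norm(normal)
                if norm < 1e-9:
                    continue                      # degenerate (collinear) sample
                normal /= norm
                d = -normal.dot(sample[0])
                inliers = np.abs(points @ normal + d) < tol
                if inliers.sum() > best_count:
                    best_count, best_model = int(inliers.sum()), (normal, d)
            return best_model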

  3. Image stacking approach to increase sensitivity of fluorescence detection using a low cost complementary metal-oxide-semiconductor (CMOS) webcam.

    PubMed

    Balsam, Joshua; Bruck, Hugh Alan; Kostov, Yordan; Rasooly, Avraham

    2012-01-01

    Optical technologies are important for biological analysis. Current biomedical optical analyses rely on high-cost, high-sensitivity optical detectors such as photomultipliers, avalanche photodiodes or cooled CCD cameras. In contrast, webcams, mobile phones and other popular consumer electronics use lower-sensitivity, lower-cost optical components such as photodiodes or CMOS sensors. In order for consumer electronics devices, such as webcams, to be useful for biomedical analysis, they must have increased sensitivity. We combined two strategies to increase the sensitivity of a CMOS-based fluorescence detector. We captured hundreds of low-sensitivity images using a webcam in video mode, instead of the single image typically used in cooled CCD devices. We then used a computational approach, an image stacking algorithm, to remove the noise by combining all of the images into a single image. While video mode is widely used for dynamic scene imaging (e.g. movies or time-lapse photography), it is not normally used to capture a single static image; here, stacking the video frames removes noise and increases sensitivity by more than thirty-fold. The portable, battery-operated webcam-based fluorometer system developed here consists of five modules: (1) a low cost CMOS webcam to monitor light emission, (2) a plate to perform assays, (3) filters and a multi-wavelength LED illuminator for fluorophore excitation, (4) a portable computer to acquire and analyze images, and (5) image stacking software for image enhancement. The samples consisted of various concentrations of fluorescein, ranging from 30 μM to 1000 μM, in a 36-well miniature plate. In single frame mode, the fluorometer's limit of detection (LOD) for fluorescein is ∼1000 μM, which is relatively insensitive. However, when used in video mode combined with image stacking enhancement, the LOD is dramatically reduced to 30 μM, a sensitivity similar to that of state-of-the-art ELISA plate photomultiplier-based readers. Numerous medical diagnostics assays rely on optical and fluorescence readers. Our novel combination of detection technologies, which is new to biodetection, may enable the development of new low cost optical detectors based on an inexpensive webcam (<$10). It has the potential to form the basis for high sensitivity, low cost medical diagnostics in resource-poor settings.
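
    The stacking step itself reduces, in a minimal sketch, to averaging the captured frames, so that uncorrelated noise falls roughly as the square root of the number of frames (details assumed, not taken from the paper):

        import numpy as np

        def stack_frames(frames):
            """frames: iterable of equally sized numpy images; returns their average."""
            acc, n = None, 0
            for f in frames:
                acc = f.astype(np.float64) if acc is None else acc + f
                n += 1
            return acc / n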

  4. Image stacking approach to increase sensitivity of fluorescence detection using a low cost complementary metal-oxide-semiconductor (CMOS) webcam

    PubMed Central

    Balsam, Joshua; Bruck, Hugh Alan; Kostov, Yordan; Rasooly, Avraham

    2013-01-01

    Optical technologies are important for biological analysis. Current biomedical optical analyses rely on high-cost, high-sensitivity optical detectors such as photomultipliers, avalanche photodiodes or cooled CCD cameras. In contrast, webcams, mobile phones and other popular consumer electronics use lower-sensitivity, lower-cost optical components such as photodiodes or CMOS sensors. In order for consumer electronics devices, such as webcams, to be useful for biomedical analysis, they must have increased sensitivity. We combined two strategies to increase the sensitivity of a CMOS-based fluorescence detector. We captured hundreds of low-sensitivity images using a webcam in video mode, instead of the single image typically used in cooled CCD devices. We then used a computational approach, an image stacking algorithm, to remove the noise by combining all of the images into a single image. While video mode is widely used for dynamic scene imaging (e.g. movies or time-lapse photography), it is not normally used to capture a single static image; here, stacking the video frames removes noise and increases sensitivity by more than thirty-fold. The portable, battery-operated webcam-based fluorometer system developed here consists of five modules: (1) a low cost CMOS webcam to monitor light emission, (2) a plate to perform assays, (3) filters and a multi-wavelength LED illuminator for fluorophore excitation, (4) a portable computer to acquire and analyze images, and (5) image stacking software for image enhancement. The samples consisted of various concentrations of fluorescein, ranging from 30 μM to 1000 μM, in a 36-well miniature plate. In single frame mode, the fluorometer's limit of detection (LOD) for fluorescein is ∼1000 μM, which is relatively insensitive. However, when used in video mode combined with image stacking enhancement, the LOD is dramatically reduced to 30 μM, a sensitivity similar to that of state-of-the-art ELISA plate photomultiplier-based readers. Numerous medical diagnostics assays rely on optical and fluorescence readers. Our novel combination of detection technologies, which is new to biodetection, may enable the development of new low cost optical detectors based on an inexpensive webcam (<$10). It has the potential to form the basis for high sensitivity, low cost medical diagnostics in resource-poor settings. PMID:23990697

  5. Online coupled camera pose estimation and dense reconstruction from video

    DOEpatents

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

    A product may receive each image in a stream of video images of a scene and, before processing the next image, generate information indicative of the position and orientation of the image capture device at the time of capturing the image. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
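
    The per-frame pose step can be sketched with OpenCV's RANSAC PnP solver, which likewise searches for a subset of 2D-3D correspondences producing a consistent projection; this is an illustrative stand-in, not the patented method itself:

        import numpy as np
        import cv2

        def estimate_pose(model_pts_3d, image_pts_2d, camera_matrix):
            """model_pts_3d: (N, 3) float32; image_pts_2d: (N, 2) float32 matches."""
            ok, rvec, tvec, inliers = cv2.solvePnPRansac(
                model_pts_3d, image_pts_2d, camera_matrix, None)
            if not ok:
                return None
            R, _ = cv2.Rodrigues(rvec)      # camera orientation as a rotation matrix
            position = -R.T @ tvec          # camera centre in model coordinates
            return R, position, inliers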

  6. Video-mosaicking of in vivo reflectance confocal microscopy images for noninvasive examination of skin lesion (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kose, Kivanc; Gou, Mengran; Yelamos, Oriol; Cordova, Miguel A.; Rossi, Anthony; Nehal, Kishwer S.; Camps, Octavia I.; Dy, Jennifer G.; Brooks, Dana H.; Rajadhyaksha, Milind

    2017-02-01

    In this report we describe a computer vision based pipeline to convert in-vivo reflectance confocal microscopy (RCM) videos collected with a handheld system into large field of view (FOV) mosaics. For many applications such as imaging of hard-to-access lesions, intraoperative assessment of Mohs margins, or delineation of lesion margins beyond clinical borders, raster scan based mosaicing techniques have clinically significant limitations. In such cases, clinicians often capture RCM videos by freely moving a handheld microscope over the area of interest, but the resulting videos lose large-scale spatial relationships. Video-mosaicking is a standard computational imaging technique to register and stitch together consecutive frames of videos into large FOV high resolution mosaics. However, mosaicing RCM videos collected in-vivo has unique challenges: (i) tissue may deform or warp due to physical contact with the microscope objective lens, (ii) discontinuities or "jumps" between consecutive images and motion blur artifacts may occur due to manual operation of the microscope, and (iii) optical sectioning and resolution may vary between consecutive images due to scattering and aberrations induced by changes in imaging depth and tissue morphology. We addressed these challenges by adapting or developing new algorithmic methods for video-mosaicking, specifically by modeling non-rigid deformations, followed by automatically detecting discontinuities (cut locations) and, finally, applying a data-driven image stitching approach that fully preserves resolution and tissue morphologic detail without imposing arbitrary pre-defined boundaries. We will present example mosaics obtained by clinical imaging of both melanoma and non-melanoma skin cancers. The ability to combine freehand mosaicing for handheld microscopes with preserved cellular resolution will have high-impact applications in diverse clinical settings, including low-resource healthcare systems.

  7. Video based object representation and classification using multiple covariance matrices.

    PubMed

    Zhang, Yurong; Liu, Quan

    2017-01-01

    Video based object recognition and classification has been widely studied in the computer vision and image processing areas. One main issue in this task is to develop an effective representation for video. This problem can generally be formulated as image set representation. In this paper, we present a new method called Multiple Covariance Discriminative Learning (MCDL) for the image set representation and classification problem. The core idea of MCDL is to represent an image set using multiple covariance matrices, with each covariance matrix representing one cluster of images. First, we use the Nonnegative Matrix Factorization (NMF) method to cluster the images within each image set, and then adopt Covariance Discriminative Learning on each cluster (subset) of images. Finally, we adopt KLDA and a nearest-neighbor classification method for image set classification. Promising experimental results on several datasets show the effectiveness of our MCDL method.
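
    A compact sketch of the representation step, with k-means standing in for the paper's NMF clustering (names and parameters are ours):

        import numpy as np
        from sklearn.cluster import KMeans

        def image_set_covariances(features, n_clusters=3, eps=1e-6):
            """features: (n_images, d) array for one image set -> list of d x d SPD matrices."""
            labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
            covs = []
            for k in range(n_clusters):
                cluster = features[labels == k]
                if len(cluster) < 2:
                    continue
                c = np.cov(cluster, rowvar=False) + eps * np.eye(features.shape[1])
                covs.append(c)                  # regularised covariance per cluster
            return covs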

  8. Standardization of Freeze Frame TV Codecs

    DTIC Science & Technology

    1990-06-01

    Kodak SV9600 Still Video Transceiver; Colorado Video, Inc. 286 Digital Transceiver; Image Data Corp. CP-200 Photophone; Interand Corp. DISCON Imagephone ... error recovery: proprietary, by retransmission ... image build-up: sequential ... and information transfer is effected among terminals. An indication of the function and power of these commands can be obtained by reviewing Table

  9. Fluorescent screens and image processing for the APS linac test stand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berg, W.; Ko, K.

    A fluorescent screen was used to monitor relative beam position and spot size of a 56-MeV electron beam in the linac test stand. A chromium doped alumina ceramic screen inserted into the beam was monitored by a video camera. The resulting image was captured using a frame grabber and stored into memory. Reconstruction and analysis of the stored image was performed using PV-WAVE. This paper will discuss the hardware and software implementation of the fluorescent screen and imaging system. Proposed improvements for the APS linac fluorescent screens and image processing will also be discussed.

  10. Research Instruments

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The GENETI-SCANNER, newest product of Perceptive Scientific Instruments, Inc. (PSI), rapidly scans slides and locates, digitizes, measures and classifies specific objects and events in research and diagnostic applications. Founded by former NASA employees, PSI bases its primary product line on NASA image processing technology. The instruments perform karyotyping - a process employed in the analysis and classification of chromosomes - using a video camera mounted on a microscope. Images are digitized, enabling chromosome image enhancement. The system enables karyotyping to be done significantly faster, increasing productivity and lowering costs. The product is no longer being manufactured.

  11. Citrus Inventory

    NASA Technical Reports Server (NTRS)

    1994-01-01

    An aerial color infrared (CIR) mapping system developed by Kennedy Space Center enables Florida's Charlotte County to accurately appraise its citrus groves while reducing appraisal costs. The technology was further advanced by development of a dual video system making it possible to simultaneously view images of the same area and detect changes. An image analysis system automatically surveys and photo interprets grove images as well as automatically counts trees and reports totals. The system, which saves both time and money, has potential beyond citrus grove valuation.

  12. The impact of thin models in music videos on adolescent girls' body dissatisfaction.

    PubMed

    Bell, Beth T; Lawton, Rebecca; Dittmar, Helga

    2007-06-01

    Music videos are a particularly influential, new form of mass media for adolescents, which include the depiction of scantily clad female models whose bodies epitomise the ultra-thin sociocultural ideal for young women. The present study is the first exposure experiment that examines the impact of thin models in music videos on the body dissatisfaction of 16-19-year-old adolescent girls (n=87). First, participants completed measures of positive and negative affect, body image, and self-esteem. Under the guise of a memory experiment, they then either watched three music videos, listened to three songs (from the videos), or learned a list of words. Affect and body image were assessed afterwards. In contrast to the music listening and word-learning conditions, girls who watched the music videos reported significantly elevated scores on an adaptation of the Body Image States Scale after exposure, indicating increased body dissatisfaction. Self-esteem was not found to be a significant moderator of this relationship. Implications and future research are discussed.

  13. Overview of image processing tools to extract physical information from JET videos

    NASA Astrophysics Data System (ADS)

    Craciunescu, T.; Murari, A.; Gelfusa, M.; Tiseanu, I.; Zoita, V.; EFDA Contributors, JET

    2014-11-01

    In magnetic confinement nuclear fusion devices such as JET, the last few years have witnessed a significant increase in the use of digital imagery, not only for the surveying and control of experiments, but also for the physical interpretation of results. More than 25 cameras are routinely used for imaging on JET in the infrared (IR) and visible spectral regions. These cameras can produce up to tens of Gbytes per shot and their information content can be very different, depending on the experimental conditions. However, the relevant information about the underlying physical processes is generally of much reduced dimensionality compared to the recorded data. The extraction of this information, which allows full exploitation of these diagnostics, is a challenging task. The image analysis consists, in most cases, of inverse problems which are typically ill-posed mathematically. The typology of objects to be analysed is very wide, and usually the images are affected by noise, low levels of contrast, low grey-level in-depth resolution, reshaping of moving objects, etc. Moreover, the plasma events have time constants of ms or tens of ms, which imposes tough conditions for real-time applications. On JET, in the last few years new tools and methods have been developed for physical information retrieval. The methodology of optical flow has allowed, under certain assumptions, the derivation of information about the dynamics of video objects associated with different physical phenomena, such as instabilities, pellets and filaments. The approach has been extended in order to approximate the optical flow within the MPEG compressed domain, allowing the manipulation of the large JET video databases and, in specific cases, even real-time data processing. The fast visible camera may provide new information that is potentially useful for disruption prediction. A set of methods, based on the extraction of structural information from the visual scene, have been developed for the automatic detection of MARFE (multifaceted asymmetric radiation from the edge) occurrences, which precede disruptions in density limit discharges. An original spot detection method has been developed for large surveys of videos in JET, and for the assessment of the long term trends in their evolution. The analysis of JET IR videos, recorded during JET operation with the ITER-like wall, allows the retrieval of data and hence correlation of the evolution of spots properties with macroscopic events, in particular series of intentional disruptions.
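
    As an illustration of the optical-flow methodology mentioned above, a generic dense-flow computation between two frames looks as follows (Farneback flow via OpenCV as a stand-in; the JET tools are custom):

        import cv2
        import numpy as np

        def dense_flow(prev_gray, curr_gray):
            flow = cv2.calcOpticalFlowFarneback(
                prev_gray, curr_gray, None,
                pyr_scale=0.5, levels=3, winsize=15,
                iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
            speed = np.linalg.norm(flow, axis=2)   # per-pixel motion magnitude
            return flow, speed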

  14. JSC Shuttle Mission Simulator (SMS) visual system payload bay video image

    NASA Technical Reports Server (NTRS)

    1981-01-01

    This space shuttle orbiter payload bay (PLB) video image is used in JSC's Fixed Based (FB) Shuttle Mission Simulator (SMS). The image is projected inside the FB-SMS crew compartment during mission simulation training. The FB-SMS is located in the Mission Simulation and Training Facility Bldg 5.

  15. Algorithm for Video Summarization of Bronchoscopy Procedures

    PubMed Central

    2011-01-01

    Background The duration of bronchoscopy examinations varies considerably depending on the diagnostic and therapeutic procedures used. It can last more than 20 minutes if a complex diagnostic work-up is included. With wide access to videobronchoscopy, the whole procedure can be recorded as a video sequence. Common practice relies on an active attitude of the bronchoscopist, who initiates the recording process and usually chooses to archive only selected views and sequences. However, it may be important to record the full bronchoscopy procedure as documentation when liability issues are at stake. Furthermore, an automatic recording of the whole procedure enables the bronchoscopist to focus solely on the performed procedures. Video recordings registered during bronchoscopies include a considerable number of frames of poor quality due to blurry or unfocused images. Such frames seem unavoidable due to the relatively tight endobronchial space, rapid movements of the respiratory tract due to breathing or coughing, and secretions which occur commonly in the bronchi, especially in patients suffering from pulmonary disorders. Methods The use of recorded bronchoscopy video sequences for diagnostic, reference and educational purposes could be considerably extended with efficient, flexible summarization algorithms. Thus, the authors developed a prototype system to create shortcuts (called summaries or abstracts) of bronchoscopy video recordings. Such a system, based on models described in previously published papers, employs image analysis methods to exclude frames or sequences of limited diagnostic or educational value. Results The algorithm for the selection or exclusion of specific frames or shots from video sequences recorded during bronchoscopy procedures is based on several criteria, including automatic detection of "non-informative" frames, frames showing the branching of the airways, and frames including pathological lesions. Conclusions The paper focuses on the challenge of generating summaries of bronchoscopy video recordings. PMID:22185344
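
    One common blur criterion that could implement the "non-informative" frame filter is the variance of the Laplacian; this is an assumed, illustrative choice, the authors' models being described in their earlier papers:

        import cv2

        def is_informative(gray_frame, blur_threshold=100.0):
            """Low Laplacian variance indicates a blurred, low-detail frame."""
            return cv2.Laplacian(gray_frame, cv2.CV_64F).var() >= blur_threshold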

  16. Integrated bronchoscopic video tracking and 3D CT registration for virtual bronchoscopy

    NASA Astrophysics Data System (ADS)

    Higgins, William E.; Helferty, James P.; Padfield, Dirk R.

    2003-05-01

    Lung cancer assessment involves an initial evaluation of 3D CT image data followed by interventional bronchoscopy. The physician, with only a mental image inferred from the 3D CT data, must guide the bronchoscope through the bronchial tree to sites of interest. Unfortunately, this procedure depends heavily on the physician's ability to mentally reconstruct the 3D position of the bronchoscope within the airways. In order to assist physicians in performing biopsies of interest, we have developed a method that integrates live bronchoscopic video tracking and 3D CT registration. The proposed method is integrated into a system we have been devising for virtual-bronchoscopic analysis and guidance for lung-cancer assessment. Previously, the system relied on a method that only used registration of the live bronchoscopic video to corresponding virtual endoluminal views derived from the 3D CT data. This procedure only performs the registration at manually selected sites; it does not draw upon the motion information inherent in the bronchoscopic video. Further, the registration procedure is slow. The proposed method has the following advantages: (1) it tracks the 3D motion of the bronchoscope using the bronchoscopic video; (2) it uses the tracked 3D trajectory of the bronchoscope to assist in locating sites in the 3D CT "virtual world" to perform the registration. In addition, the method incorporates techniques to: (1) detect and exclude corrupted video frames (to help make the video tracking more robust); (2) accelerate the computation of the many 3D virtual endoluminal renderings (thus, speeding up the registration process). We have tested the integrated tracking-registration method on a human airway-tree phantom and on real human data.

  17. SpotMetrics: An Open-Source Image-Analysis Software Plugin for Automatic Chromatophore Detection and Measurement.

    PubMed

    Hadjisolomou, Stavros P; El-Haddad, George

    2017-01-01

    Coleoid cephalopods (squid, octopus, and sepia) are renowned for their elaborate body patterning capabilities, which are employed for camouflage or communication. The specific chromatic appearance of a cephalopod, at any given moment, is a direct result of the combined action of their intradermal pigmented chromatophore organs and reflecting cells. Therefore, a lot can be learned about the cephalopod coloration system by video recording and analyzing the activation of individual chromatophores in time. The fact that adult cephalopods have small chromatophores, up to several hundred thousand in number, makes measurement and analysis over several seconds a difficult task. However, current advancements in videography enable high-resolution and high framerate recording, which can be used to record chromatophore activity in more detail and accuracy in both space and time domains. In turn, the additional pixel information and extra frames per video from such recordings result in large video files of several gigabytes, even when the recording spans only few minutes. We created a software plugin, "SpotMetrics," that can automatically analyze high resolution, high framerate video of chromatophore organ activation in time. This image analysis software can track hundreds of individual chromatophores over several hundred frames to provide measurements of size and color. This software may also be used to measure differences in chromatophore activation during different behaviors which will contribute to our understanding of the cephalopod sensorimotor integration system. In addition, this software can potentially be utilized to detect numbers of round objects and size changes in time, such as eye pupil size or number of bacteria in a sample. Thus, we are making this software plugin freely available as open-source because we believe it will be of benefit to other colleagues both in the cephalopod biology field and also within other disciplines.
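
    Detection and sizing of round spots of this kind can be sketched with OpenCV's blob detector; the parameters below are illustrative and SpotMetrics' own algorithm may differ:

        import cv2

        def detect_spots(gray_frame):
            """gray_frame: 8-bit image; returns (centre, diameter) per detected spot."""
            params = cv2.SimpleBlobDetector_Params()
            params.filterByArea = True
            params.minArea, params.maxArea = 20, 5000
            params.filterByCircularity = True
            params.minCircularity = 0.6            # keep roundish, blob-like spots
            detector = cv2.SimpleBlobDetector_create(params)
            return [(kp.pt, kp.size) for kp in detector.detect(gray_frame)]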

  18. Characterization of Axial Inducer Cavitation Instabilities via High Speed Video Recordings

    NASA Technical Reports Server (NTRS)

    Arellano, Patrick; Peneda, Marinelle; Ferguson, Thomas; Zoladz, Thomas

    2011-01-01

    Sub-scale water tests were undertaken to assess the viability of utilizing high resolution, high frame-rate digital video recordings of a liquid rocket engine turbopump axial inducer to characterize cavitation instabilities. These high speed video (HSV) images of various cavitation phenomena, including higher order cavitation, rotating cavitation, alternating blade cavitation, and asymmetric cavitation, as well as non-cavitating flows for comparison, were recorded from various orientations through an acrylic tunnel using one and two cameras at digital recording rates ranging from 6,000 to 15,700 frames per second. The physical characteristics of these cavitation forms, including the mechanisms that define the cavitation frequency, were identified. Additionally, these images showed how the cavitation forms changed and transitioned from one type (tip vortex) to another (sheet cavitation) as the inducer boundary conditions (inlet pressures) were changed. Image processing techniques were developed which tracked the formation and collapse of cavitating fluid in a specified target area, both in the temporal and frequency domains, in order to characterize the cavitation instability frequency. The accuracy of the analysis techniques was found to be very dependent on target size for higher order cavitation, but much less so for the other phenomena. Tunnel-mounted piezoelectric, dynamic pressure transducers were present throughout these tests and were used as references in correlating the results obtained by image processing. Results showed good agreement between image processing and dynamic pressure spectral data. The test set-up, test program, and test results including H-Q and suction performance, dynamic environment and cavitation characterization, and image processing techniques and results will be discussed.
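
    The target-area analysis can be sketched as a mean-intensity time series followed by a spectral peak search (assumed form; the ROI layout and frame format are illustrative):

        import numpy as np

        def cavitation_frequency(frames, roi, fps):
            """frames: sequence of greyscale arrays; roi: (row0, row1, col0, col1)."""
            r0, r1, c0, c1 = roi
            trace = np.array([f[r0:r1, c0:c1].mean() for f in frames], dtype=float)
            trace -= trace.mean()                      # remove the DC component
            spectrum = np.abs(np.fft.rfft(trace))
            freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
            return freqs[spectrum[1:].argmax() + 1]    # dominant non-zero peak, Hz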

  19. Development Of A Dynamic Radiographic Capability Using High-Speed Video

    NASA Astrophysics Data System (ADS)

    Bryant, Lawrence E.

    1985-02-01

    High-speed video equipment can be used to optically image up to 2,000 full frames per second or 12,000 partial frames per second. X-ray image intensifiers have historically been used to image radiographic images at 30 frames per second. By combining these two types of equipment, it is possible to perform dynamic x-ray imaging of up to 2,000 full frames per second. The technique has been demonstrated using conventional, industrial x-ray sources such as 150 Kv and 300 Kv constant potential x-ray generators, 2.5 MeV Van de Graaffs, and linear accelerators. A crude form of this high-speed radiographic imaging has been shown to be possible with a cobalt 60 source. Use of a maximum aperture lens makes best use of the available light output from the image intensifier. The x-ray image intensifier input and output fluors decay rapidly enough to allow the high frame rate imaging. Data are presented on the maximum possible video frame rates versus x-ray penetration of various thicknesses of aluminum and steel. Photographs illustrate typical radiographic setups using the high speed imaging method. Video recordings show several demonstrations of this technique with the played-back x-ray images slowed down up to 100 times as compared to the actual event speed. Typical applications include boiling type action of liquids in metal containers, compressor operation with visualization of crankshaft, connecting rod and piston movement and thermal battery operation. An interesting aspect of this technique combines both the optical and x-ray capabilities to observe an object or event with both external and internal details with one camera in a visual mode and the other camera in an x-ray mode. This allows both kinds of video images to appear side by side in a synchronized presentation.

  20. Video-based eye tracking for neuropsychiatric assessment.

    PubMed

    Adhikari, Sam; Stark, David E

    2017-01-01

    This paper presents a video-based eye-tracking method, ideally deployed via a mobile device or laptop-based webcam, as a tool for measuring brain function. Eye movements and pupillary motility are tightly regulated by brain circuits, are subtly perturbed by many disease states, and are measurable using video-based methods. Quantitative measurement of eye movement by readily available webcams may enable early detection and diagnosis, as well as remote/serial monitoring, of neurological and neuropsychiatric disorders. We successfully extracted computational and semantic features for 14 testing sessions, comprising 42 individual video blocks and approximately 17,000 image frames generated across several days of testing. Here, we demonstrate the feasibility of collecting video-based eye-tracking data from a standard webcam in order to assess psychomotor function. Furthermore, we were able to demonstrate through systematic analysis of this data set that eye-tracking features (in particular, radial and tangential variance on a circular visual-tracking paradigm) predict performance on well-validated psychomotor tests. © 2017 New York Academy of Sciences.
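
    The circular-tracking features named above can be computed, in a hedged sketch, by decomposing gaze samples into radial and tangential components about the target circle; the linear detrending of the tangential component is our assumption:

        import numpy as np

        def radial_tangential_variance(gaze_xy, center, radius):
            """gaze_xy: (n, 2) gaze points recorded while tracking a circular target."""
            rel = np.asarray(gaze_xy, float) - np.asarray(center, float)
            r = np.linalg.norm(rel, axis=1)
            theta = np.unwrap(np.arctan2(rel[:, 1], rel[:, 0]))
            radial_err = r - radius                    # deviation from the circle
            arc = radius * theta                       # position along the circle
            trend = np.linspace(arc[0], arc[-1], len(arc))
            return radial_err.var(), (arc - trend).var()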

  1. Automated tracking of whiskers in videos of head fixed rodents.

    PubMed

    Clack, Nathan G; O'Connor, Daniel H; Huber, Daniel; Petreanu, Leopoldo; Hires, Andrew; Peron, Simon; Svoboda, Karel; Myers, Eugene W

    2012-01-01

    We have developed software for fully automated tracking of vibrissae (whiskers) in high-speed videos (>500 Hz) of head-fixed, behaving rodents trimmed to a single row of whiskers. Performance was assessed against a manually curated dataset consisting of 1.32 million video frames comprising 4.5 million whisker traces. The current implementation detects whiskers with a recall of 99.998% and identifies individual whiskers with 99.997% accuracy. The average processing rate for these images was 8 Mpx/s/cpu (2.6 GHz Intel Core2, 2 GB RAM). This translates to 35 processed frames per second for a 640 px×352 px video of 4 whiskers. The speed and accuracy achieved enables quantitative behavioral studies where the analysis of millions of video frames is required. We used the software to analyze the evolving whisking strategies as mice learned a whisker-based detection task over the course of 6 days (8148 trials, 25 million frames) and measure the forces at the sensory follicle that most underlie haptic perception.

  2. Automated Tracking of Whiskers in Videos of Head Fixed Rodents

    PubMed Central

    Clack, Nathan G.; O'Connor, Daniel H.; Huber, Daniel; Petreanu, Leopoldo; Hires, Andrew; Peron, Simon; Svoboda, Karel; Myers, Eugene W.

    2012-01-01

    We have developed software for fully automated tracking of vibrissae (whiskers) in high-speed videos (>500 Hz) of head-fixed, behaving rodents trimmed to a single row of whiskers. Performance was assessed against a manually curated dataset consisting of 1.32 million video frames comprising 4.5 million whisker traces. The current implementation detects whiskers with a recall of 99.998% and identifies individual whiskers with 99.997% accuracy. The average processing rate for these images was 8 Mpx/s/cpu (2.6 GHz Intel Core2, 2 GB RAM). This translates to 35 processed frames per second for a 640 px×352 px video of 4 whiskers. The speed and accuracy achieved enables quantitative behavioral studies where the analysis of millions of video frames is required. We used the software to analyze the evolving whisking strategies as mice learned a whisker-based detection task over the course of 6 days (8148 trials, 25 million frames) and measure the forces at the sensory follicle that most underlie haptic perception. PMID:22792058

  3. Survey of Knowledge Representation and Reasoning Systems

    DTIC Science & Technology

    2009-07-01

    processing large volumes of unstructured information such as natural language documents, email, audio, images and video [Ferrucci et al. 2006]. Using this...information we hope to obtain improved estimation and prediction, data-mining, social network analysis, and semantic search and visualisation. Knowledge

  4. Super-resolution image reconstruction from UAS surveillance video through affine invariant interest point-based motion estimation

    NASA Astrophysics Data System (ADS)

    He, Qiang; Schultz, Richard R.; Wang, Yi; Camargo, Aldo; Martel, Florent

    2008-01-01

    In traditional super-resolution methods, researchers generally assume that accurate subpixel image registration parameters are given a priori. In reality, accurate image registration on a subpixel grid is the single most critically important step for the accuracy of super-resolution image reconstruction. In this paper, we introduce affine invariant features to improve subpixel image registration, which considerably reduces the number of mismatched points and hence makes traditional image registration more efficient and more accurate for super-resolution video enhancement. Affine invariant interest points include those corners that are invariant to affine transformations, including scale, rotation, and translation. They are extracted from the second moment matrix through the integration and differentiation covariance matrices. Our tests are based on two sets of real video captured by a small Unmanned Aircraft System (UAS) aircraft, which is highly susceptible to vibration from even light winds. The experimental results from real UAS surveillance video show that affine invariant interest points are more robust to perspective distortion and present more accurate matching than traditional Harris/SIFT corners. In our experiments on real video, all matching affine invariant interest points are found correctly. In addition, for the same super-resolution problem, we can use many fewer affine invariant points than Harris/SIFT corners to obtain good super-resolution results.
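
    A rough stand-in for this registration step (not the authors' detector): Shi-Tomasi corners derive from the same second-moment matrix analysis, and a RANSAC affine fit between tracked points rejects mismatches while recovering scale, rotation, and subpixel translation:

        import cv2

        def register_pair(prev_gray, curr_gray, max_corners=400):
            pts0 = cv2.goodFeaturesToTrack(prev_gray, max_corners, 0.01, 8)
            pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts0, None)
            good0 = pts0[status.ravel() == 1]
            good1 = pts1[status.ravel() == 1]
            A, inliers = cv2.estimateAffine2D(good0, good1, method=cv2.RANSAC)
            return A                    # 2x3 affine registration for this frame pair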

  5. Diffusion in coastal and harbour zones, effects of Waves,Wind and Currents

    NASA Astrophysics Data System (ADS)

    Diez, M.; Redondo, J. M.

    2009-04-01

    As there are multiple processes at different scales that produce turbulent mixing in the ocean, giving a large variation of horizontal eddy diffusivities, we use a direct method to evaluate the influence of ambient parameters such as wave height and wind on coastal dispersion. Measurements of the diffusivity are made by digital processing of images taken from video recordings of the sea surface near the coast. The use of image analysis allows estimation of both spatial and temporal characteristics of wave fields, surface circulation and mixing in the surf zone, near wave breakers and inside harbours. The study of near-shore dispersion [1], with the added complexity of the interaction between wave fields, longshore currents, turbulence and beach morphology, needs detailed measurements of simple mixing processes to compare the respective influences of forcings at different scales. The measurements include simultaneous time series of waves, currents and wind velocities from the studied area. Quantitative information from the video images is obtained using the DigImage video processing system [3] and a frame grabber. The video may be controlled by the computer, allowing remote control of the processing. Spectral analysis of the images has also been used in order to estimate dominant wave periods as well as the dispersion relations of dominant instabilities. The measurements presented here consist mostly of the comparison of diffusion coefficients measured by evaluating the spread of blobs of dye (milk) as well as by measuring the separation between different buoys released at the same time. We have used techniques, developed by Bahia (1997), Diez (1998) and Bezerra (2000) [1-3], to study turbulent diffusion by means of digital processing of images taken from remote sensing and video recordings of the sea surface. The use of image analysis allows measurement of variations of several decades in horizontal diffusivity values; the comparison of diffusivities between different sites is not direct, and a good understanding of the dominant mixing processes is needed. There is an increase of diffusivity with wave height, but only for large wave Reynolds numbers. Other important factors are wind speed and tidal currents. The horizontal diffusivity shows a marked anisotropy as a function of wave height and distance from the coast. The measurements were performed under a variety of weather conditions; conditional sampling has been used to identify the different influences of the environmental agents on the actual effective horizontal diffusion [4]. [1] Bahia E. (1998) "Un estudio numerico experimental de la dispersion de contaminantes en aguas costeras", PhD Thesis, UPC, Barcelona. [2] Bezerra M.O. (2000) "Diffusion de contaminantes en la costa", PhD Thesis, Universidad de Barcelona, Barcelona. [3] Diez M. (1998) "Estudio de la hidrodinamica de la zona de rompientes mediante el analisis digital de imagenes", Master Thesis, UPC, Barcelona. [4] Artale V., Boffetta G., Celani A., Cencini M. and Vulpiani A. (1997) "Dispersion of passive tracers in closed basins: Beyond the diffusion coefficient", Physics of Fluids, vol. 9, pp. 3162.
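
    The blob-spread estimate of horizontal eddy diffusivity reduces, in a minimal sketch, to fitting the growth of the blob's positional variance over time, since for Fickian diffusion sigma^2(t) grows as 2Kt per horizontal axis (variable names are ours):

        import numpy as np

        def eddy_diffusivity(times_s, blob_var_m2):
            """Fit sigma^2(t) = 2*K*t + c; returns K in m^2/s for one axis."""
            slope, _ = np.polyfit(np.asarray(times_s, float),
                                  np.asarray(blob_var_m2, float), 1)
            return slope / 2.0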

  6. Optical cell tracking analysis using a straight-forward approach to minimize processing time for high frame rate data

    NASA Astrophysics Data System (ADS)

    Seeto, Wen Jun; Lipke, Elizabeth Ann

    2016-03-01

    Tracking of rolling cells via in vitro experiment is now commonly performed using customized computer programs. In most cases, two critical challenges continue to limit analysis of cell rolling data: long computation times due to the complexity of tracking algorithms and difficulty in accurately correlating a given cell with itself from one frame to the next, which is typically due to errors caused by cells that either come close in proximity to each other or come in contact with each other. In this paper, we have developed a sophisticated, yet simple and highly effective, rolling cell tracking system to address these two critical problems. This optical cell tracking analysis (OCTA) system first employs ImageJ for cell identification in each frame of a cell rolling video. A custom MATLAB code was written to use the geometric and positional information of all cells as the primary parameters for matching each individual cell with itself between consecutive frames and to avoid errors when tracking cells that come within close proximity to one another. Once the cells are matched, rolling velocity can be obtained for further analysis. The use of ImageJ for cell identification eliminates the need for high level MATLAB image processing knowledge. As a result, only fundamental MATLAB syntax is necessary for cell matching. OCTA has been implemented in the tracking of endothelial colony forming cell (ECFC) rolling under shear. The processing time needed to obtain tracked cell data from a 2 min ECFC rolling video recorded at 70 frames per second with a total of over 8000 frames is less than 6 min using a computer with an Intel® Core™ i7 CPU 2.80 GHz (8 CPUs). This cell tracking system benefits cell rolling analysis by substantially reducing the time required for post-acquisition data processing of high frame rate video recordings and preventing tracking errors when individual cells come in close proximity to one another.
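
    The geometry-based matching rule can be sketched as a greedy nearest-neighbour assignment with position and size gates; tolerances and field names below are assumed, and the published implementation may differ:

        import numpy as np

        def match_cells(cells_t, cells_t1, max_dist=15.0, max_area_diff=0.3):
            """Each cell: dict with 'xy' and 'area'. Returns (i, j) index pairs."""
            matches, used = [], set()
            for i, c in enumerate(cells_t):
                best_j, best_d = None, max_dist
                for j, n in enumerate(cells_t1):
                    if j in used:
                        continue
                    d = np.linalg.norm(np.asarray(c["xy"]) - np.asarray(n["xy"]))
                    if d < best_d and abs(c["area"] - n["area"]) <= max_area_diff * c["area"]:
                        best_j, best_d = j, d
                if best_j is not None:
                    matches.append((i, best_j))
                    used.add(best_j)          # one-to-one matching between frames
            return matches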

  7. Effect of video server topology on contingency capacity requirements

    NASA Astrophysics Data System (ADS)

    Kienzle, Martin G.; Dan, Asit; Sitaram, Dinkar; Tetzlaff, William H.

    1996-03-01

    Video servers need to assign a fixed set of resources to each video stream in order to guarantee on-time delivery of the video data. If a server has insufficient resources to guarantee the delivery, it must reject the stream request rather than slowing down all existing streams. Large scale video servers are being built as clusters of smaller components, so as to be economical, scalable, and highly available. This paper uses a blocking model developed for telephone systems to evaluate video server cluster topologies. The goal is to achieve high utilization of the components and low per-stream cost combined with low blocking probability and high user satisfaction. The analysis shows substantial economies of scale achieved by larger server images. Simple distributed server architectures can result in partitioning of resources with low achievable resource utilization. By comparing achievable resource utilization of partitioned and monolithic servers, we quantify the cost of partitioning. Next, we present an architecture for a distributed server system that avoids resource partitioning and results in highly efficient server clusters. Finally, we show how, in these server clusters, further optimizations can be achieved through caching and batching of video streams.
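
    The abstract does not name the blocking model, but the classic telephone-system result, the Erlang-B formula, is enough to reproduce the economies-of-scale argument: at the same offered load per stream slot, a small partition blocks far more often than one large server image. A sketch with illustrative numbers (not taken from the paper):

    ```python
    def erlang_b(servers: int, traffic: float) -> float:
        """Blocking probability for `servers` stream slots offered `traffic`
        Erlangs, via the numerically stable recurrence
        B(0) = 1,  B(n) = a*B(n-1) / (n + a*B(n-1))."""
        b = 1.0
        for n in range(1, servers + 1):
            b = traffic * b / (n + traffic * b)
        return b

    # Same 92.5% offered load per slot: one 400-slot server image versus one
    # 100-slot partition. The smaller group blocks noticeably more often.
    print(round(erlang_b(400, 370.0), 4), round(erlang_b(100, 92.5), 4))
    ```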

  8. WCE video segmentation using textons

    NASA Astrophysics Data System (ADS)

    Gallo, Giovanni; Granata, Eliana

    2010-03-01

    Wireless Capsule Endoscopy (WCE) integrates wireless transmission with image and video technology. It has been used to examine the small intestine non-invasively. Medical specialists look for significant events in a WCE video by direct visual inspection, manually labelling clinically relevant frames in tiring sessions of up to one hour. This limits WCE usage. The ability to automatically discriminate digestive organs such as the esophagus, stomach, small intestine and colon is therefore of great advantage. In this paper we propose to use textons for the automatic discrimination of abrupt changes within a video. In particular, we consider as features, for each frame, hue, saturation, value, high-frequency energy content and the responses to a bank of Gabor filters. The experiments have been conducted on ten video segments extracted from WCE videos, in which the significant events had previously been labelled by experts. Results have shown that the proposed method may eliminate up to 70% of the frames from further investigation. The direct analysis by doctors may hence be concentrated only on eventful frames. A graphical tool showing sudden changes in the texton frequencies for each frame is also proposed as a visual aid to find clinically relevant segments of the video.
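
    A minimal sketch of the per-frame features named in the abstract (mean hue, saturation and value, high-frequency energy, and Gabor-bank responses) and of flagging abrupt changes between consecutive frames; the filter-bank parameters and the z-score change test are assumptions, and in practice the features would be standardized first.

    ```python
    import numpy as np
    from matplotlib.colors import rgb_to_hsv
    from skimage.filters import gabor

    def frame_features(rgb, freqs=(0.1, 0.2),
                       thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
        """Feature vector for one RGB frame: mean hue/saturation/value,
        log high-frequency spectral energy, and mean Gabor-bank responses."""
        hsv = rgb_to_hsv(rgb.astype(float) / 255.0)
        gray = rgb.mean(axis=2)
        f = np.fft.fftshift(np.fft.fft2(gray))
        h, w = gray.shape
        yy, xx = np.ogrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
        hf = np.abs(f[(xx**2 + yy**2) > (min(h, w) // 8) ** 2]).sum()
        gabors = [np.abs(gabor(gray, frequency=fr, theta=th)[0]).mean()
                  for fr in freqs for th in thetas]
        return np.array([hsv[..., 0].mean(), hsv[..., 1].mean(),
                         hsv[..., 2].mean(), np.log(hf)] + gabors)

    def abrupt_changes(frames, z=3.0):
        """Indices of frames whose feature vector jumps from the previous
        one, a proxy for organ boundaries in the WCE video."""
        feats = np.array([frame_features(fr) for fr in frames])
        d = np.linalg.norm(np.diff(feats, axis=0), axis=1)
        return np.nonzero(d > d.mean() + z * d.std())[0] + 1
    ```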

  9. Impact of Substance Messages in Music Videos on Youth: Beware the Influence of Connectedness and Its Potential Prevention-Shielding Effect.

    PubMed

    Russell, Cristel Antonia; Régnier-Denois, Véronique; Chapoton, Boris; Buhrau, Denise

    2017-09-01

    Two studies were conducted to investigate the role of connectedness with music videos in affecting youths' beliefs about substances (alcohol and tobacco) embedded therein and the potential for a prevention message to limit the impact of these images. The first study used cross-sectional data from a national sample of 1,023 adolescents (54.3% male) to evaluate the relationship between youths' consumption of music videos and their beliefs about the consequences of consuming alcohol and tobacco. A controlled experiment with 151 participants (57% male) then tested whether exposure to smoking in a video affects youths' smoking beliefs and the preventive potential of a pre-video warning. Connectedness to music videos, not overall amount of viewing, is the main correlate of beliefs about the positive outcomes of consuming alcohol/tobacco. A single exposure to a music video with smoking images can increase beliefs that smoking leads to positive consequences, and connected viewers are especially receptive to these images. Alerting youths to the presence of substance messages in a video leads to differential results as a function of connectedness. Many youths spend hours every day watching music videos in which positive visuals about drinking and smoking abound. Rather than the quantity of viewing, it is the degree to which youths immerse themselves in these music videos that enhances their beliefs that smoking and drinking have positive consequences. Interventions that warn youths about the presence of substances in music videos can minimize their influence, but youths highly connected with the music video content are especially resistant to warnings.

  10. Video image processing

    NASA Technical Reports Server (NTRS)

    Murray, N. D.

    1985-01-01

    Current technology projections indicate a lack of availability of special-purpose computing for Space Station applications. Potential functions for video image special-purpose processing are being investigated, such as smoothing, enhancement, restoration and filtering, data compression, feature extraction, object detection and identification, pixel interpolation/extrapolation, spectral estimation and factorization, and vision synthesis. Also, architectural approaches are being identified and a conceptual design generated. Computationally simple algorithms will be researched and their image/vision effectiveness determined. Suitable algorithms will be implemented into an overall architectural approach that will provide image/vision processing at video rates that are flexible, selectable, and programmable. Information is given in the form of charts, diagrams and outlines.

  11. Neutrons Image Additive Manufactured Turbine Blade in 3-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2016-04-29

    The video displays the Inconel 718 turbine blade made by additive manufacturing. First, a grayscale neutron computed tomogram (CT) is displayed with transparency in order to show the internal structure. Then the neutron CT is overlapped with the engineering drawing that was used to print the part, and a comparison of external and internal structures is possible. This provides a map of the accuracy of the printed turbine (printing tolerance). Internal surface roughness can also be observed. Credits: experimental measurements, Hassina Z. Bilheux; video and printing tolerance analysis, Jean C. Bilheux.

  12. Short-term change detection for UAV video

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang

    2012-11-01

    In recent years, there has been increased use of unmanned aerial vehicles (UAVs) for video reconnaissance and surveillance. An important application in this context is change detection in UAV video data. Here we address short-term change detection, in which the time between observations ranges from several minutes to a few hours. We distinguish this task from video motion detection (shorter time scale) and from long-term change detection, based on time series of still images taken several days, weeks, or even years apart. Examples of relevant changes we are looking for are recently parked or moved vehicles. As a prerequisite, a precise image-to-image registration is needed. Images are selected on the basis of the geo-coordinates of the sensor's footprint and with respect to a certain minimal overlap. The automatic image-based fine registration adjusts the image pair to a common geometry by using a robust matching approach to handle outliers. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed lengths of shadows, and compression or transmission artifacts. To detect changes in image pairs we analyzed image differencing, local image correlation, and a transformation-based approach (multivariate alteration detection). As input we used color and gradient magnitude images. To cope with local misalignment of image structures we extended the approaches by a local neighborhood search. The algorithms are applied to several examples covering both urban and rural scenes. The local neighborhood search in combination with intensity and gradient magnitude differencing clearly improved the results. Extended image differencing performed better than both the correlation-based approach and multivariate alteration detection. The algorithms are adapted for use in semi-automatic workflows for the ABUL video exploitation system of Fraunhofer IOSB (see Heinze et al., 2010). In a further step we plan to incorporate more information from the video sequences into the change detection input images, e.g., by image enhancement or by along-track stereo, which are available in the ABUL system.
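
    The extended image differencing with local neighborhood search is described only in outline; a plausible minimal reading is to keep, per pixel, the smallest absolute difference over small relative shifts, so that residual misregistration does not fire the detector. A numpy sketch (the search radius and threshold are assumptions):

    ```python
    import numpy as np

    def extended_difference(before, after, radius=2):
        """Difference image tolerant to local misalignment: for each pixel
        of `after`, keep the minimum absolute difference to `before` over
        all integer shifts within +/-radius. Inputs: equal-shape 2D float
        arrays (intensity or gradient magnitude). np.roll wrap-around at
        the borders is ignored in this sketch."""
        best = np.full(before.shape, np.inf)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                shifted = np.roll(np.roll(before, dy, axis=0), dx, axis=1)
                best = np.minimum(best, np.abs(after - shifted))
        return best

    # change_mask = extended_difference(img_before, img_after) > 25.0
    ```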

  13. Design of UAV high resolution image transmission system

    NASA Astrophysics Data System (ADS)

    Gao, Qiang; Ji, Ming; Pang, Lan; Jiang, Wen-tao; Fan, Pengcheng; Zhang, Xingcheng

    2017-02-01

    In order to solve the problem of the bandwidth limitation of the image transmission system on UAVs, a scheme with image compression technology for mini UAVs is proposed, based on the requirements of a high-definition image transmission system for UAVs. The H.264 video coding standard and its key technologies were analyzed and studied for UAV area video communication. Based on research into high-resolution image encoding and decoding techniques and wireless transmission methods, a high-resolution image transmission system was designed on an architecture combining Android and a video codec chip. The constructed system was verified by laboratory experiments: the bit rate could be controlled easily, QoS was stable, and the latency was low enough to meet most application requirements, for military as well as industrial applications.

  14. Ultrasound Imaging System Video

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In this video, astronaut Peggy Whitson uses the Human Research Facility (HRF) Ultrasound Imaging System in the Destiny Laboratory of the International Space Station (ISS) to image her own heart. The Ultrasound Imaging System provides three-dimensional image enlargement of the heart and other organs, muscles, and blood vessels. It is capable of high-resolution imaging in a wide range of applications, both research and diagnostic, such as echocardiography (ultrasound of the heart), abdominal, vascular, gynecological, muscle, tendon, and transcranial ultrasound.

  15. KENNEDY SPACE CENTER, FLA. - Armando Oliu, Final Inspection Team lead for the Shuttle program, speaks to reporters about the aid the Image Analysis Lab is giving the FBI in a kidnapping case. Oliu oversees the image lab that is using an advanced SGI® TP9500 data management system to review the tape of the kidnapping in progress in Sarasota, Fla. KSC installed the new $3.2 million system in preparation for Return to Flight of the Space Shuttle fleet. The lab is studying the Sarasota kidnapping video to provide any new information possible to law enforcement officers. KSC is joining NASA’s Marshall Space Flight Center in Alabama in reviewing the tape.

    NASA Image and Video Library

    2004-02-04

    KENNEDY SPACE CENTER, FLA. - Armando Oliu, Final Inspection Team lead for the Shuttle program, speaks to reporters about the aid the Image Analysis Lab is giving the FBI in a kidnapping case. Oliu oversees the image lab that is using an advanced SGI® TP9500 data management system to review the tape of the kidnapping in progress in Sarasota, Fla. KSC installed the new $3.2 million system in preparation for Return to Flight of the Space Shuttle fleet. The lab is studying the Sarasota kidnapping video to provide any new information possible to law enforcement officers. KSC is joining NASA’s Marshall Space Flight Center in Alabama in reviewing the tape.

  16. Measurement of marine picoplankton cell size by using a cooled, charge-coupled device camera with image-analyzed fluorescence microscopy.

    PubMed Central

    Viles, C L; Sieracki, M E

    1992-01-01

    Accurate measurement of the biomass and size distribution of picoplankton cells (0.2 to 2.0 microns) is paramount in characterizing their contribution to the oceanic food web and global biogeochemical cycling. Image-analyzed fluorescence microscopy, usually based on video camera technology, allows detailed measurements of individual cells to be taken. The application of an imaging system employing a cooled, slow-scan charge-coupled device (CCD) camera to automated counting and sizing of individual picoplankton cells from natural marine samples is described. A slow-scan CCD-based camera was compared to a video camera and was superior for detecting and sizing very small, dim particles such as fluorochrome-stained bacteria. Several edge detection methods for accurately measuring picoplankton cells were evaluated. Standard fluorescent microspheres and a Sargasso Sea surface water picoplankton population were used in the evaluation. Global thresholding was inappropriate for these samples. Methods used previously in image analysis of nanoplankton cells (2 to 20 microns) also did not work well with the smaller picoplankton cells. A method combining an edge detector and an adaptive edge strength operator worked best for rapidly generating accurate cell sizes. A complete sample analysis of more than 1,000 cells averages about 50 min and yields size, shape, and fluorescence data for each cell. With this system, the entire size range of picoplankton can be counted and measured. PMID:1610183

  17. Biased lineup instructions and face identification from video images.

    PubMed

    Thompson, W Burt; Johnson, Jaime

    2008-01-01

    Previous eyewitness memory research has shown that biased lineup instructions reduce identification accuracy, primarily by increasing false-positive identifications in target-absent lineups. Because some attempts at identification do not rely on a witness's memory of the perpetrator but instead involve matching photos to images on surveillance video, the authors investigated the effects of biased instructions on identification accuracy in a matching task. In Experiment 1, biased instructions did not affect the overall accuracy of participants who used video images as an identification aid, but nearly all correct decisions occurred with target-present photo spreads. Both biased and unbiased instructions resulted in high false-positive rates. In Experiment 2, which focused on video-photo matching accuracy with target-absent photo spreads, unbiased instructions led to more correct responses (i.e., fewer false positives). These findings suggest that investigators should not relax precautions against biased instructions when people attempt to match photos to an unfamiliar person recorded on video.

  18. Analysis of brook trout spatial behavior during passage attempts in corrugated culverts using near-infrared illumination video imagery

    USGS Publications Warehouse

    Bergeron, Normand E.; Constantin, Pierre-Marc; Goerig, Elsa; Castro-Santos, Theodore R.

    2016-01-01

    We used video recording and near-infrared illumination to document the spatial behavior of brook trout of various sizes attempting to pass corrugated culverts under different hydraulic conditions. Semi-automated image analysis was used to digitize fish position at high temporal resolution inside the culvert, which allowed calculation of various spatial behavior metrics, including instantaneous ground and swimming speed, path complexity, distance from side walls, velocity preference ratio (mean velocity at the fish's lateral position/mean cross-sectional velocity), as well as the number and duration of stops in forward progression. The presentation summarizes the main results and discusses how they could be used to improve fish passage performance in culverts.

  19. Image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-03-01

    Vision is a part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain has been found to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. Spatial logic and topology are naturally present in such structures. Mid-level vision processes, like perceptual grouping and separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models combines learning, classification, and analogy together with higher-level model-based reasoning into a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on such principles, an image/video understanding system can convert images into knowledge models and resolve uncertainty and ambiguity. This allows the creation of intelligent computer vision systems for design and manufacturing.

  20. Video scrambling for privacy protection in video surveillance: recent results and validation framework

    NASA Astrophysics Data System (ADS)

    Dufaux, Frederic

    2011-06-01

    The issue of privacy in video surveillance has drawn a lot of interest lately. However, thorough performance analysis and validation is still lacking, especially regarding the fulfillment of privacy-related requirements. In this paper, we first review recent Privacy Enabling Technologies (PET). Next, we discuss pertinent evaluation criteria for effective privacy protection. We then put forward a framework to assess the capacity of PET solutions to hide distinguishing facial information and to conceal identity. We conduct comprehensive and rigorous experiments to evaluate the performance of face recognition algorithms applied to images altered by PET. Results show the ineffectiveness of naïve PET such as pixelization and blur. Conversely, they demonstrate the effectiveness of more sophisticated scrambling techniques to foil face recognition.
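
    As a concrete illustration of why naïve pixelization fails as a PET: it is just block-wise averaging of the face region, so the low spatial frequencies that face recognizers exploit survive. A sketch (region coordinates and block size are hypothetical):

    ```python
    import numpy as np

    def pixelate(image, box, block=8):
        """Naive privacy filter: replace region box = (y0, y1, x0, x1) of
        an RGB image with block-wise mean colours. Only high spatial
        frequencies are destroyed, which is why face recognition often
        still succeeds on the result."""
        y0, y1, x0, x1 = box
        roi = image[y0:y1, x0:x1].astype(float)
        h, w, c = roi.shape
        hb, wb = h - h % block, w - w % block  # crop to whole blocks
        blocks = roi[:hb, :wb].reshape(hb // block, block, wb // block, block, c)
        coarse = blocks.mean(axis=(1, 3)).repeat(block, 0).repeat(block, 1)
        out = image.copy()
        out[y0:y0 + hb, x0:x0 + wb] = coarse.astype(image.dtype)
        return out
    ```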

  1. Formulating an image matching strategy for terrestrial stem data collection using a multisensor video system

    Treesearch

    Neil A. Clark

    2001-01-01

    A multisensor video system has been developed incorporating a CCD video camera, a 3-axis magnetometer, and a laser-rangefinding device, for the purpose of measuring individual tree stems. While preliminary results show promise, some changes are needed to improve the accuracy and efficiency of the system. Image matching is needed to improve the accuracy of length...

  2. Subxiphoid uniportal video-assisted thoracoscopic surgery for synchronous bilateral lung resection.

    PubMed

    Yang, Xueying; Wang, Linlin

    2018-01-01

    With advancements in medical imaging and the current emphasis on regular physical examinations, multiple pulmonary lesions are increasingly being detected, including bilateral pulmonary lesions. Video-assisted thoracic surgery is an important method for treating such lesions. Most video-assisted thoracic surgeries for bilateral pulmonary lesions have been performed as two separate operations. Herein, we report a novel technique of synchronous subxiphoid uniportal video-assisted thoracic surgery for bilateral pulmonary lesions. Synchronous bilateral lung resection procedures were performed through a single incision (~4 cm, subxiphoid). This technique was used successfully in 11 patients with bilateral pulmonary lesions. There were no intraoperative deaths and no mortality recorded at 30 days. Our results show that the subxiphoid uniportal thoracoscopic procedure is a safe and feasible surgical procedure for synchronous bilateral lung resection, with less surgical trauma, less postoperative pain and better cosmetic results in qualifying patients. Further analysis involving a larger number of subjects is ongoing.

  3. Extraction of Blebs in Human Embryonic Stem Cell Videos.

    PubMed

    Guan, Benjamin X; Bhanu, Bir; Talbot, Prue; Weng, Nikki Jo-Hao

    2016-01-01

    Blebbing is an important biological indicator in determining the health of human embryonic stem cells (hESC). In particular, the areas of a bleb sequence in a video are often used to distinguish two cell blebbing behaviors in hESC: dynamic and apoptotic blebbing. This paper analyzes various segmentation methods for bleb extraction in hESC videos and introduces a bio-inspired score function to improve the performance of bleb extraction. Full bleb formation consists of bleb expansion and retraction. Blebs change their size and image properties dynamically in both processes and between frames; therefore, adaptive parameters are needed for each segmentation method. A score function derived from the change of bleb area and orientation between consecutive frames is proposed, which provides adaptive parameters for bleb extraction in videos. In comparison to manual analysis, the proposed method provides an automated, fast and accurate approach for bleb sequence extraction.
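
    The paper's exact score function is not reproduced here; a plausible form consistent with the description, penalizing relative area change and orientation change between consecutive frames, might look like this (the linear combination and weights are assumptions):

    ```python
    import numpy as np

    def bleb_score(area_prev, area_curr, theta_prev, theta_curr,
                   w_area=1.0, w_theta=1.0):
        """Score how plausibly a candidate region continues a bleb from the
        previous frame, from relative area change and orientation change
        (radians, defined modulo pi as for an ellipse fit). Lower is
        better; this form is an assumption, not the paper's formula."""
        d_area = abs(area_curr - area_prev) / max(area_prev, 1e-9)
        d_theta = abs((theta_curr - theta_prev + np.pi / 2) % np.pi - np.pi / 2)
        return w_area * d_area + w_theta * d_theta
    ```

    Evaluating such a score over candidate segmentations of the current frame and keeping the minimizer is one way a function of this kind can supply the adaptive parameters mentioned in the abstract.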

  4. Video change detection for fixed wing UAVs

    NASA Astrophysics Data System (ADS)

    Bartelsen, Jan; Müller, Thomas; Ring, Jochen; Mück, Klaus; Brüstle, Stefan; Erdnüß, Bastian; Lutz, Bastian; Herbst, Theresa

    2017-10-01

    In this paper we continue the work of Bartelsen et al. [1]. We present a draft of a process chain for image-based change detection designed for videos acquired by fixed-wing unmanned aerial vehicles (UAVs). From our point of view, automatic video change detection for aerial images can be useful to recognize functional activities which are typically caused by the deployment of improvised explosive devices (IEDs), e.g. excavations, skid marks, footprints, left-behind tooling equipment, and marker stones. Furthermore, in case of natural disasters, like flooding, imminent danger can be recognized quickly. Due to the necessary flight range, we concentrate on fixed-wing UAVs. Automatic change detection can be reduced to a comparatively simple photogrammetric problem when the perspective change between the "before" and "after" image sets is kept as small as possible. Therefore, the aerial image acquisition demands mission planning with a clear purpose, including flight path and sensor configuration. While the latter can be achieved simply by a fixed and meaningful adjustment of the camera, ensuring a small perspective change for "before" and "after" videos acquired by fixed-wing UAVs is a challenging problem. Concerning this matter, we have performed tests with an advanced commercial off-the-shelf (COTS) system which comprises a differential GPS and autopilot system, estimating the repetition accuracy of its trajectory. Although several similar approaches have been presented [2, 3], as far as we are able to judge, the limits for this important issue have not been estimated so far. Furthermore, we design a process chain to enable the practical utilization of video change detection. It consists of a database front-end to handle large amounts of video data, an image processing and change detection implementation, and the visualization of the results. We apply our process chain to real video data acquired by the advanced COTS fixed-wing UAV and to synthetic data. For the image processing and change detection, we use the approach of Muller [4]. Although it was developed for unmanned ground vehicles (UGVs), it enables near-real-time video change detection for aerial videos. Concluding, we discuss the demands on sensor systems in the matter of change detection.

  5. Mosaicking Techniques for Deep Submergence Vehicle Video Imagery - Applications to Ridge2000 Science

    NASA Astrophysics Data System (ADS)

    Mayer, L.; Rzhanov, Y.; Fornari, D. J.; Soule, A.; Shank, T. M.; Beaulieu, S. E.; Schouten, H.; Tivey, M.

    2004-12-01

    Severe attenuation of visible light and the limited power capabilities of many submersible vehicles require the acquisition of imagery from short ranges, rarely exceeding 8-10 meters. Although modern video and photo equipment makes high-resolution video surveying possible, the field of view of each image remains relatively narrow. To compensate for the deficiencies in light and field of view, researchers have been developing techniques for combining images into larger composite images, i.e., mosaicking. A properly constructed, accurate mosaic has a number of well-known advantages in comparison with the original sequence of images, the most notable being improved situational awareness. We have developed software strategies for PC-based computers that permit conversion of video imagery acquired from any underwater vehicle, operated within either absolute (e.g. LBL or USBL) or relative (e.g. Doppler Velocity Log, DVL) navigation networks, to quickly produce a set of geo-referenced photomosaics which can then be directly incorporated into a Geographic Information System (GIS) database. The timescale of processing is rapid enough to permit analysis of the resulting mosaics between submersible dives, thus enhancing the efficiency of deep-sea research. Commercial image processing packages usually handle cases where there is little or no parallax, an unlikely situation for the undersea world, where terrain has pronounced 3D content and imagery is acquired from moving platforms. The approach we have taken is optimized for situations in which there is significant relief and thus parallax in the imagery (e.g. seafloor fault scarps or constructional volcanic escarpments and flow fronts). The basis of all mosaicking techniques is a pair-wise image registration method that finds a transformation relating the pixels of two consecutive image frames. We utilize a "rigid affine model" with four degrees of freedom for image registration that allows for camera translation in all directions and camera rotation about its optical axis. The coefficients of the transformation can be determined robustly using the well-established and powerful "featureless Fourier domain-based technique" (FFDT), which is an extension of the FFT-based correlation approach. While the calculation of cross-correlation allows the recovery of only two parameters of the transformation (translation in 2D), FFDT uses the "phase shift" theorem of the Fourier transform as well as a log-polar transform of the Fourier magnitude spectrum to recover all four transformation coefficients required for the rigid affine model. Examples of results of our video mosaicking data processing for the East Pacific Rise ISS will be presented.
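
    The translation part of the FFDT registration reduces to standard phase correlation via the Fourier shift theorem; a numpy sketch of that step (recovery of rotation from the log-polar transform of the magnitude spectrum is omitted):

    ```python
    import numpy as np

    def phase_correlate(a, b):
        """Integer-pixel translation of `b` relative to `a` by phase
        correlation: the normalized cross-power spectrum of two shifted
        images is a pure phase ramp whose inverse FFT peaks at the shift."""
        fa, fb = np.fft.fft2(a), np.fft.fft2(b)
        cross = np.conj(fa) * fb
        cross /= np.abs(cross) + 1e-12  # keep phase only
        corr = np.abs(np.fft.ifft2(cross))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        if dy > a.shape[0] // 2:        # map wrapped peaks to negative shifts
            dy -= a.shape[0]
        if dx > a.shape[1] // 2:
            dx -= a.shape[1]
        return dy, dx

    # b = np.roll(a, (7, -3), axis=(0, 1))  ->  phase_correlate(a, b) == (7, -3)
    ```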

  6. ciliaFA: a research tool for automated, high-throughput measurement of ciliary beat frequency using freely available software

    PubMed Central

    2012-01-01

    Background Analysis of ciliary function for assessment of patients suspected of primary ciliary dyskinesia (PCD), and for research studies of respiratory and ependymal cilia, requires assessment of both ciliary beat pattern and beat frequency. While direct measurement of beat frequency from high-speed video recordings is the most accurate and reproducible technique, it is extremely time consuming. The aim of this study was to develop a freely available automated method of ciliary beat frequency analysis from digital video (AVI) files that runs on open-source software (ImageJ) coupled to Microsoft Excel, and to validate this by comparison to direct measurement from high-speed video recordings of respiratory and ependymal cilia. These models allowed comparison for cilia beating between 3 and 52 Hz. Methods Digital video files of motile ciliated ependymal (frequency range 34 to 52 Hz) and respiratory epithelial cells (frequency 3 to 18 Hz) were captured using a high-speed digital video recorder. To cover the range between 18 and 37 Hz, the beat frequency of ependymal cilia was slowed by the addition of the pneumococcal toxin pneumolysin. Measurements made directly by timing a given number of individual ciliary beat cycles were compared with those obtained using the automated ciliaFA system. Results The overall mean difference (± SD) between the ciliaFA and direct measurement high-speed digital imaging methods was −0.05 ± 1.25 Hz; the correlation coefficient was 0.991 and the Bland-Altman limits of agreement were from −1.99 to 1.49 Hz for respiratory and from −2.55 to 3.25 Hz for ependymal cilia. Conclusions A plugin for ImageJ was developed that extracts pixel intensities and performs fast Fourier transformation (FFT) using Microsoft Excel. The ciliaFA software allowed automated, high-throughput measurement of respiratory and ependymal ciliary beat frequency (range 3 to 52 Hz) and avoids operator error due to selection bias. We have included free access to the ciliaFA plugin and installation instructions in Additional file 1 accompanying this manuscript, which other researchers may use. PMID:23351276
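
    The core ciliaFA measurement, pixel-intensity time series in, dominant FFT frequency out, is easy to reproduce outside ImageJ and Excel; a numpy sketch with a synthetic check (band limits follow the 3-52 Hz range reported in the paper):

    ```python
    import numpy as np

    def beat_frequency(intensity, fps, fmin=3.0, fmax=52.0):
        """Dominant frequency (Hz) of one pixel's intensity time series,
        restricted to the 3-52 Hz band covered by the paper's validation."""
        x = np.asarray(intensity, dtype=float)
        power = np.abs(np.fft.rfft(x - x.mean())) ** 2  # drop the DC term
        freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
        band = (freqs >= fmin) & (freqs <= fmax)
        return freqs[band][np.argmax(power[band])]

    # Synthetic check: a 14 Hz beat sampled at 500 fps for 2 s.
    t = np.arange(0, 2, 1 / 500)
    sig = np.sin(2 * np.pi * 14 * t) + 0.3 * np.random.randn(t.size)
    print(beat_frequency(sig, fps=500))  # ~14.0
    ```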

  7. Computer Image Analysis of Histochemically-Labeled Acetylcholinesterase.

    DTIC Science & Technology

    1984-11-30

    Image analysis was used in conjunction with histochemical techniques to describe the distribution of acetylcholinesterase (AChE) activity in nervous and muscular tissue in rats treated with organophosphates (OPs). We began by adopting a version of the AChE staining method as modified by Hanker that is consistent with the optical properties of our video system. We wrote computer programs to provide a numeric quantity which represents the degree of staining in a tissue section. The staining was calibrated by

  8. Analysis of ERTS imagery using special electronic viewing/measuring equipment

    NASA Technical Reports Server (NTRS)

    Evans, W. E.; Serebreny, S. M.

    1973-01-01

    An electronic satellite image analysis console (ESIAC) is being employed to process imagery for use by USGS investigators in several different disciplines studying dynamic hydrologic conditions. The ESIAC provides facilities for storing registered image sequences in a magnetic video disc memory for subsequent recall, enhancement, and animated display in monochrome or color. Quantitative measurements of distances, areas, and brightness profiles can be extracted digitally under operator supervision. Initial results are presented for the display and measurement of snowfield extent, glacier development, sediment plumes from estuary discharge, playa inventory, phreatophyte and other vegetative changes.

  9. Evaluation of Suppression of Hydroprocessed Renewable Jet (HRJ) Fuel Fires with Aqueous Film Forming Foam (AFFF)

    DTIC Science & Technology

    2011-07-01

    cameras were installed around the test pan and an underwater GoPro® video camera recorded the fire from below the layer of fuel. 3.2.2. Camera Images... 3.2.3. Video Images A GoPro® video camera with a wide-angle lens recorded the tests...camera and the GoPro® video camera were not used for fire suppression experiments. 3.3.2. Test Pans Two ¼-in thick stainless steel test pans were

  10. [Mobile hospital -real time mobile telehealthcare system with ultrasound and CT van using high-speed satellite communication-].

    PubMed

    Takizawa, Masaomi; Miyashita, Toyohisa; Murase, Sumio; Kanda, Hirohito; Karaki, Yoshiaki; Yagi, Kazuo; Ohue, Toru

    2003-01-01

    A real-time telescreening system was developed to detect early disease in rural-area residents using two types of mobile vans with a portable satellite station. The system consists of a satellite communication system with 1.5 Mbps via the JCSAT-1B satellite, a spiral CT van, an ultrasound imaging van with two video conference systems, a DICOM server, and a multicast communication unit. The video images and examination image data are transmitted from the van to hospitals and the university simultaneously. A physician in the hospital observes and interprets examination images from the van and watches video of the position of the ultrasound transducer on the screenee in the van. After reviewing the images, the physician explains the results of the examination over the video conference system. Seventy lung CT screenings and 203 ultrasound screenings were performed from March to June 2002. This trial of real-time screening suggested that rural residents can receive better healthcare without visiting the hospital, and that such systems may help reduce medical costs and the medical divide between urban and rural areas.

  11. Vehicle counting system using real-time video processing

    NASA Astrophysics Data System (ADS)

    Crisóstomo-Romero, Pedro M.

    2006-02-01

    Transit studies are important for planning a road network with optimal vehicular flow. A vehicular count is essential. This article presents a vehicle counting system based on video processing. An advantage of such a system is the greater detail that can be obtained, such as the shape, size and speed of vehicles. The system uses a video camera placed above the street to image transit in real time. The video camera must be placed at least 6 meters above street level to achieve proper acquisition quality. Fast image processing algorithms and small image dimensions are used to allow real-time processing. Digital filters, mathematical morphology, segmentation and other techniques allow identifying and counting all vehicles in the image sequences. The system was implemented under Linux on a 1.8 GHz Pentium 4 computer. A successful count was obtained at frame rates of 15 frames per second for images of size 240x180 pixels and 24 frames per second for images of size 180x120 pixels, thus being able to count vehicles whose speeds do not exceed 150 km/h.
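
    The pipeline described (background subtraction, morphological filtering, segmentation, counting) can be sketched in a few lines; the thresholds are assumptions to be tuned per site, and a full system would additionally track blobs across frames, e.g. counting crossings of a virtual line, so a vehicle is not counted twice.

    ```python
    import numpy as np
    from scipy import ndimage

    def count_vehicles(frame, background, thresh=30, min_area=400):
        """Count vehicle-sized blobs in one grayscale frame: background
        subtraction, thresholding, morphological cleanup, and connected-
        component labelling."""
        fg = np.abs(frame.astype(int) - background.astype(int)) > thresh
        fg = ndimage.binary_opening(fg, iterations=2)  # remove speckle
        fg = ndimage.binary_closing(fg, iterations=2)  # fill vehicle bodies
        labels, n = ndimage.label(fg)
        if n == 0:
            return 0
        areas = ndimage.sum(fg, labels, index=np.arange(1, n + 1))
        return int((areas >= min_area).sum())
    ```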

  12. Images created in a model eye during simulated cataract surgery can be the basis for images perceived by patients during cataract surgery

    PubMed Central

    Inoue, M; Uchida, A; Shinoda, K; Taira, Y; Noda, T; Ohnuma, K; Bissen-Miyajima, H; Hirakata, A

    2014-01-01

    Purpose To evaluate the images created in a model eye during simulated cataract surgery. Patients and methods This study was conducted as a laboratory investigation and interventional case series. An artificial opaque lens, a clear intraocular lens (IOL), or an irrigation/aspiration (I/A) tip was inserted into the ‘anterior chamber' of a model eye with the frosted posterior surface corresponding to the retina. Video images were recorded of the posterior surface of the model eye from the rear during simulated cataract surgery. The video clips were shown to 20 patients before cataract surgery, and the similarity of their visual perceptions to these images was evaluated postoperatively. Results The images of the moving lens fragments and I/A tip and the insertion of the IOL were seen from the rear. The image through the opaque lens and the IOL without moving objects was the light of the surgical microscope from the rear. However, when the microscope light was turned off after IOL insertion, the images of the microscope and operating room were observed by the room illumination from the rear. Seventy percent of the patients answered that the visual perceptions of moving lens fragments were similar to the video clips and 55% reported similarity with the IOL insertion. Eighty percent of the patients recommended that patients watch the video clip before their scheduled cataract surgery. Conclusions The patients' visual perceptions during cataract surgery can be reproduced in the model eye. Watching the video images preoperatively may help relax the patients during surgery. PMID:24788007

  13. Improvements to video imaging detection for dilemma zone protection.

    DOT National Transportation Integrated Search

    2009-02-01

    The use of video imaging vehicle detection systems (VIVDS) at signalized intersections in Texas has increased significantly, due primarily to safety issues and costs. Installing non-intrusive detectors at intersections is almost always safer than ...

  14. Internet Telepresence by Real-Time View-Dependent Image Generation with Omnidirectional Video Camera

    NASA Astrophysics Data System (ADS)

    Morita, Shinji; Yamazawa, Kazumasa; Yokoya, Naokazu

    2003-01-01

    This paper describes a new networked telepresence system which realizes virtual tours into a visualized dynamic real world without significant time delay. Our system is realized by the following three steps: (1) video-rate omnidirectional image acquisition, (2) transport of an omnidirectional video stream via the internet, and (3) real-time view-dependent perspective image generation from the omnidirectional video stream. Our system is applicable to real-time telepresence in situations where the real world to be seen is far from the observation site, because the time delay from the change of the user's viewing direction to the change of the displayed image is small and does not depend on the actual distance between the sites. Moreover, multiple users can look around from a single viewpoint in a visualized dynamic real world in different directions at the same time. In experiments, we have proved that the proposed system is useful for internet telepresence.

  15. Transfer Error and Correction Approach in Mobile Network

    NASA Astrophysics Data System (ADS)

    Xiao-kai, Wu; Yong-jin, Shi; Da-jin, Chen; Bing-he, Ma; Qi-li, Zhou

    With the development of information technology and social progress, human demand for information has become increasingly diverse: wherever and whenever, people want to be able to communicate easily, quickly and flexibly via voice, data, images and video. Visual information gives people a direct and vivid impression, so image/video transmission has received widespread attention. Although third-generation mobile communication systems and IP networks have emerged and developed rapidly, making video communication a main business of wireless communications, actual wireless and IP channels introduce errors, such as those generated by multipath fading in wireless channels and by packet loss in IP networks. Due to channel bandwidth limitations, video data are highly compressed before transmission, and the compressed data are very sensitive to errors; channel errors therefore cause a serious decline in image quality.

  16. Track and track-side video survey technology development.

    DOT National Transportation Integrated Search

    2015-05-01

    Researchers at HiDef/Createc have completed prototype development and testing of a novel track video surveying technology called Track and Track-Side Video Survey (TTVS). TTVS is designed to capture clear video images of the track and track side ...

  17. A design of real time image capturing and processing system using Texas Instrument's processor

    NASA Astrophysics Data System (ADS)

    Wee, Toon-Joo; Chaisorn, Lekha; Rahardja, Susanto; Gan, Woon-Seng

    2007-09-01

    In this work, we developed and implemented an image capturing and processing system equipped with the capability of capturing images from an input video in real time. The input video can come from a PC, a video camcorder or a DVD player. We developed two modes of operation in the system. In the first mode, an input image from the PC is processed on the processing board (a development platform with a digital signal processor) and is displayed on the PC. In the second mode, the current captured image from the video camcorder (or from the DVD player) is processed on the board but is displayed on the LCD monitor. The major difference between our system and existing conventional systems is that image-processing functions are performed on the board instead of the PC (so that the functions can be used for further development on the board). The user can control the operations of the board through the Graphical User Interface (GUI) provided on the PC. In order to have a smooth image data transfer between the PC and the board, we employed Real-Time Data Exchange (RTDX™) technology to create a link between them. For image processing functions, we developed three main groups of functions: (1) point processing; (2) filtering; and (3) 'others'. Point processing includes rotation, negation and mirroring. The filtering category provides median, adaptive, smoothing and sharpening filters in the time domain. In the 'others' category, auto-contrast adjustment, edge detection, segmentation and sepia color are provided; these functions either add an effect to the image or enhance it. We developed and implemented our system using the C/C# programming languages on the TMS320DM642 (or DM642) board from Texas Instruments (TI). The system was showcased at the College of Engineering (CoE) exhibition 2006 at Nanyang Technological University (NTU), where more than 40 users tried it. It was demonstrated that our system is adequate for real-time image capturing. Our system can be used or applied for applications such as medical imaging, video surveillance, etc.

  18. Video transmission on ATM networks. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung

    1993-01-01

    The broadband integrated services digital network (B-ISDN) is expected to provide high-speed and flexible multimedia applications. Multimedia includes data, graphics, images, voice, and video. Asynchronous transfer mode (ATM) is the adopted transport technique for B-ISDN and has the potential to provide a more efficient and integrated environment for multimedia. It is believed that most broadband applications will make heavy use of visual information. The prospect of widespread use of image and video communication has led to interest in coding algorithms for reducing bandwidth requirements and improving image quality. The major results of a study on bridging network transmission performance and video coding are: Using two representative video sequences, several video source models are developed. The fitness of these models is validated through the use of statistical tests and network queuing performance. A dual leaky bucket algorithm is proposed as an effective network policing function. The concept of the dual leaky bucket algorithm can be applied to a prioritized coding approach to achieve transmission efficiency. A mapping of the performance/control parameters at the network level into equivalent parameters at the video coding level is developed. Based on that, a complete set of principles for the design of video codecs for network transmission is proposed.
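
    A minimal sketch of the dual leaky bucket policing idea: one shallow bucket enforcing the peak rate and one deep bucket enforcing the sustained rate with burst tolerance. The specific rates and depths are illustrative, not those of the thesis.

    ```python
    class LeakyBucket:
        """A cell conforms if the bucket, draining at `rate` cells/s,
        does not exceed `depth` after the cell is added."""
        def __init__(self, rate, depth):
            self.rate, self.depth = rate, depth
            self.level, self.last_t = 0.0, 0.0

        def conforms(self, t):
            self.level = max(0.0, self.level - (t - self.last_t) * self.rate)
            self.last_t = t
            if self.level + 1.0 <= self.depth:
                self.level += 1.0
                return True
            return False  # non-conforming: tag or drop the cell

    def police(cell_times, peak_rate, sustained_rate, burst_depth):
        """Dual leaky bucket: a cell must conform both to a shallow bucket
        enforcing the peak rate and to a deep bucket enforcing the
        sustained rate with burst tolerance `burst_depth`."""
        peak = LeakyBucket(peak_rate, depth=2.0)
        sustained = LeakyBucket(sustained_rate, depth=burst_depth)
        return [peak.conforms(t) and sustained.conforms(t) for t in cell_times]
    ```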

  19. CALIPSO: an interactive image analysis software package for desktop PACS workstations

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Huang, H. K.

    1990-07-01

    The purpose of this project is to develop a low-cost workstation for quantitative analysis of multimodality images using a Macintosh II personal computer. In the current configuration the Macintosh operates as a stand-alone workstation where images are imported either from a central PACS server through a standard Ethernet network or recorded through a video digitizer board. The CALIPSO software developed contains a large variety of basic image display and manipulation tools. We focused our effort, however, on the design and implementation of quantitative analysis methods that can be applied to images from different imaging modalities. Analysis modules currently implemented include: geometric and densitometric volumes and ejection fraction calculation from radionuclide and cine-angiograms; Fourier analysis of cardiac wall motion; vascular stenosis measurement; color-coded parametric display of regional flow distribution from dynamic coronary angiograms; and automatic analysis of myocardial distribution of radiolabelled tracers from tomoscintigraphic images. Several of these analysis tools were selected because they use similar color-coded and parametric display methods to communicate quantitative data extracted from the images. 1. Rationale and objectives of the project: Developments of Picture Archiving and Communication Systems (PACS) in clinical environments allow physicians and radiologists to assess radiographic images directly through imaging workstations. This convenient access to the images is often limited by the number of workstations available, due in part to their high cost. There is also an increasing need for quantitative analysis of the images. During the past decade

  20. Assessing the Content of YouTube Videos in Educating Patients Regarding Common Imaging Examinations.

    PubMed

    Rosenkrantz, Andrew B; Won, Eugene; Doshi, Ankur M

    2016-12-01

    To assess the content of currently available YouTube videos seeking to educate patients regarding commonly performed imaging examinations. After initial testing of possible search terms, the first two pages of YouTube search results for "CT scan," "MRI," "ultrasound patient," "PET scan," and "mammogram" were reviewed to identify educational patient videos created by health organizations. Sixty-three included videos were viewed and assessed for a range of features. Average views per video were highest for MRI (293,362) and mammography (151,664). Twenty-seven percent of videos used a nontraditional format (eg, animation, song, humor). All videos (100.0%) depicted a patient undergoing the examination, 84.1% a technologist, and 20.6% a radiologist; 69.8% mentioned examination lengths, 65.1% potential pain/discomfort, 41.3% potential radiation, 36.5% a radiology report/results, 27.0% the radiologist's role in interpretation, and 13.3% laboratory work. For CT, 68.8% mentioned intravenous contrast and 37.5% mentioned contrast safety. For MRI, 93.8% mentioned claustrophobia, 87.5% noise, 75.0% need to sit still, 68.8% metal safety, 50.0% intravenous contrast, and 0.0% contrast safety. For ultrasound, 85.7% mentioned use of gel. For PET, 92.3% mentioned radiotracer injection, 61.5% fasting, and 46.2% diabetic precautions. For mammography, unrobing, avoiding deodorant, and possible additional images were all mentioned by 63.6%; dense breasts were mentioned by 0.0%. Educational patient videos on YouTube regarding common imaging examinations received high public interest and may provide a valuable patient resource. Videos most consistently provided information detailing the examination experience and less consistently provided safety information or described the presence and role of the radiologist.

  1. High-speed video analysis of forward and backward spattered blood droplets.

    PubMed

    Comiskey, P M; Yarin, A L; Attinger, D

    2017-07-01

    High-speed videos of blood spatter due to a gunshot, taken by the Ames Laboratory Midwest Forensics Resource Center (MFRC) [1], are analyzed. The videos used in this analysis were focused on a variety of targets hit by a bullet, which caused forward blood spatter, backward blood spatter, or both. The analysis process utilized particle image velocimetry (PIV) and particle analysis software to measure drop velocities as well as the distributions of the number of droplets and their respective side-view areas. The results of this analysis revealed that the maximal velocity can be about 47±5 m/s in forward spatter and about 24±8 m/s in backward spatter. Moreover, our measurements indicate that the number of droplets produced is larger in forward spatter than in backward spatter. In forward and backward spatter the droplet area in the side-view images is approximately the same. The upper angles of the close-to-cone domain in which droplets are issued in forward and backward spatter are 27±9° and 57±7°, respectively, whereas the lower angles are 28±12° and 30±18°, respectively. The inclination angle of the bullet as it penetrates the target is seen to play a large role in the directional preference of the spattered blood. Muzzle gases, the bullet impact angle, and the aerodynamic wake of the bullet are also seen to greatly influence the flight of the droplets. The intent of this investigation is to provide a quantitative basis for current and future research on bloodstain pattern analysis (BPA) of either forward or backward blood spatter due to a gunshot.

  2. An Adaptive Inpainting Algorithm Based on DCT Induced Wavelet Regularization

    DTIC Science & Technology

    2013-01-01

    research in image processing. Applications of image inpainting include old film restoration, video inpainting [4], de-interlacing of video sequences... Fig. 1. Performance of various inpainting algorithms for a cartoon image with text: (a) the original test image; (b) the test image with text; inpainted images by (c) SF (PSNR=37.38 dB); (d) SF-LDCT (PSNR=37.37 dB); (e) MCA (PSNR=37.04 dB); and (f) the proposed

  3. Interactive image analysis system to determine the motility and velocity of cyanobacterial filaments.

    PubMed

    Häder, D P; Vogel, K

    1991-01-01

    An interactive image analysis system has been developed to analyse and quantify the percentage of motile filaments and the individual linear velocities of organisms. The technique is based on the "difference" image between two digitized images, taken 80 s apart from a time-lapse video recording, which is overlaid on the first image. The bright lines in the difference image represent the paths along which the filaments have moved and are measured using a crosshair cursor controlled by the mouse. Even short exposure to solar ultraviolet radiation strongly impairs the motility of the gliding cyanobacterium Phormidium uncinatum, while its velocity is not likewise affected. These effects are not due to either type I (free radical formation) or type II (singlet oxygen production) photodynamic reactions, since specific quenchers and scavengers indicative of these reactions failed to be effective.
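
    The "difference" image computation is straightforward to reproduce; a numpy sketch, assuming two grayscale frames taken 80 s apart and an arbitrary change threshold:

    ```python
    import numpy as np

    def motion_overlay(frame_t0, frame_t1, thresh=20):
        """Difference image between two frames taken 80 s apart: pixels
        that changed trace the paths along which filaments moved. Returns
        the first frame with the trace marked bright, plus the trace."""
        diff = np.abs(frame_t1.astype(int) - frame_t0.astype(int))
        trace = diff > thresh
        overlay = frame_t0.copy()
        overlay[trace] = 255  # bright lines = displacement paths
        return overlay, trace

    # Path length (trace pixels along one line) times the pixel scale,
    # divided by the 80 s interval, gives a first estimate of a filament's
    # gliding velocity.
    ```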

  4. Meteor tracking via local pattern clustering in spatio-temporal domain

    NASA Astrophysics Data System (ADS)

    Kukal, Jaromír; Klimt, Martin; Švihlík, Jan; Fliegel, Karel

    2016-09-01

    Reliable meteor detection is one of the crucial disciplines in astronomy. A variety of imaging systems is used for meteor path reconstruction. The traditional approach is based on the analysis of 2D image sequences obtained from a double-station video observation system. Precise localization of the meteor path is difficult due to atmospheric turbulence and other factors causing spatio-temporal fluctuations of the image background. The proposed technique performs non-linear preprocessing of image intensity using the Box-Cox transform, as recommended in our previous work. Both symmetric and asymmetric spatio-temporal differences are designed to be robust in the statistical sense. The resulting local patterns are processed by a data whitening technique, and the obtained vectors are classified via cluster analysis and a Self-Organizing Map (SOM).

  5. Markerless identification of key events in gait cycle using image flow.

    PubMed

    Vishnoi, Nalini; Duric, Zoran; Gerber, Naomi Lynn

    2012-01-01

    Gait analysis has been an interesting area of research for several decades. In this paper, we propose image-flow-based methods to compute the motion and velocities of different body segments automatically, using a single inexpensive video camera. We then identify and extract different events of the gait cycle (double-support, mid-swing, toe-off and heel-strike) from video images. Experiments were conducted in which four walking subjects were captured from the sagittal plane. Automatic segmentation was performed to isolate the moving body from the background. The head excursion and the shank motion were then computed to identify the key frames corresponding to different events in the gait cycle. Our approach does not require calibrated cameras or special markers to capture movement. We have also compared our method with the Optotrak 3D motion capture system and found our results in good agreement with the Optotrak results. The development of our method has potential use in the markerless and unencumbered video capture of human locomotion. Monitoring gait in homes and communities provides a useful application for the aged and the disabled. Our method could potentially be used as an assessment tool to determine gait symmetry or to establish the normal gait pattern of an individual.

  6. Improving human object recognition performance using video enhancement techniques

    NASA Astrophysics Data System (ADS)

    Whitman, Lucy S.; Lewis, Colin; Oakley, John P.

    2004-12-01

    Atmospheric scattering causes significant degradation in the quality of video images, particularly when imaging over long distances. The principal problem is the reduction in contrast due to scattered light. It is known that when the scattering particles are not too large compared with the imaging wavelength (i.e. Mie scattering), high spatial resolution information may still be contained within a low-contrast image. Unfortunately this information is not easily perceived by a human observer, particularly when using a standard video monitor. A secondary problem is the difficulty of achieving a sharp focus, since automatic focus techniques tend to fail in such conditions. Recently several commercial colour video processing systems have become available. These systems use various techniques to improve image quality in low-contrast conditions whilst retaining colour content. They produce improvements in subjective image quality in some situations, particularly in conditions of haze and light fog. There is also some evidence that video enhancement leads to improved ATR performance when used as a pre-processing stage. The psychological literature indicates that low contrast levels generally lead to a reduction in the performance of human observers in carrying out simple visual tasks. The aim of this paper is to present the results of an empirical study on object recognition in adverse viewing conditions. The chosen visual task was vehicle number plate recognition at long ranges (500 m and beyond). Two different commercial video enhancement systems are evaluated using the same protocol. The results show an increase in effective range, with some differences between the enhancement systems.

  7. A Marker-less Monitoring System for Movement Analysis of Infants Using Video Images

    NASA Astrophysics Data System (ADS)

    Shima, Keisuke; Osawa, Yuko; Bu, Nan; Tsuji, Tokuo; Tsuji, Toshio; Ishii, Idaku; Matsuda, Hiroshi; Orito, Kensuke; Ikeda, Tomoaki; Noda, Shunichi

    This paper proposes a marker-less motion measurement and analysis system for infants. The system calculates eight types of evaluation indices related to the movement of an infant, such as “amount of body motion” and “activity of body”, from binary images that are extracted from video images using background differencing and frame differencing. Medical doctors can thus intuitively understand the movements of infants without long-term observation, which may be helpful in supporting their diagnoses and in detecting disabilities and diseases at an early stage. The distinctive feature of this system is that the movements of infants can be measured without using any markers for motion capture, so it is expected that the natural and inherent tendencies of infants can be analyzed and evaluated. In this paper, the evaluation indices and features of movements of full-term infants (FTIs) and low birth weight infants (LBWIs) are compared using the developed prototype. We found that the amount of body motion and the symmetry of upper and lower body movements of LBWIs were lower than those of FTIs. The difference between the movements of FTIs and LBWIs can be evaluated using the proposed system.
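
    The paper does not give the formulas for its indices; one plausible reading of "amount of body motion", the fraction of pixels changed between consecutive frames, can be sketched as follows (the threshold and the definition itself are assumptions for illustration):

    ```python
    import numpy as np

    def amount_of_body_motion(frames, thresh=15):
        """Fraction of pixels changing between consecutive frames (frame
        difference), averaged over the sequence; `frames` is a list of 2D
        uint8 arrays. The paper's exact index definition may differ."""
        frames = [f.astype(int) for f in frames]
        ratios = [np.mean(np.abs(b - a) > thresh)
                  for a, b in zip(frames, frames[1:])]
        return float(np.mean(ratios))
    ```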

  8. Speckle reduction in echocardiography by temporal compounding and anisotropic diffusion filtering

    NASA Astrophysics Data System (ADS)

    Giraldo-Guzmán, Jader; Porto-Solano, Oscar; Cadena-Bonfanti, Alberto; Contreras-Ortiz, Sonia H.

    2015-01-01

    Echocardiography is a medical imaging technique based on ultrasound signals that is used to evaluate heart anatomy and physiology. Echocardiographic images are affected by speckle, a type of multiplicative noise that obscures details of the structures and reduces overall image quality. This paper presents an approach to enhancing echocardiography with two processing techniques: temporal compounding and anisotropic diffusion filtering. We used twenty echocardiographic videos, containing either one or three cardiac cycles, to test the algorithms. Two images from each cycle were aligned in space and averaged to obtain the compound images. These images were then processed with anisotropic diffusion filters to further improve their quality. The resulting images were evaluated using quality metrics and visual assessment by two medical doctors. The average improvement in signal-to-noise ratio was up to 100.29% for videos with three cycles and up to 32.57% for videos with one cycle.
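
    Both processing stages can be sketched compactly. The fragment below uses a classic Perona-Malik scheme as one common realization of anisotropic diffusion filtering, plus a simple mean over pre-aligned frames for temporal compounding; the iteration count, conduction constant and step size are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, lam=0.2):
    """Perona-Malik diffusion with an exponential conduction function:
    smooths speckle while preserving strong edges. Border wrap-around
    from np.roll is ignored for brevity."""
    u = img.astype(np.float64)
    for _ in range(n_iter):
        # Finite differences toward the four cardinal neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # g(|grad u|) = exp(-(|grad u| / kappa)^2) damps diffusion at edges
        u = u + lam * (np.exp(-(dn / kappa) ** 2) * dn +
                       np.exp(-(ds / kappa) ** 2) * ds +
                       np.exp(-(de / kappa) ** 2) * de +
                       np.exp(-(dw / kappa) ** 2) * dw)
    return u

def temporal_compound(aligned_frames):
    """Average spatially aligned frames from the same cardiac phase."""
    return np.mean(np.stack(aligned_frames), axis=0)
```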

  9. Video flow active control by means of adaptive shifted foveal geometries

    NASA Astrophysics Data System (ADS)

    Urdiales, Cristina; Rodriguez, Juan A.; Bandera, Antonio J.; Sandoval, Francisco

    2000-10-01

    This paper presents a control mechanism for video transmission that relies on transmitting non-uniform resolution images depending on the delay of the communication channel. These images are built in an active way so as to keep the areas of interest at the highest available resolution. In order to shift the area of high resolution over the image while keeping a data structure that conventional algorithms can process easily, a shifted-fovea multiresolution geometry of adaptive size is used. If delays are still too high, the different resolution areas of the image can also be transmitted at different rates. A functional system has been developed for corridor surveillance with static cameras. Tests with real video images have shown that the method maintains an almost constant rate of images per second as long as the channel is not saturated.
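
    The paper's adaptive shifted-fovea geometry is more refined than a two-level scheme, but the core idea can be sketched as follows: keep a movable fovea window at full resolution and encode the periphery at reduced resolution, so only the fovea and a small peripheral image need to be transmitted. The window coordinates and periphery scale below are illustrative assumptions.

```python
import cv2

def foveate(frame, fovea, periphery_scale=0.25):
    """Crude shifted-fovea encoding: down- and re-upsample the whole frame,
    then paste the full-resolution fovea window back in. The sender would
    transmit the small peripheral image plus the fovea crop."""
    x, y, w, h = fovea
    small = cv2.resize(frame, None, fx=periphery_scale, fy=periphery_scale,
                       interpolation=cv2.INTER_AREA)
    coarse = cv2.resize(small, (frame.shape[1], frame.shape[0]),
                        interpolation=cv2.INTER_LINEAR)
    coarse[y:y + h, x:x + w] = frame[y:y + h, x:x + w]
    return coarse
```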

  10. Method of center localization for objects containing concentric arcs

    NASA Astrophysics Data System (ADS)

    Kuznetsova, Elena G.; Shvets, Evgeny A.; Nikolaev, Dmitry P.

    2015-02-01

    This paper proposes a method for the automatic center localization of objects containing concentric arcs. The method utilizes structure tensor analysis and a voting scheme optimized with the Fast Hough Transform. Two applications of the proposed method are considered: (i) wheel tracking in a video-based system for automatic vehicle classification and (ii) tree growth ring analysis on tree cross-cut images.
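
    The structure tensor stage can be sketched as below: smoothed gradient products give a per-pixel dominant orientation and a coherence measure, which a subsequent voting stage (omitted here; the paper accelerates it with the Fast Hough Transform) can use to accumulate votes along gradient lines toward the common centre. This is a minimal sketch, not the authors' implementation.

```python
import cv2
import numpy as np

def structure_tensor_orientation(gray, sigma=2.0):
    """Per-pixel orientation and coherence from the smoothed structure
    tensor J = G_sigma * (grad I)(grad I)^T. On a concentric-arc object,
    gradient lines through high-coherence pixels intersect at the centre."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    jxx = cv2.GaussianBlur(gx * gx, (0, 0), sigma)
    jxy = cv2.GaussianBlur(gx * gy, (0, 0), sigma)
    jyy = cv2.GaussianBlur(gy * gy, (0, 0), sigma)
    # Dominant local orientation (radians)
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    # Coherence in [0, 1]: how strongly oriented (arc-like) the patch is
    coherence = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2) / (jxx + jyy + 1e-12)
    return theta, coherence
```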

  11. Refocusing Space Technology

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This video presents two examples of NASA Technology Transfer. The first is a Downhole Video Logger, which uses remote sensing technology to help in mining. The second example is the use of satellite image processing technology to enhance ultrasound images taken during pregnancy.

  12. Night Vision Goggle Training; Development and Production of Six Video Programs

    DTIC Science & Technology

    1992-11-01

    SUBJECT TERMS: Multimedia; Video production; Aerial photography; Night vision; Videodisc; Image intensification; Night vision goggles. NUMBER OF PAGES: 18. Recoverable abstract fragments: the programs serve as a reference tool at the squadron or wing level and run approximately ten minutes; they demonstrate NVG field of view, field of regard, scan techniques, image ...; training device modalities include didactic and video formats; a videodisc was produced to serve as an NVG audio-visual database.

  13. Automated content and quality assessment of full-motion-video for the generation of meta data

    NASA Astrophysics Data System (ADS)

    Harguess, Josh

    2015-05-01

    Virtually all of the video data, including full-motion video (FMV), currently collected and stored in support of missions is degraded to some extent by image acquisition and compression artifacts. Additionally, video collected by wide-area motion imagery (WAMI) surveillance systems, unmanned aerial vehicles (UAVs) and similar sources is often of such low quality, or otherwise corrupted, that it is not worth storing or analyzing. To make progress on automatic video analysis, the first problem to solve is deciding whether the content of the video is worth analyzing at all. We present work in progress that addresses three types of scenes typically found in real-world data stored in support of Department of Defense (DoD) missions: little or no motion in the scene, large occlusions in the scene, and fast camera motion. Each of these produces video that is generally not usable by an analyst or an automated algorithm for mission support, and should therefore be removed or flagged to the user as such. We utilize recent computer vision advances in motion detection and optical flow to automatically assess FMV and to identify and generate meta-data (tags) for video segments that exhibit the unwanted scenarios described above. Results are shown on representative real-world video data.
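
    A minimal version of such content tagging, assuming dense optical flow magnitude as the motion statistic, might look like the sketch below; the "static" and "fast camera motion" thresholds are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def tag_segment(prev_gray, gray, low_thr=0.1, high_thr=8.0):
    """Tag a frame pair by its median dense-flow magnitude: near zero
    suggests a static scene, very large suggests fast camera motion;
    thresholds are in pixels/frame and purely illustrative."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    med = np.median(mag)
    if med < low_thr:
        return "static"
    if med > high_thr:
        return "fast_camera_motion"
    return "usable"
```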

  14. Dependency of human target detection performance on clutter and quality of supporting image analysis algorithms in a video surveillance task

    NASA Astrophysics Data System (ADS)

    Huber, Samuel; Dunau, Patrick; Wellig, Peter; Stein, Karin

    2017-10-01

    Background: In target detection, success rates depend strongly on human observer performance. Two prior studies tested the contributions of target detection algorithms and prior training sessions. The aim of this Swiss-German cooperation study was to evaluate the dependency of human observer performance on the quality of supporting image analysis algorithms. Methods: The participants were presented with 15 different video sequences. Their task was to detect all targets in the shortest possible time. Each video sequence showed a heavily cluttered simulated public area from a different viewing angle. In each video sequence, the number of avatars in the area was varied between 100, 150 and 200 subjects, while the proportion of targets was kept at 10%. The number of marked targets varied from 0, 5, 10 and 20 up to 40 marked subjects, with the positive predictive value of the detection algorithm held at 20%. During the task, workload level was assessed with an acoustic secondary task. Detection rates and detection times for the targets were analyzed using inferential statistics. Results: Target Detection Time increased and Target Detection Rate decreased with increasing numbers of avatars. Secondary Task Reaction Time showed the same pattern, while Secondary Task Hit Rate was unaffected. Furthermore, we found a trend towards a u-shaped relationship between the number of markings and Secondary Task Reaction Time, indicating increased workload. Conclusion: The results may indicate useful criteria for the design of training and support of observers in observational tasks.

  15. Video-based noncooperative iris image segmentation.

    PubMed

    Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig

    2011-02-01

    In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve overall efficiency, uses direct least-squares ellipse fitting to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system was set up to collect noncooperative iris video images, and an objective method was used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.
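
    The boundary-modelling step can be sketched with OpenCV's least-squares ellipse fitters, assuming a binary pupil mask has already been produced by the coarse stage; whether OpenCV's direct variant matches the authors' exact formulation is an assumption.

```python
import cv2

def fit_pupil_ellipse(binary_pupil_mask):
    """Model a deformed pupil boundary as an ellipse by least-squares
    fitting to the largest contour of the binary mask. OpenCV also offers
    fitEllipseDirect, the direct least-squares (Fitzgibbon-style) variant."""
    contours, _ = cv2.findContours(binary_pupil_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:             # fitEllipse needs at least 5 points
        return None
    return cv2.fitEllipse(largest)   # ((cx, cy), (major, minor), angle)
```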

  16. Lipid Vesicle Shape Analysis from Populations Using Light Video Microscopy and Computer Vision

    PubMed Central

    Zupanc, Jernej; Drašler, Barbara; Boljte, Sabina; Kralj-Iglič, Veronika; Iglič, Aleš; Erdogmus, Deniz; Drobne, Damjana

    2014-01-01

    We present a method for giant lipid vesicle shape analysis that combines manually guided large-scale video microscopy with computer vision algorithms to enable the analysis of vesicle populations. The method retains the benefits of light microscopy and enables non-destructive analysis of vesicles from suspensions containing up to several thousand lipid vesicles (1–50 µm in diameter). For each sample, image analysis was employed to extract the number of vesicles and the distributions of their projected diameters and isoperimetric quotients (a measure of contour roundness). This enables a comparison of samples from the same population over time, or the comparison of a treated population with a control. Although vesicles in suspension are heterogeneous in size and shape and are distributed distinctly non-homogeneously throughout the suspension, the method captures and analyzes repeatable vesicle samples that are representative of the population inspected. PMID:25426933
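
    The two shape descriptors are simple contour measurements. The sketch below computes, for each vesicle contour in a binary mask, the circle-equivalent projected diameter and the isoperimetric quotient Q = 4πA/P², which equals 1 for a circle and decreases as the contour departs from roundness; the minimum-area filter is an illustrative assumption.

```python
import cv2
import numpy as np

def vesicle_shape_stats(binary_mask, min_area=50):
    """Per-contour (diameter, Q) pairs from a segmented vesicle image.
    Q = 4*pi*A / P**2; diameter is the circle-equivalent diameter."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    stats = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:                # skip debris / tiny detections
            continue
        perim = cv2.arcLength(c, closed=True)
        q = 4.0 * np.pi * area / (perim ** 2)
        diameter = 2.0 * np.sqrt(area / np.pi)
        stats.append((diameter, q))
    return stats
```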

  17. Non-mydriatic video ophthalmoscope to measure fast temporal changes of the human retina

    NASA Astrophysics Data System (ADS)

    Tornow, Ralf P.; Kolář, Radim; Odstrčilík, Jan

    2015-07-01

    The analysis of fast temporal changes of the human retina can be used to gain insight into normal physiological behavior and to detect pathological deviations, which can be important for the early detection of glaucoma and other eye diseases. We developed a small, lightweight, USB-powered video ophthalmoscope that records video sequences of the human retina at 25 frames per second or more without dilating the pupil. Short sequences (about 10 s) of the optic nerve head (20° x 15°) are recorded from subjects and registered offline using a two-stage process (phase correlation followed by a Lucas-Kanade refinement) to compensate for eye movements. Different parameters can then be calculated from the registered video sequences. Two applications are described here: measurement of (i) cardiac-cycle-induced pulsatile reflection changes and (ii) eye movements and fixation pattern. Pulsatile reflection changes are caused by the changing blood volume in the retina; waveform and pulse parameters such as amplitude and rise time can be measured in any selected area within the retinal image. The fixation pattern ΔY(ΔX) can be assessed from eye movements during video acquisition, where the movements ΔX[t], ΔY[t] are derived from the image registration results with high temporal (40 ms) and spatial (1.86 arcmin) resolution. Parameters of pulsatile reflection changes and fixation pattern can be affected in early glaucoma, and the method described here may support the early detection of glaucoma and other eye diseases.
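
    The first registration stage can be sketched directly with OpenCV's phase correlation; the Lucas-Kanade refinement stage is omitted here, and using the first frame as the reference is an assumption for illustration.

```python
import cv2
import numpy as np

def eye_movement_trace(frames):
    """Frame-to-reference shifts (dx[t], dy[t]) by phase correlation on
    Hanning-windowed grayscale frames; plotting dy against dx gives the
    fixation pattern. `frames` is a list of 2-D grayscale arrays."""
    ref = np.float32(frames[0])
    win = cv2.createHanningWindow(ref.shape[::-1], cv2.CV_32F)
    shifts = []
    for f in frames[1:]:
        (dx, dy), _ = cv2.phaseCorrelate(ref, np.float32(f), win)
        shifts.append((dx, dy))
    return shifts
```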

  18. The AAPM/RSNA physics tutorial for residents: digital fluoroscopy.

    PubMed

    Pooley, R A; McKinney, J M; Miller, D A

    2001-01-01

    A digital fluoroscopy system is most commonly configured as a conventional fluoroscopy system (tube, table, image intensifier, video system) in which the analog video signal is converted to and stored as digital data. Other methods of acquiring the digital data (e.g., digital or charge-coupled-device video and flat-panel detectors) will become more prevalent in the future. Fundamental concepts of digital imaging in general include binary numbers, pixels, and gray levels. Digital image data allow the convenient use of several image processing techniques, including last-image hold, gray-scale processing, temporal frame averaging, and edge enhancement. Real-time subtraction of digital fluoroscopic images after injection of contrast material has led to the widespread use of digital subtraction angiography (DSA). Additional image processing techniques used with DSA include road mapping, image fade, mask pixel shift, frame summation, and vessel size measurement. Peripheral angiography performed with an automatic moving table allows imaging of the peripheral vasculature with a single contrast material injection.
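
    The DSA core step is commonly implemented as a logarithmic subtraction, so that the difference tracks attenuation (and hence contrast material) rather than raw intensity. The sketch below shows that step only; the offset `eps` is an illustrative guard against log(0), not a value from the tutorial.

```python
import numpy as np

def dsa_frame(mask, contrast, eps=1.0):
    """Digital subtraction angiography: log-subtract a pre-contrast mask
    frame from a contrast-filled frame, following Beer-Lambert attenuation."""
    return (np.log(contrast.astype(np.float64) + eps)
            - np.log(mask.astype(np.float64) + eps))
```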

  19. Grayscale image segmentation for real-time traffic sign recognition: the hardware point of view

    NASA Astrophysics Data System (ADS)

    Cao, Tam P.; Deng, Guang; Elton, Darrell

    2009-02-01

    In this paper, we study several grayscale-based image segmentation methods for real-time road sign recognition on an FPGA hardware platform. The performance of different image segmentation algorithms under different lighting conditions is first compared using PC simulation. Based on these results and analysis, suitable algorithms are implemented and tested on a real-time FPGA speed-sign detection system. Experimental results show that the system using segmented images requires significantly fewer hardware resources on the FPGA while maintaining comparable system performance. The system is capable of processing 60 live video frames per second.
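
    As one hardware-friendly example of the kind of grayscale segmentation compared in the paper, a local-mean adaptive threshold needs only additions and comparisons per pixel, which maps well onto an FPGA pipeline; the block size and offset below are illustrative assumptions.

```python
import cv2

def segment_gray(frame_gray, block=31, c=10):
    """Binarize an 8-bit grayscale frame against its local mean, which
    tolerates uneven lighting better than a single global threshold."""
    return cv2.adaptiveThreshold(frame_gray, 255,
                                 cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, block, c)
```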

  20. Three-dimensional real-time imaging of bi-phasic flow through porous media

    NASA Astrophysics Data System (ADS)

    Sharma, Prerna; Aswathi, P.; Sane, Anit; Ghosh, Shankar; Bhattacharya, S.

    2011-11-01

    We present a scanning laser-sheet video imaging technique to image bi-phasic flow in three-dimensional porous media in real time with pore-scale spatial resolution, i.e., 35 μm and 500 μm in the directions parallel and perpendicular to the flow, respectively. The technique is illustrated for the case of viscous fingering. Using suitable image processing protocols, both the morphology and the movement of the two-fluid interface were quantitatively estimated. Furthermore, a macroscopic parameter such as the displacement efficiency, obtained from a microscopic (pore-scale) analysis, demonstrates the versatility and usefulness of the method.
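
    A pore-scale displacement efficiency of the kind mentioned can be sketched as the invaded fraction of the segmented pore space; the mask conventions below are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def displacement_efficiency(invading_mask, pore_mask):
    """Fraction of the pore space occupied by the invading fluid, computed
    per image from binary segmentation masks (nonzero = True)."""
    pore = np.count_nonzero(pore_mask)
    if pore == 0:
        return 0.0
    return np.count_nonzero(np.logical_and(invading_mask, pore_mask)) / pore
```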
