Compression of computer generated phase-shifting hologram sequence using AVC and HEVC
NASA Astrophysics Data System (ADS)
Xing, Yafei; Pesquet-Popescu, Béatrice; Dufaux, Frederic
2013-09-01
With the capability of achieving twice the compression ratio of Advanced Video Coding (AVC) at similar reconstruction quality, High Efficiency Video Coding (HEVC) is expected to become the new leading video coding technique. In order to reduce the storage and transmission burden of digital holograms, in this paper we propose to use HEVC for compressing phase-shifting digital hologram sequences (PSDHS). By simulating phase-shifting digital holography (PSDH) interferometry, interference patterns between illuminated three-dimensional (3D) virtual objects and a stepwise phase-changed reference wave are generated as digital holograms. The hologram sequences are obtained by moving the virtual objects and are compressed with AVC and HEVC. The experimental results show that both AVC and HEVC compress PSDHS efficiently, with HEVC giving better performance. Good compression rate and reconstruction quality can be obtained at bitrates above 15000 kbps.
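As an illustration of the kind of rate-distortion comparison described in this abstract, the sketch below encodes an uncompressed sequence with AVC (libx264) and HEVC (libx265) at the same target bitrate and reports the PSNR of each reconstruction. It is not the authors' pipeline: it assumes an ffmpeg build with both encoders, and the input filename holo.y4m is hypothetical.

```python
# Sketch: encode a sequence with AVC (libx264) and HEVC (libx265) at the same
# target bitrate and report PSNR against the original.
# Assumes ffmpeg with libx264/libx265; "holo.y4m" is a hypothetical input file.
import re
import subprocess

def encode_and_measure(source, codec, bitrate="15000k"):
    encoded = f"out_{codec}.mp4"
    subprocess.run(
        ["ffmpeg", "-y", "-i", source, "-c:v", codec, "-b:v", bitrate, encoded],
        check=True, capture_output=True)
    # The psnr filter prints "PSNR ... average:xx.xx ..." on stderr.
    probe = subprocess.run(
        ["ffmpeg", "-i", encoded, "-i", source, "-lavfi", "psnr", "-f", "null", "-"],
        capture_output=True, text=True)
    match = re.search(r"average:([\d.]+)", probe.stderr)
    return float(match.group(1)) if match else None

if __name__ == "__main__":
    for codec in ("libx264", "libx265"):
        print(codec, encode_and_measure("holo.y4m", codec), "dB")
```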
Utilization of KSC Present Broadband Communications Data System for Digital Video Services
NASA Technical Reports Server (NTRS)
Andrawis, Alfred S.
2002-01-01
This report covers a feasibility study of utilizing the present KSC broadband communications data system (BCDS) for digital video services. Digital video services include compressed digital TV delivery and video-on-demand. Furthermore, the study examines the possibility of providing interactive video-on-demand to desktop personal computers via the KSC computer network.
Utilization of KSC Present Broadband Communications Data System For Digital Video Services
NASA Technical Reports Server (NTRS)
Andrawis, Alfred S.
2001-01-01
This report covers a feasibility study of utilizing the present KSC broadband communications data system (BCDS) for digital video services. Digital video services include compressed digital TV delivery and video-on-demand. Furthermore, the study examines the possibility of providing interactive video-on-demand to desktop personal computers via the KSC computer network.
Real-time demonstration hardware for enhanced DPCM video compression algorithm
NASA Technical Reports Server (NTRS)
Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.
1992-01-01
The lack of available wideband digital links, as well as the complexity of implementing bandwidth-efficient digital video CODECs (encoders/decoders), has kept the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of, and trend toward, digital video compression techniques for transmission of high quality video from space and has therefore developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, a non-uniform quantizer, and a multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor, and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development, along with implementation of a buffer control algorithm to accommodate the variable data rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals where high quality reconstruction is desirable (e.g., Space Station video transmission, transmission direct-to-the-home via direct broadcast satellite systems, or cable television distribution to system headends and direct-to-the-home).
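The abstract does not give the predictor taps, quantizer breakpoints, or Huffman tables of the NASA codec, so the following sketch only illustrates the intra-field DPCM loop itself: a fixed (non-adaptive) previous-pixel predictor and an illustrative non-uniform quantizer whose reconstruction levels are fine near zero and coarse in the tails.

```python
# Sketch of an intra-field DPCM loop with a fixed (non-adaptive) previous-pixel
# predictor and a non-uniform quantizer; the actual predictor taps, quantizer
# breakpoints and Huffman tables of the NASA codec are not given in the abstract.
import numpy as np

# Illustrative non-uniform reconstruction levels: fine near zero, coarse at the tails.
LEVELS = np.array([-96, -48, -20, -8, -2, 0, 2, 8, 20, 48, 96], dtype=np.int32)

def quantize(diff):
    """Map a prediction error to the nearest reconstruction level (index, value)."""
    idx = int(np.argmin(np.abs(LEVELS - diff)))
    return idx, int(LEVELS[idx])

def dpcm_encode_line(line):
    """Encode one video line; returns quantizer indices and the local reconstruction."""
    indices, recon = [], np.empty_like(line)
    prediction = 128                      # fixed start-of-line predictor
    for i, pixel in enumerate(line):
        idx, qdiff = quantize(int(pixel) - prediction)
        indices.append(idx)               # these indices would feed the Huffman coder
        recon[i] = np.clip(prediction + qdiff, 0, 255)
        prediction = int(recon[i])        # predict the next pixel from the reconstruction
    return indices, recon

line = np.array([118, 120, 121, 119, 200, 203, 201, 90, 88, 87], dtype=np.uint8)
indices, recon = dpcm_encode_line(line)
print("original :", line.tolist())
print("reconstr.:", recon.tolist())
```

The quantizer indices produced by such a loop are exactly the symbols that a multilevel Huffman coder would then map to variable-length codewords.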
Digital cinema video compression
NASA Astrophysics Data System (ADS)
Husak, Walter
2003-05-01
The Motion Picture Industry began a transition from film based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offers the prospect to increase the quality of the theatrical experience for the audience, reduce distribution costs to the distributors, and create new business opportunities for the theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for the content owners and the theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications for video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.
Digital compression algorithms for HDTV transmission
NASA Technical Reports Server (NTRS)
Adkins, Kenneth C.; Shalkhauser, Mary Jo; Bibyk, Steven B.
1990-01-01
Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained, and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.
Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel
NASA Technical Reports Server (NTRS)
Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.
1989-01-01
Advances in very large-scale integration and recent work in the field of bandwidth efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the CODEC are described, and performance results are provided.
Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel
NASA Technical Reports Server (NTRS)
Shalkhauser, Mary Jo; Whyte, Wayne A.
1991-01-01
Advances in very large scale integration and recent work in the field of bandwidth-efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM (differential pulse code modulation)-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the codec are described, and performance results are provided.
Towards a Visual Quality Metric for Digital Video
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1998-01-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
Automated Assessment of Visual Quality of Digital Video
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ellis, Stephen R. (Technical Monitor)
1997-01-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images[1-4]. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Celeste, Fabrizio; Gentile, Francesco; Mantero, Antonio; Montericcio, Vincenzo; Muratori, Manuela
2007-05-01
Large files produced by standard compression algorithms slow down the spread of digital and tele-echocardiography. We validated high-grade compression of echocardiographic video with the new Moving Picture Experts Group (MPEG)-4 algorithms in a multicenter study. Seven expert cardiologists blindly scored (5-point scale) 165 uncompressed and compressed 2-dimensional and color Doppler video clips, based on combined diagnostic content and image quality (uncompressed files as references). One digital video and 3 MPEG-4 algorithms (WM9, MV2, and DivX) were used, the latter at 3 compression levels (0%, 35%, and 60%). Compressed file sizes decreased from 12 to 83 MB to 0.03 to 2.3 MB (1:1051-1:26 reduction ratios). Mean SD of differences was 0.81 for intraobserver variability (uncompressed and digital video files). Compared with uncompressed files, only the DivX mean score at 35% (P = .04) and 60% (P = .001) compression was significantly reduced. At subcategory analysis, these differences were still significant for gray-scale and fundamental imaging but not for color or second harmonic tissue imaging. Original image quality, session sequence, compression grade, and bitrate were all independent determinants of mean score. Our study supports the use of MPEG-4 algorithms to greatly reduce echocardiographic file sizes, thus facilitating archiving and transmission. Quality evaluation studies should account for the many independent variables that affect image quality grading.
Digital Video of Live-Scan Fingerprint Data
National Institute of Standards and Technology Data Gateway
NIST Digital Video of Live-Scan Fingerprint Data (PC database for purchase) NIST Special Database 24 contains MPEG-2 (Moving Picture Experts Group) compressed digital video of live-scan fingerprint data. The database is being distributed for use in developing and testing of fingerprint verification systems.
Digital Video (DV): A Primer for Developing an Enterprise Video Strategy
NASA Astrophysics Data System (ADS)
Talovich, Thomas L.
2002-09-01
The purpose of this thesis is to provide an overview of digital video production and delivery. The thesis presents independent research demonstrating the educational value of incorporating video and multimedia content in training and education programs. The thesis explains the fundamental concepts associated with the process of planning, preparing, and publishing video content and assists in the development of follow-on strategies for incorporation of video content into distance training and education programs. The thesis provides an overview of the following technologies: Digital Video, Digital Video Editors, Video Compression, Streaming Video, and Optical Storage Media.
Data compression techniques applied to high resolution high frame rate video technology
NASA Technical Reports Server (NTRS)
Hartz, William G.; Alexovich, Robert E.; Neustadter, Marc S.
1989-01-01
An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of methods of video data compression, described in the open literature, was conducted. The survey examines compression methods employing digital computing. The results of the survey are presented. They include a description of each method and an assessment of image degradation and video data parameters. An assessment is made of present and near-term future technology for implementation of video data compression in high-speed imaging systems. Results of the assessment are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementation of video data compression, are presented. Case studies of three microgravity experiments are presented and specific compression techniques and implementations are recommended.
The Coming of Digital Desktop Media.
ERIC Educational Resources Information Center
Galbreath, Jeremy
1992-01-01
Discusses the movement toward digital-based platforms including full-motion video for multimedia products. Hardware- and software-based compression techniques for digital data storage are considered, and a chart summarizes features of Digital Video Interactive, Moving Pictures Experts Group, P x 64, Joint Photographic Experts Group, Apple…
NASA Astrophysics Data System (ADS)
Tinker, Michael
1998-12-01
We are on the brink of transforming the movie theater with electronic cinema. Technologies are converging to make true electronic cinema, with a 'film look,' possible for the first time. In order to realize the possibilities, we must leverage current technologies in video compression, electronic projection, digital storage, and digital networks. All these technologies have only recently improved sufficiently to make their use in the electronic cinema worthwhile. Video compression, such as MPEG-2, is designed to overcome the limitations of video, primarily limited bandwidth. As a result, although HDTV offers a serious challenge to film-based cinema, it falls short in a number of areas, such as color depth. Freed from the constraints of video transmission, and using the recently improved technologies available, electronic cinema can move beyond video. Although movies will have to be compressed for some time, what is needed is a concept of 'cinema compression' rather than video compression. Electronic cinema will open up vast new possibilities for viewing experiences at the theater, while at the same time offering the potential for new economies in the movie industry.
Real-time transmission of digital video using variable-length coding
NASA Technical Reports Server (NTRS)
Bizon, Thomas P.; Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.
1993-01-01
Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
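A minimal sketch of the Huffman construction the abstract relies on, using a skewed distribution of DPCM-like difference values; the code tables of the actual hardware are not reproduced here.

```python
# Sketch: build a Huffman code from symbol frequencies, illustrating why frequent
# DPCM difference values get short codewords and rare ones get long codewords.
import heapq
from collections import Counter

def huffman_code(frequencies):
    """Return {symbol: bitstring} for the given {symbol: count} table."""
    heap = [[count, i, {symbol: ""}] for i, (symbol, count) in enumerate(frequencies.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in lo[2].items()}
        merged.update({s: "1" + code for s, code in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
    return heap[0][2]

# Highly skewed differential-video-like data: small differences dominate.
samples = [0] * 70 + [1] * 12 + [-1] * 11 + [2] * 4 + [-2] * 2 + [7]
code = huffman_code(Counter(samples))
for symbol, bits in sorted(code.items(), key=lambda kv: len(kv[1])):
    print(f"symbol {symbol:+d} -> {bits}")
```

Running it shows the most frequent symbol (0) receiving the shortest codeword, which is the source of the compression gain described above.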
Influence of video compression on the measurement error of the television system
NASA Astrophysics Data System (ADS)
Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.
2015-05-01
Video data require a very large memory capacity, and the urgent need to transfer large volumes of video over various networks makes the quality/volume trade-off of an encoding method one of the most pressing problems. Digital TV signal compression reduces the amount of data used to represent the video stream and thus the bandwidth required for transmission and storage. When television measuring systems are used, however, the uncertainties introduced by compression of the video signal must be taken into account. Many digital compression methods exist; the aim of the proposed work is to study the influence of video compression on the measurement error of television systems. Measurement error of an object parameter is the main characteristic of a television measuring system, and accuracy characterizes the difference between the measured and the actual parameter value. The optical system is one source of error in television measurements; the processing of the received video signal is another. With compression at a constant data rate, errors lead to large distortions; with compression at constant quality, they increase the amount of data required to transmit or record a frame. Intra-coding reduces the spatial redundancy within a frame (or field) of the television image, which is caused by the strong correlation between image elements. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of mutually uncorrelated coefficients, to which entropy coding can be applied to reduce the digital stream. A transformation can be chosen such that, for typical images, most of the coefficients are almost zero; discarding these near-zero coefficients reduces the stream further. The discrete cosine transform is the most widely used such transformation. In this paper the errors of television measuring systems and of data compression protocols are analyzed, the main characteristics of the measuring systems and the sources of their error are identified, the most effective video compression methods are determined, and the influence of compression error on television measuring systems is investigated. The results will improve the accuracy of such measuring systems. In a television image quality measuring system the distortions comprise those also present in analog systems plus distortions specific to coding/decoding of the digital video signal and to errors in the transmission channel. Distortions associated with encoding/decoding include quantization noise, reduced resolution, the mosaic effect, the "mosquito" effect, edging at sharp brightness transitions, color blur, false patterns, the "dirty window" effect, and other defects. The video compression algorithms used in television measuring systems are based on encoding image fragments with intra- and inter-frame prediction. The encoding/decoding process is non-linear in space and time, because the playback quality at the receiver depends on the preceding and succeeding frames; this can lead to inadequate reproduction of the sub-picture and of the corresponding measurement signal.
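The intra-coding argument in this abstract (decorrelate a block with an orthogonal transform, then entropy-code the mostly near-zero coefficients) can be illustrated with a small numpy sketch; the smooth test block and the threshold standing in for quantization are illustrative only.

```python
# Sketch: 8x8 DCT-II of an image block, showing how a typical (highly correlated)
# block is converted to transform coefficients that are mostly near zero and can
# therefore be quantized and entropy-coded compactly.
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

C = dct_matrix()
# A smooth 8x8 block (horizontal ramp) such as often occurs in natural images.
block = np.tile(np.arange(100, 164, 8, dtype=float), (8, 1))
coeffs = C @ block @ C.T                 # 2-D separable DCT
significant = np.abs(coeffs) > 1.0       # crude threshold standing in for quantization
print("nonzero coefficients after thresholding:", int(significant.sum()), "of 64")
```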
NASA Technical Reports Server (NTRS)
1996-01-01
Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for real-time video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.
2001-12-01
A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce a significant blocking effect at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper in the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to perform superiorly to JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.
Digital Motion Imagery, Interoperability Challenges for Space Operations
NASA Technical Reports Server (NTRS)
Grubbs, Rodney
2012-01-01
With advances in available bandwidth from spacecraft and between terrestrial control centers, digital motion imagery and video is becoming more practical as a data gathering tool for science and engineering, as well as for sharing missions with the public. The digital motion imagery and video industry has done a good job of creating standards for compression, distribution, and physical interfaces. Compressed data streams can easily be transmitted or distributed over radio frequency, internet protocol, and other data networks. All of these standards, however, can make sharing video between spacecraft and terrestrial control centers a frustrating and complicated task when different standards and protocols are used by different agencies. This paper will explore the challenges presented by the abundance of motion imagery and video standards, interfaces and protocols, with suggestions for common formats that could simplify interoperability between spacecraft and ground support systems. Real-world examples from the International Space Station will be examined. The paper will also discuss recent trends in the development of new video compression algorithms, as well as the likely expanded use of Delay (or Disruption) Tolerant Networking nodes.
NASA Technical Reports Server (NTRS)
1975-01-01
Two digital video data compression systems directly applicable to the Space Shuttle TV Communication System were described: (1) For the uplink, a low-rate monochrome data compressor is used. The compression is achieved by using a motion detection technique in the Hadamard domain. To transform the variable source rate into a fixed rate, an adaptive rate buffer is provided. (2) For the downlink, a color data compressor is considered. The compression is achieved first by intra-color transformation of the original signal vector into a vector which has lower information entropy. Then two-dimensional data compression techniques are applied to the Hadamard-transformed components of this vector. Mathematical models and data reliability analyses were also provided for the above video data compression techniques transmitted over a channel-encoded Gaussian channel. It was shown that substantial gains can be achieved by the combination of video source and channel coding.
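A sketch of the Hadamard-domain motion detection idea for the uplink compressor; block size, threshold, and the decision rule are illustrative, not taken from the report, and the adaptive rate buffer is omitted.

```python
# Sketch: motion detection in the Hadamard domain, in the spirit of the uplink
# monochrome compressor described above; block size, thresholds and the rate
# buffer policy are illustrative, not taken from the report.
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h / np.sqrt(n)                # orthonormal scaling

H = hadamard(8)

def changed_blocks(prev_frame, curr_frame, threshold=8.0):
    """Return coordinates of 8x8 blocks whose Hadamard coefficients moved."""
    moved = []
    for r in range(0, prev_frame.shape[0], 8):
        for c in range(0, prev_frame.shape[1], 8):
            a = H @ prev_frame[r:r+8, c:c+8] @ H.T
            b = H @ curr_frame[r:r+8, c:c+8] @ H.T
            if np.max(np.abs(a - b)) > threshold:
                moved.append((r, c))     # only these blocks would be re-sent
    return moved

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (32, 32)).astype(float)
curr = prev.copy()
curr[8:16, 16:24] += 40                  # simulated moving object in one block
print("blocks to transmit:", changed_blocks(prev, curr))
```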
NASA Astrophysics Data System (ADS)
Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.
2008-12-01
Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and the diagnostic credibility it directly influences. It has been proved that it is possible to meet these contradictory requirements halfway for long-lasting and low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were the case study investigated). As the main supporting assumption, it has been accepted that the content can be compressed as far as clinicians are not able to sense a loss of video diagnostic fidelity (a visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests toward their usability in medical video libraries. Subjective tests involved a panel of clinicians who had to classify compressed bronchoscopic video content according to its quality under the bubble sort algorithm. For objective tests, two metrics (hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over a whole sequence.
CTS digital video college curriculum-sharing experiment. [Communications Technology Satellite
NASA Technical Reports Server (NTRS)
Lumb, D. R.; Sites, M. J.
1974-01-01
NASA-Ames Research Center, Stanford University, and Carleton University, Ottawa, Canada, are participating in a joint experiment to evaluate the feasibility and effectiveness of college curriculum sharing using compressed digital television and the Communications Technology Satellite (CTS). Each university will offer televised courses to the other during the 1976-1977 academic year via CTS, a joint program by NASA and the Canadian Department of Communications. The video compression techniques to be demonstrated will enable economical interconnection of educational institutions using existing and planned domestic satellites.
A study on multiresolution lossless video coding using inter/intra frame adaptive prediction
NASA Astrophysics Data System (ADS)
Nakachi, Takayuki; Sawabe, Tomoko; Fujii, Tetsuro
2003-06-01
Lossless video coding is required in the fields of archiving and editing digital cinema or digital broadcasting contents. This paper combines a discrete wavelet transform and adaptive inter/intra-frame prediction in the wavelet transform domain to create multiresolution lossless video coding. The multiresolution structure offered by the wavelet transform facilitates interchange among several video source formats such as Super High Definition (SHD) images, HDTV, SDTV, and mobile applications. Adaptive inter/intra-frame prediction is an extension of JPEG-LS, a state-of-the-art lossless still image compression standard. Based on the image statistics of the wavelet transform domains in successive frames, inter/intra frame adaptive prediction is applied to the appropriate wavelet transform domain. This adaptation offers superior compression performance. This is achieved with low computational cost and no increase in additional information. Experiments on digital cinema test sequences confirm the effectiveness of the proposed algorithm.
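A sketch of the per-subband inter/intra mode decision only, under stated simplifications: a plain one-level Haar analysis stands in for the paper's wavelet transform (the integer, lossless lifting steps are not reproduced) and a left-neighbour difference stands in for the JPEG-LS style intra predictor.

```python
# Sketch: one-level Haar analysis followed by a per-subband choice between
# inter-frame prediction (previous frame's subband) and a simple intra predictor
# (left neighbour), selected by which residual is cheaper; the actual JPEG-LS
# style predictors of the paper are not reproduced here.
import numpy as np

def haar2d(frame):
    """One-level 2-D Haar transform; returns the LL, LH, HL, HH subbands."""
    a = (frame[:, 0::2] + frame[:, 1::2]) / 2.0
    d = (frame[:, 0::2] - frame[:, 1::2]) / 2.0
    ll, lh = (a[0::2] + a[1::2]) / 2.0, (a[0::2] - a[1::2]) / 2.0
    hl, hh = (d[0::2] + d[1::2]) / 2.0, (d[0::2] - d[1::2]) / 2.0
    return {"LL": ll, "LH": lh, "HL": hl, "HH": hh}

def choose_prediction(curr_band, prev_band):
    """Pick inter or intra prediction for one subband by residual cost."""
    inter = curr_band - prev_band                         # temporal prediction
    intra = curr_band - np.roll(curr_band, 1, axis=1)     # left-neighbour prediction
    return ("inter", inter) if np.abs(inter).sum() < np.abs(intra).sum() else ("intra", intra)

rng = np.random.default_rng(1)
prev = rng.integers(0, 256, (16, 16)).astype(float)
curr = prev + rng.normal(0, 1, prev.shape)                # nearly static scene
for name in ("LL", "LH", "HL", "HH"):
    mode, _ = choose_prediction(haar2d(curr)[name], haar2d(prev)[name])
    print(name, "->", mode)
```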
Picture data compression coder using subband/transform coding with a Lempel-Ziv-based coder
NASA Technical Reports Server (NTRS)
Glover, Daniel R. (Inventor)
1995-01-01
Digital data coders/decoders are used extensively in video transmission. A digitally encoded video signal is separated into subbands. Separating the video into subbands allows transmission at low data rates. Once the data is separated into these subbands it can be coded and then decoded by statistical coders such as the Lempel-Ziv based coder.
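A sketch of the patented idea as summarized above: separate the image into subbands, then apply a Lempel-Ziv style statistical coder to each; zlib is used here purely as a readily available stand-in for the coder, and the subband filter is a simple integer Haar split.

```python
# Sketch: separate an image into subbands, then apply a Lempel-Ziv style
# statistical coder (zlib here as a readily available stand-in) to each subband.
import zlib
import numpy as np

def haar_subbands(image):
    a = (image[:, 0::2] + image[:, 1::2]) // 2
    d = image[:, 0::2] - image[:, 1::2]
    ll = (a[0::2] + a[1::2]) // 2
    lh = a[0::2] - a[1::2]
    hl = (d[0::2] + d[1::2]) // 2
    hh = d[0::2] - d[1::2]
    return {"LL": ll, "LH": lh, "HL": hl, "HH": hh}

rng = np.random.default_rng(7)
# Smooth synthetic image: the high-frequency subbands will be mostly small values.
image = np.cumsum(rng.integers(-1, 2, (64, 64)), axis=1).astype(np.int16)
for name, band in haar_subbands(image).items():
    raw = band.astype(np.int16).tobytes()
    packed = zlib.compress(raw, level=9)
    print(f"{name}: {len(raw)} -> {len(packed)} bytes")
```

The high-frequency subbands of a smooth input are mostly small values, which is why the statistical coder shrinks them far more than the low-pass band.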
Toward a perceptual video-quality metric
NASA Astrophysics Data System (ADS)
Watson, Andrew B.
1998-07-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, in order that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
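The general structure of such a DCT-domain metric can be sketched as follows: the coding error in each coefficient is scaled by a visibility threshold and the scaled errors are pooled. The threshold matrix and pooling exponent below are illustrative placeholders, not the calibrated spatio-temporal sensitivities of the metric described in the abstract.

```python
# Sketch of a DCT-domain quality measure: per-coefficient error is divided by an
# (illustrative) visibility threshold and pooled; the calibrated spatio-temporal
# sensitivities of the actual metric are not reproduced here.
import numpy as np

def dct_matrix(n=8):
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

C = dct_matrix()
# Placeholder visibility thresholds: high-frequency errors are assumed less visible.
i, j = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
THRESH = 2.0 + 1.5 * (i + j)

def perceptual_error(ref_block, test_block, beta=4.0):
    """Minkowski pooling of threshold-scaled DCT errors for one 8x8 block."""
    err = C @ (test_block - ref_block) @ C.T
    return (np.abs(err / THRESH) ** beta).sum() ** (1.0 / beta)

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, (8, 8)).astype(float)
print("identical blocks  :", perceptual_error(ref, ref))
print("+/-2 coding noise :", perceptual_error(ref, ref + rng.integers(-2, 3, (8, 8))))
```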
Review of passive-blind detection in digital video forgery based on sensing and imaging techniques
NASA Astrophysics Data System (ADS)
Tao, Junjie; Jia, Lili; You, Ying
2016-01-01
Advances in digital video compression and IP communication technologies have raised new issues and challenges concerning the integrity and authenticity of surveillance videos. It is important that the system ensure that, once recorded, the video cannot be altered, so that the audit trail is intact for evidential purposes. This paper gives an overview of passive techniques of digital video forensics, which are based on intrinsic fingerprints inherent in digital surveillance videos. In this paper, we performed a thorough survey of the literature relevant to video manipulation detection methods which accomplish blind authentication without referring to any auxiliary information. We present a review of various existing methods in the literature, and much more work needs to be done in this field of video forensics based on video data analysis and observation of surveillance systems.
Quality and noise measurements in mobile phone video capture
NASA Astrophysics Data System (ADS)
Petrescu, Doina; Pincenti, John
2011-02-01
The quality of videos captured with mobile phones has become increasingly important particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full reference quality metrics between encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low light additive noise model, ISP color processing, as well as the video encoder. Our experiments show that in low light conditions and for certain choices of color processing the system level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
A new method for digital video documentation in surgical procedures and minimally invasive surgery.
Wurnig, P N; Hollaus, P H; Wurnig, C H; Wolf, R K; Ohtsuka, T; Pridun, N S
2003-02-01
Documentation of surgical procedures is limited to the accuracy of description, which depends on the vocabulary and the descriptive prowess of the surgeon. Even analog video recording could not solve the problem of documentation satisfactorily due to the abundance of recorded material. By capturing the video digitally, most problems are solved in the circumstances described in this article. We developed a cheap and useful digital video capturing system that consists of conventional computer components. Video images and clips can be captured intraoperatively and are immediately available. The system is a commercial personal computer specially configured for digital video capturing and is connected by wire to the video tower. Filming was done with a conventional endoscopic video camera. A total of 65 open and endoscopic procedures were documented in an orthopedic and a thoracic surgery unit. The median number of clips per surgical procedure was 6 (range, 1-17), and the median storage volume was 49 MB (range, 3-360 MB) in compressed form. The median duration of a video clip was 4 min 25 s (range, 45 s to 21 min). Median time for editing a video clip was 12 min for an advanced user (including cutting, title for the movie, and compression). The quality of the clips renders them suitable for presentations. This digital video documentation system allows easy capturing of intraoperative video sequences in high quality. All possibilities of documentation can be performed. With the use of an endoscopic video camera, no compromises with respect to sterility and surgical elbowroom are necessary. The cost is much lower than commercially available systems, and setting changes can be performed easily without trained specialists.
Compressive Video Acquisition, Fusion and Processing
2010-12-14
architecture that comprises an optical computer (comprising a digital micromirror device, two lenses, a single photon detector, and an analog-to-digital (A/D...of the missing video frames with reasonable accuracy. Also, the similar nature of the four curves suggests that the actual values of (Ωx,Ωh,Γ) are not
Using Compressed Video To Coach/Mentor Distant Teacher Interns.
ERIC Educational Resources Information Center
Hakes, Barbara; And Others
Wyoming, a rural state with a small population scattered over vast geographic areas, brought a compressed digital video network online to connect the University of Wyoming and the State's seven community colleges. The College of Education at the University received a grant to develop a coaching/mentoring model for teacher interns over distance.…
Digital codec for real-time processing of broadcast quality video signals at 1.8 bits/pixel
NASA Technical Reports Server (NTRS)
Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.
1989-01-01
The authors present the hardware implementation of a digital television bandwidth compression algorithm which processes standard NTSC (National Television Systems Committee) composite color television signals and produces broadcast-quality video in real time at an average of 1.8 b/pixel. The sampling rate used with this algorithm results in 768 samples over the active portion of each video line by 512 active video lines per video frame. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a nonadaptive predictor, nonuniform quantizer, and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The nonadaptive predictor and multilevel Huffman coder combine to set this technique apart from prior-art DPCM encoding algorithms. The authors describe the data compression algorithm and the hardware implementation of the codec and provide performance results.
NASA Astrophysics Data System (ADS)
Stewart, Brent K.; Carter, Stephen J.; Langer, Steven G.; Andrew, Rex K.
1998-06-01
Experiments using NASA's Advanced Communications Technology Satellite were conducted to provide an estimate of the compressed video quality required for preservation of clinically relevant features for the detection of trauma. Bandwidth rates of 128, 256 and 384 kbps were used. A five-point Likert scale (1 equals no useful information and 5 equals good diagnostic quality) was used for a subjective preference questionnaire to evaluate the quality of the compressed ultrasound imagery at the three compression rates for several anatomical regions of interest. At 384 kbps the Likert scores (mean plus or minus SD) were abdomen (4.45 plus or minus 0.71), carotid artery (4.70 plus or minus 0.36), kidney (5.0 plus or minus 0.0), liver (4.67 plus or minus 0.58) and thyroid (4.03 plus or minus 0.74). Due to the volatile nature of the H.320 compressed digital video stream, no statistically significant results can be derived through this methodology. As the MPEG standard has at its roots many of the same intraframe and motion vector compression algorithms as the H.261 (such as that used in the previous ACTS/AMT experiments), we are using the MPEG compressed video sequences to best gauge what minimum bandwidths are necessary for preservation of clinically relevant features for the detection of trauma. We have been using an MPEG codec board to collect losslessly compressed video clips from high quality S-VHS tapes and through direct digitization of S-video. Due to the large number of videoclips and questions to be presented to the radiologists, and for ease of application, we have developed a web browser interface for this video visual perception study. Due to the large number of observations required to reach statistical significance in most ROC studies, Kappa statistical analysis is used to analyze the degree of agreement between observers and between viewing assessments. If the degree of agreement amongst readers is high, then there is a possibility that the ratings (i.e., average Likert score at each bandwidth) do in fact reflect the dimension they are purported to reflect (video quality versus bandwidth). It is then possible to make an intelligent choice of bandwidth for streaming compressed video and compressed videoclips.
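A small sketch of the agreement analysis mentioned above, using Cohen's kappa on made-up Likert ratings from two hypothetical readers (scikit-learn's cohen_kappa_score is assumed available; it is not the tool named in the abstract).

```python
# Sketch: inter-observer agreement on Likert scores via Cohen's kappa, the type of
# analysis the study describes; the ratings below are made-up illustrative data.
from sklearn.metrics import cohen_kappa_score

reader_a = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
reader_b = [5, 4, 3, 3, 5, 2, 4, 4, 3, 4]

# Weighted kappa credits near-misses on an ordinal (Likert) scale.
print("unweighted kappa:", round(cohen_kappa_score(reader_a, reader_b), 3))
print("quadratic kappa :", round(cohen_kappa_score(reader_a, reader_b, weights="quadratic"), 3))
```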
VLSI-based video event triggering for image data compression
NASA Astrophysics Data System (ADS)
Williams, Glenn L.
1994-02-01
Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
VLSI-based Video Event Triggering for Image Data Compression
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
1994-01-01
Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
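A software sketch of the triggering scheme described in the two abstracts above: a ring buffer holds pre-trigger frames, a slow running average tracks long-term (DC-like) drift, and a frame difference tracks short-term (AC-like) events. Thresholds, buffer sizes, and the decision rule are illustrative, not the values of the VLSI design.

```python
# Sketch of the triggering idea: a ring buffer keeps recent frames, a slow running
# average tracks DC-like drift and a frame difference tracks AC-like events;
# thresholds and buffer sizes are illustrative, not NASA's design values.
from collections import deque
import numpy as np

class EventTrigger:
    def __init__(self, pre_frames=8, dc_alpha=0.01, ac_thresh=12.0, dc_thresh=6.0):
        self.pre = deque(maxlen=pre_frames)   # pre-trigger storage
        self.background = None                # long-term (DC-like) estimate
        self.dc_alpha, self.ac_thresh, self.dc_thresh = dc_alpha, ac_thresh, dc_thresh

    def push(self, frame):
        frame = frame.astype(float)
        if self.background is None:
            self.background = frame.copy()
        ac_change = np.abs(frame - self.pre[-1]).mean() if self.pre else 0.0
        dc_change = np.abs(frame - self.background).mean()
        self.background += self.dc_alpha * (frame - self.background)
        self.pre.append(frame)
        return ac_change > self.ac_thresh or dc_change > self.dc_thresh

rng = np.random.default_rng(3)
trigger = EventTrigger()
for t in range(30):
    frame = rng.normal(128, 2, (64, 64))
    if t == 20:
        frame[16:48, 16:48] += 60             # an object appears
    if trigger.push(frame):
        print("trigger at frame", t)          # archive pre- and post-trigger frames
```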
Dynamic code block size for JPEG 2000
NASA Astrophysics Data System (ADS)
Tsai, Ping-Sing; LeCornec, Yann
2008-02-01
Since the standardization of the JPEG 2000, it has found its way into many different applications such as DICOM (digital imaging and communication in medicine), satellite photography, military surveillance, digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical high quality real-time compression possible even in video mode, i.e. motion JPEG 2000. In this paper, we present a study of the compression impact using dynamic code block size instead of fixed code block size as specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also unveil the advantages of using dynamic code block sizes.
NASA Astrophysics Data System (ADS)
Aghamaleki, Javad Abbasi; Behrad, Alireza
2018-01-01
Double compression detection is a crucial stage in digital image and video forensics. However, the detection of double compressed videos is challenging when the video forger uses the same quantization matrix and a synchronized group of pictures (GOP) structure during recompression to conceal tampering effects. A passive approach is proposed for detecting double compressed MPEG videos with the same quantization matrix and synchronized GOP structure. To devise the proposed algorithm, the effects of recompression on P frames are mathematically studied. Then, based on the obtained guidelines, a feature vector is proposed to detect double compressed frames at the GOP level. Subsequently, sparse representations of the feature vectors are used to reduce dimensionality and enrich the traces of recompression. Finally, a support vector machine classifier is employed to detect and localize double compression in the temporal domain. The experimental results show that the proposed algorithm achieves an accuracy of more than 95%. In addition, comparisons of the results of the proposed method with those of other methods reveal the efficiency of the proposed algorithm.
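Only the final classification stage of such a detector is sketched below; the paper's recompression features and sparse-representation step are not reproduced, and random vectors stand in for the GOP-level features.

```python
# Sketch of the final classification stage only: GOP-level feature vectors (the
# paper's recompression features are not reproduced; random vectors stand in for
# them) are fed to a support vector machine as in the proposed method.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n, dim = 400, 16
single = rng.normal(0.0, 1.0, (n, dim))          # stand-in: single-compression GOPs
double = rng.normal(0.6, 1.0, (n, dim))          # stand-in: double-compression GOPs
X = np.vstack([single, double])
y = np.array([0] * n + [1] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("GOP-level detection accuracy:", round(clf.score(X_te, y_te), 3))
```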
First Digit Law and Its Application to Digital Forensics
NASA Astrophysics Data System (ADS)
Shi, Yun Q.
Digital data forensics, which gathers evidence of data composition, origin, and history, is crucial in our digital world. Although this new research field is still in its infancy, it has started to attract increasing attention from the multimedia-security research community. This lecture addresses the first digit law and its applications to digital forensics. First, the Benford and generalized Benford laws, referred to as the first digit law, are introduced. Then, the application of the first digit law to detection of the JPEG compression history of a given BMP image and to detection of double JPEG compression is presented. Finally, applying the first digit law to detection of double MPEG video compression is discussed. It is expected that the first digit law may play an active role in other tasks of digital forensics. The lesson learned is that statistical models play an important role in digital forensics, and for a specific forensic task different models may provide different performance.
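A sketch of the first-digit check that underlies these detectors: compare the leading-digit histogram of block-DCT coefficient magnitudes with the Benford distribution log10(1 + 1/d). The input here is synthetic random data just to exercise the code; forensic use applies it to real images and video frames.

```python
# Sketch: leading-digit histogram of block-DCT coefficient magnitudes versus the
# Benford distribution log10(1 + 1/d); deviations from this law are what the
# JPEG/MPEG forensic detectors look for.
import numpy as np

def dct_matrix(n=8):
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def first_digit_histogram(image):
    C, digits = dct_matrix(), np.zeros(9)
    for r in range(0, image.shape[0], 8):
        for c in range(0, image.shape[1], 8):
            coeffs = C @ image[r:r+8, c:c+8] @ C.T
            for v in np.abs(coeffs).ravel():
                if v >= 1.0:                    # ignore near-zero coefficients
                    digits[int(str(v)[0]) - 1] += 1
    return digits / digits.sum()

rng = np.random.default_rng(5)
image = rng.integers(0, 256, (64, 64)).astype(float)
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))
print("observed:", np.round(first_digit_histogram(image), 3))
print("benford :", np.round(benford, 3))
```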
Potential digitization/compression techniques for Shuttle video
NASA Technical Reports Server (NTRS)
Habibi, A.; Batson, B. H.
1978-01-01
The Space Shuttle initially will be using a field-sequential color television system but it is possible that an NTSC color TV system may be used for future missions. In addition to downlink color TV transmission via analog FM links, the Shuttle will use a high resolution slow-scan monochrome system for uplink transmission of text and graphics information. This paper discusses the characteristics of the Shuttle video systems, and evaluates digitization and/or bandwidth compression techniques for the various links. The more attractive techniques for the downlink video are based on a two-dimensional DPCM encoder that utilizes temporal and spectral as well as the spatial correlation of the color TV imagery. An appropriate technique for distortion-free coding of the uplink system utilizes two-dimensional HCK codes.
NASA Astrophysics Data System (ADS)
Bartolini, Franco; Pasquini, Cristina; Piva, Alessandro
2001-04-01
The recent development of video compression algorithms has allowed the diffusion of systems for the transmission of video sequences over data networks. However, transmission over error-prone mobile communication channels is still an open issue. In this paper, a system developed for the real-time transmission of H.263 coded video sequences over TETRA mobile networks is presented. TETRA is an open digital trunked radio standard defined by the European Telecommunications Standards Institute, developed for professional mobile radio users and providing full integration of voice and data services. Experimental tests demonstrate that, in spite of the low frame rate allowed by the software-only implementation of the decoder and by the low channel rate, a video compression technique such as that complying with the H.263 standard is still preferable to a simpler but less effective frame-based compression system.
Selective encryption for H.264/AVC video coding
NASA Astrophysics Data System (ADS)
Shi, Tuo; King, Brian; Salama, Paul
2006-02-01
Due to the ease with which digital data can be manipulated and due to the ongoing advancements that have brought us closer to pervasive computing, the secure delivery of video and images has become a challenging problem. Despite the advantages and opportunities that digital video provides, illegal copying and distribution as well as plagiarism of digital audio, images, and video is still ongoing. In this paper we describe two techniques for securing H.264 coded video streams. The first technique, SEH264Algorithm1, groups the data into the following blocks: (1) a block that contains the sequence parameter set and the picture parameter set, (2) a block containing a compressed intra coded frame, (3) a block containing the slice header of a P slice, all the headers of the macroblocks within the same P slice, and all the luma and chroma DC coefficients belonging to all the macroblocks within the same slice, (4) a block containing all the AC coefficients, and (5) a block containing all the motion vectors. The first three are encrypted whereas the last two are not. The second method, SEH264Algorithm2, relies on the use of multiple slices per coded frame. The algorithm searches the compressed video sequence for start codes (0x000001) and then encrypts the next N bits of data.
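The start-code scan of SEH264Algorithm2 can be sketched as below. The XOR keystream is purely a placeholder so the example runs; the abstract does not specify the cipher, and a real system would use a vetted algorithm (e.g., AES in counter mode) at that point, with care taken not to emulate new start codes.

```python
# Sketch of the start-code scan described above: find each 0x000001 prefix and
# transform the next N bytes. The XOR keystream is purely a placeholder so the
# example runs; it is not a vetted cipher.
import hashlib

def keystream(key, nonce, length):
    """Toy keystream from SHA-256 blocks -- placeholder only, not a real cipher."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:length])

def selectively_encrypt(stream, key, n_bytes=16):
    data = bytearray(stream)
    pos = data.find(b"\x00\x00\x01")
    while pos != -1:
        start = pos + 3                              # bytes following the start code
        ks = keystream(key, pos.to_bytes(4, "big"), n_bytes)
        for i, k in enumerate(ks):
            if start + i < len(data):
                data[start + i] ^= k
        pos = data.find(b"\x00\x00\x01", pos + 3)
    return bytes(data)

sample = b"\x00\x00\x01\x67header-bytes\x00\x00\x01\x65slice-payload-bytes"
print(selectively_encrypt(sample, b"secret-key").hex())
```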
A study of data coding technology developments in the 1980-1985 time frame, volume 2
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Shahsavari, M. M.
1978-01-01
The source parameters of digitized analog data are discussed. Different data compression schemes are outlined and analyses of their implementation are presented. Finally, bandwidth compression techniques are given for video signals.
Digital storage and analysis of color Doppler echocardiograms
NASA Technical Reports Server (NTRS)
Chandra, S.; Thomas, J. D.
1997-01-01
Color Doppler flow mapping has played an important role in clinical echocardiography. Most of the clinical work, however, has been primarily qualitative. Although qualitative information is very valuable, there is considerable quantitative information stored within the velocity map that has not been extensively exploited so far. Recently, many researchers have shown interest in using the encoded velocities to address clinical problems such as quantification of valvular regurgitation, calculation of cardiac output, and characterization of ventricular filling. In this article, we review some basic physics and engineering aspects of color Doppler echocardiography, as well as drawbacks of trying to retrieve velocities from videotape data. Digital storage, which plays a critical role in performing quantitative analysis, is discussed in some detail with special attention to velocity encoding in DICOM 3.0 (the medical image storage standard) and the use of digital compression. Lossy compression can considerably reduce file size with minimal loss of information (mostly redundant); this is critical for digital storage because of the enormous amount of data generated (a 10-minute study could require 18 gigabytes of storage capacity). Lossy JPEG compression and its impact on quantitative analysis have been studied, showing that images compressed at 27:1 using the JPEG algorithm compare favorably with directly digitized video images, the current gold standard. Some potential applications of these velocities in analyzing the proximal convergence zones and mitral inflow, and some areas of future development, are also discussed in the article.
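An order-of-magnitude check of the storage figure quoted above, under assumed capture parameters (640 x 480 frames, 24-bit color, 30 frames/s) that are not stated in the article:

```python
# Order-of-magnitude check of the storage figure quoted above, under assumed
# capture parameters (640x480, 24-bit color, 30 frames/s -- not stated in the
# abstract) for a 10-minute uncompressed color Doppler study.
frame_bytes = 640 * 480 * 3
seconds = 10 * 60
uncompressed = frame_bytes * 30 * seconds
print(f"uncompressed study : {uncompressed / 1e9:.1f} GB")
print(f"after 27:1 JPEG    : {uncompressed / 27 / 1e9:.2f} GB")
```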
Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video
NASA Astrophysics Data System (ADS)
Yeo, Boon-Lock; Liu, Bede
1996-03-01
Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.
Indexing and retrieval of MPEG compressed video
NASA Astrophysics Data System (ADS)
Kobla, Vikrant; Doermann, David S.
1998-04-01
To keep pace with the increased popularity of digital video as an archival medium, the development of techniques for fast and efficient analysis of video streams is essential. In particular, solutions to the problems of storing, indexing, browsing, and retrieving video data from large multimedia databases are necessary to allow access to these collections. Given that video is often stored efficiently in a compressed format, the costly overhead of decompression can be reduced by analyzing the compressed representation directly. In earlier work, we presented compressed-domain parsing techniques which identified shots, subshots, and scenes. In this article, we present efficient key frame selection, feature extraction, indexing, and retrieval techniques that are directly applicable to MPEG compressed video. We develop a frame-type independent representation which normalizes spatial and temporal features including frame type, frame size, macroblock encoding, and motion compensation vectors. Features for indexing are derived directly from this representation and mapped to a low-dimensional space where they can be accessed using standard database techniques. Spatial information is used as the primary index into the database, and temporal information is used to rank retrieved clips and enhance the robustness of the system. The techniques presented enable efficient indexing, querying, and retrieval of compressed video, as demonstrated by our system, which typically takes a fraction of a second to retrieve similar video scenes from a database, with over 95 percent recall.
Objective assessment of MPEG-2 video quality
NASA Astrophysics Data System (ADS)
Gastaldo, Paolo; Zunino, Rodolfo; Rovetta, Stefano
2002-07-01
The increasing use of video compression standards in broadcast television systems has required, in recent years, the development of video quality measurements that take into account artifacts specifically caused by digital compression techniques. In this paper we present a methodology for the objective quality assessment of MPEG video streams by using circular back-propagation feedforward neural networks. Mapping neural networks can render nonlinear relationships between objective features and subjective judgments, thus avoiding any simplifying assumption on the complexity of the model. The neural network processes an instantaneous set of input values and yields an associated estimate of perceived quality. Therefore, the neural-network approach turns objective quality assessment into adaptive modeling of subjective perception. The objective features used for the estimate are chosen according to their assessed relevance to perceived quality and are continuously extracted in real time from compressed video streams. The overall system mimics perception but does not require any analytical model of the underlying physical phenomenon. The capability to process compressed video streams represents an important advantage over existing approaches, since avoiding the stream-decoding process greatly enhances real-time performance. Experimental results confirm that the system provides satisfactory, continuous-time approximations of actual scoring curves for real test videos.
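The modeling idea can be sketched with a small feedforward regressor mapping objective stream features to a subjective score. A standard scikit-learn MLP stands in for the circular back-propagation architecture of the paper, and both the features and the scores below are synthetic.

```python
# Sketch of the modeling idea only: a small feedforward regressor mapping objective
# stream features to a subjective score; the circular back-propagation architecture
# of the paper is replaced here by a standard MLP, and the data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
n = 500
# Stand-in objective features extracted from a compressed stream: bitrate,
# quantizer scale, motion activity, blockiness estimate.
features = rng.uniform(0.0, 1.0, (n, 4))
# Synthetic "subjective" score: nonlinear in the features, plus rater noise.
scores = (5.0 * np.tanh(1.5 * features[:, 0] - features[:, 1])
          - features[:, 3] + rng.normal(0, 0.2, n))

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(features[:400], scores[:400])
print("held-out R^2:", round(model.score(features[400:], scores[400:]), 3))
```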
Innovative Video Diagnostic Equipment for Material Science
NASA Technical Reports Server (NTRS)
Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.
2012-01-01
Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced wavelet algorithms, storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras, acquiring high resolution video images up to 4 Mpixels @ 60 fps or high frame rate video images up to about 1000 fps @ 512x512 pixels.
Storage, retrieval, and edit of digital video using Motion JPEG
NASA Astrophysics Data System (ADS)
Sudharsanan, Subramania I.; Lee, D. H.
1994-04-01
In a companion paper we describe a Micro Channel adapter card that can perform real-time JPEG (Joint Photographic Experts Group) compression of a 640 by 480 24-bit image within 1/30th of a second. Since this corresponds to NTSC video rates at considerably good perceptual quality, this system can be used for real-time capture and manipulation of continuously fed video. To facilitate capturing the compressed video in a storage medium, an IBM Bus master SCSI adapter with cache is utilized. Efficacy of the data transfer mechanism is considerably improved using the System Control Block architecture, an extension to Micro Channel bus masters. Experimental results show that the overall system can sustain compressed data rates of about 1.5 MBytes/second, with sporadic peaks of about 1.8 MBytes/second depending on the image sequence content. We also describe mechanisms to access the compressed data very efficiently through special file formats. This in turn permits creation of simpler sequence editors. Another advantage of the special file format is easy control of forward, backward and slow motion playback. The proposed method can be extended for design of a video compression subsystem for a variety of personal computing systems.
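The special file format is not specified in the abstract; the sketch below illustrates the general idea with an invented container (the "MJIX" magic and layout are hypothetical) that stores a byte-offset table so forward, backward and slow-motion playback reduce to simple seeks.

```python
# Sketch of an indexed Motion JPEG container: header, frame-offset table, then frames.
import struct

def write_indexed_mjpeg(path, jpeg_frames):
    offsets = []
    with open(path, "wb") as f:
        f.write(b"MJIX")
        f.write(struct.pack("<I", len(jpeg_frames)))
        table_pos = f.tell()
        f.write(b"\x00" * 8 * len(jpeg_frames))       # reserve space for the offset table
        for frame in jpeg_frames:
            offsets.append(f.tell())
            f.write(frame)
        f.seek(table_pos)
        for off in offsets:                           # back-fill the offset table
            f.write(struct.pack("<Q", off))

def read_frame(path, n):
    """Random access to frame n; backward play is just read_frame(path, n), n-1, ..."""
    with open(path, "rb") as f:
        assert f.read(4) == b"MJIX"
        count, = struct.unpack("<I", f.read(4))
        f.seek(8 + 8 * n)
        off, = struct.unpack("<Q", f.read(8))
        end = struct.unpack("<Q", f.read(8))[0] if n + 1 < count else None
        f.seek(off)
        return f.read((end - off) if end else -1)
```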
Development of a microportable imaging system for otoscopy and nasoendoscopy evaluations.
VanLue, Michael; Cox, Kenneth M; Wade, James M; Tapp, Kevin; Linville, Raymond; Cosmato, Charlie; Smith, Tom
2007-03-01
Imaging systems for patients with cleft palate typically are not portable, but are essential to obtain an audiovisual record of nasoendoscopy and otoscopy procedures. Practitioners who evaluate patients in rural, remote, or otherwise medically underserved areas are expected to obtain audiovisual recordings of these procedures as part of standard clinical practice. Therefore, patients must travel substantial distances to medical facilities that have standard recording equipment. This project describes the specific components, strengths and weaknesses of an MPEG-4 digital recording system for otoscopy/nasoendoscopy evaluation of patients with cleft palate that is both portable and compatible with store-and-forward telemedicine applications. Three digital recording configurations (TabletPC, handheld digital video recorder, and an 8-mm digital camcorder) were used to record the audio/video signal from an analog video scope system. The handheld digital video recorder was most effective at capturing audio/video and displaying procedures in real time. The system described was particularly easy to use, because it required no postrecording file capture or compression for later review, transfer, and/or archiving. The handheld digital recording system was assembled from commercially available components. The portability and the telemedicine compatibility of the handheld digital video recorder offer a viable solution for the documentation of nasoendoscopy and otoscopy procedures in remote, rural, or other locations where reduced medical access precludes the use of larger component audio/video systems.
Two-way digital communications
NASA Astrophysics Data System (ADS)
Glenn, William E.; Daly, Ed
1996-03-01
The communications industry has been rapidly converting from analog to digital communications for audio, video, and data. The initial applications have been concentrating on point-to-multipoint transmission. Currently, a new revolution is occurring in which two-way point-to-point transmission is a rapidly growing market. The system designs for video compression developed for point-to-multipoint transmission are unsuitable for this new market as well as for satellite based video encoding. A new system developed by the Space Communications Technology Center has been designed to address both of these newer applications. An update on the system performance and design will be given.
NASA Technical Reports Server (NTRS)
Scott, D. W.
1994-01-01
This report describes efforts to use digital motion video compression technology to develop a highly portable device that would convert 1990-91 era IBM-compatible and/or Macintosh notebook computers into full-color, motion-video capable multimedia training systems. An architecture was conceived that would permit direct conversion of existing laser-disk-based multimedia courses with little or no reauthoring. The project did not physically demonstrate certain critical video keying techniques, but their implementation should be feasible. This investigation of digital motion video has spawned two significant spaceflight projects at MSFC: one to downlink multiple high-quality video signals from Spacelab, and the other to uplink videoconference-quality video in real time and high quality video off-line, plus investigate interactive, multimedia-based techniques for enhancing onboard science operations. Other airborne or spaceborne spinoffs are possible.
Gradual cut detection using low-level vision for digital video
NASA Astrophysics Data System (ADS)
Lee, Jae-Hyun; Choi, Yeun-Sung; Jang, Ok-bae
1996-09-01
Digital video computing and organization is an important issue in multimedia systems, signal compression, and databases. Video should be segmented into shots to be used for identification and indexing. This approach requires a suitable method to automatically locate cut points in order to separate the shots in a video. Automatic cut detection to isolate shots in a video has received considerable attention due to its many practical applications, such as video databases, browsing, authoring systems, retrieval, and movies. Previous studies are based on a set of difference mechanisms that measure the content changes between video frames, but they cannot reliably detect gradual transitions such as dissolve, wipe, fade-in, fade-out, and structured flashing. In this paper, a new cut detection method for gradual transitions based on computer vision techniques is proposed. Experimental results applied to commercial video are then presented and evaluated.
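For contrast with the proposed vision-based method, the sketch below shows the classical baseline it improves on: frame-to-frame histogram differences with a twin-comparison style accumulation for gradual transitions; the thresholds are illustrative.

```python
# Baseline cut detection (not the paper's method): hard cuts from large histogram
# differences, gradual transitions from accumulated moderate differences.
import numpy as np

def hist_diff(a, b, bins=64):
    ha, _ = np.histogram(a, bins=bins, range=(0, 255))
    hb, _ = np.histogram(b, bins=bins, range=(0, 255))
    return np.abs(ha - hb).sum() / a.size

def detect_transitions(frames, t_high=0.5, t_low=0.1):
    cuts, graduals, start, acc = [], [], None, 0.0
    for i in range(1, len(frames)):
        d = hist_diff(frames[i - 1], frames[i])
        if d >= t_high:                      # abrupt cut
            cuts.append(i)
            start, acc = None, 0.0
        elif d >= t_low:                     # possible gradual transition in progress
            if start is None:
                start, acc = i, 0.0
            acc += d
            if acc >= t_high:
                graduals.append((start, i))
                start, acc = None, 0.0
        else:
            start, acc = None, 0.0
    return cuts, graduals
```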
Digital Signal Processing For Low Bit Rate TV Image Codecs
NASA Astrophysics Data System (ADS)
Rao, K. R.
1987-06-01
In view of the 56 KBPS digital switched network services and the ISDN, low bit rate codecs for providing real time full motion color video are under various stages of development. Some companies have already brought the codecs into the market. They are being used by industry and some Federal Agencies for video teleconferencing. In general, these codecs have various features such as multiplexing audio and data, high resolution graphics, encryption, error detection and correction, self diagnostics, freeze-frame, split video, text overlay etc. To transmit the original color video on a 56 KBPS network requires bit rate reduction of the order of 1400:1. Such a large scale bandwidth compression can be realized only by implementing a number of sophisticated digital signal processing techniques. This paper provides an overview of such techniques and outlines the newer concepts that are being investigated. Before resorting to the data compression techniques, various preprocessing operations such as noise filtering, composite-component transformation and horizontal and vertical blanking interval removal are to be implemented. Invariably spatio-temporal subsampling is achieved by appropriate filtering. Transform and/or prediction coupled with motion estimation and strengthened by adaptive features are some of the tools in the arsenal of the data reduction methods. Other essential blocks in the system are quantizer, bit allocation, buffer, multiplexer, channel coding etc.
Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction
Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin
2016-01-01
High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems. PMID:27814367
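A minimal sketch of the ROI-driven quantization idea follows, assuming a binary ROI mask and arbitrary example QP offsets; the paper's actual transform-coefficient adjustment and hierarchical QP selection process are not reproduced.

```python
# Assign a per-CTU quantization parameter: finer QP inside the diagnostic ROI,
# coarser QP elsewhere.  Base QP and offsets are example values only.
import numpy as np

def ctu_qp_map(roi_mask, ctu=64, base_qp=32, roi_offset=-6, bg_offset=+4):
    h, w = roi_mask.shape
    rows, cols = -(-h // ctu), -(-w // ctu)          # ceiling division
    qp = np.empty((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            block = roi_mask[r * ctu:(r + 1) * ctu, c * ctu:(c + 1) * ctu]
            qp[r, c] = base_qp + (roi_offset if block.any() else bg_offset)
    return qp                                        # per-CTU QPs handed to the encoder

mask = np.zeros((480, 640), dtype=bool)
mask[150:300, 200:400] = True                        # hypothetical diagnostic ROI
print(ctu_qp_map(mask))
```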
Compression of stereoscopic video using MPEG-2
NASA Astrophysics Data System (ADS)
Puri, Atul; Kollarits, Richard V.; Haskell, Barry G.
1995-12-01
Many current as well as emerging applications in areas of entertainment, remote operations, manufacturing industry and medicine can benefit from the depth perception offered by stereoscopic video systems which employ two views of a scene imaged under the constraints imposed by the human visual system. Among the many challenges to be overcome for practical realization and widespread use of 3D/stereoscopic systems are good 3D displays and efficient techniques for digital compression of enormous amounts of data while maintaining compatibility with normal video decoding and display systems. After a brief introduction to the basics of 3D/stereo including issues of depth perception, stereoscopic 3D displays and terminology in stereoscopic imaging and display, we present an overview of tools in the MPEG-2 video standard that are relevant to our discussion on compression of stereoscopic video, which is the main topic of this paper. Next, we outline the various approaches for compression of stereoscopic video and then focus on compatible stereoscopic video coding using MPEG-2 Temporal scalability concepts. Compatible coding employing two different types of prediction structures becomes potentially possible: disparity compensated prediction and combined disparity and motion compensated prediction. To further improve coding performance and display quality, preprocessing for reducing mismatch between the two views forming stereoscopic video is considered. Results of simulations performed on stereoscopic video of normal TV resolution are then reported, comparing the performance of the two prediction structures with the simulcast solution. It is found that combined disparity and motion compensated prediction offers the best performance. Results indicate that compression of both views of stereoscopic video of normal TV resolution appears feasible in a total of 6 to 8 Mbit/s. We then discuss multi-viewpoint video, a generalization of stereoscopic video. Finally, we describe ongoing efforts within MPEG-2 to define a profile for stereoscopic video coding, as well as the promise of MPEG-4 in addressing coding of multi-viewpoint video.
Bandwidth compression of color video signals. Ph.D. Thesis Final Report, 1 Oct. 1979 - 30 Sep. 1980
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1980-01-01
The different encoder/decoder strategies to digitally encode video using adaptive delta modulation are described. The techniques employed are: (1) separately encoding the R, G, and B components; (2) separately encoding the I, Y, and Q components; and (3) encoding the picture in a line sequential manner.
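A generic adaptive delta modulation sketch follows, using one common step-size adaptation rule (grow the step on consecutive equal bits, shrink it otherwise); it is not necessarily the rule used in the report, and in the strategies above it would be applied independently to each colour component or line.

```python
# One-bit-per-sample adaptive delta modulation; encoder and decoder share the
# same step adaptation so the decoder tracks the encoder's estimate exactly.
import numpy as np

def adm_encode(x, step0=1.0, k=1.5, step_min=0.25, step_max=64.0):
    est, step, prev_bit, bits = 0.0, step0, 1, []
    for sample in x:
        bit = 1 if sample >= est else 0
        step = min(step * k, step_max) if bit == prev_bit else max(step / k, step_min)
        est += step if bit else -step
        bits.append(bit)
        prev_bit = bit
    return bits

def adm_decode(bits, step0=1.0, k=1.5, step_min=0.25, step_max=64.0):
    est, step, prev_bit, out = 0.0, step0, 1, []
    for bit in bits:
        step = min(step * k, step_max) if bit == prev_bit else max(step / k, step_min)
        est += step if bit else -step
        out.append(est)
        prev_bit = bit
    return np.array(out)

signal = 100 * np.sin(np.linspace(0, 6.28, 400))
print(np.abs(adm_decode(adm_encode(signal)) - signal).mean())   # tracking error
```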
NASA Astrophysics Data System (ADS)
Kuehl, C. Stephen
1996-06-01
Video signal system performance can be compromised in a military aircraft cockpit management system (CMS) with the tailoring of vintage Electronics Industries Association (EIA) RS170 and RS343A video interface standards. Video analog interfaces degrade when induced system noise is present. Further signal degradation has been traditionally associated with signal data conversions between avionics sensor outputs and the cockpit display system. If the CMS engineering process is not carefully applied during the avionics video and computing architecture development, extensive and costly redesign will occur when visual sensor technology upgrades are incorporated. Close monitoring and technical involvement in video standards groups provides the knowledge base necessary for avionic systems engineering organizations to architect adaptable and extendible cockpit management systems. With the Federal Communications Commission (FCC) in the process of adopting the Digital HDTV Grand Alliance System standard proposed by the Advanced Television Systems Committee (ATSC), the entertainment and telecommunications industries are adopting and supporting the emergence of new serial/parallel digital video interfaces and data compression standards that will drastically alter present NTSC-M video processing architectures. The re-engineering of the U.S. broadcasting system must initially preserve the electronic equipment wiring networks within broadcast facilities to make the transition to HDTV affordable. International committee activities in technical forums like ITU-R (former CCIR), ANSI/SMPTE, IEEE, and ISO/IEC are establishing global consensus on video signal parameterizations that support a smooth transition from existing analog based broadcasting facilities to fully digital computerized systems. An opportunity exists for implementing these new video interface standards over existing video coax/triax cabling in military aircraft cockpit management systems. Reductions in signal conversion processing steps, major improvement in video noise reduction, and an added capability to pass audio/embedded digital data within the digital video signal stream are the significant performance increases associated with the incorporation of digital video interface standards. By analyzing the historical progression of military CMS developments, establishing a systems engineering process for CMS design, tracing the commercial evolution of video signal standardization, adopting commercial video signal terminology/definitions, and comparing/contrasting CMS architecture modifications using digital video interfaces, this paper provides a technical explanation of how a systems engineering process approach to video interface standardization can result in extendible and affordable cockpit management systems.
Documentation of surgical specimens using digital video technology.
Melín-Aldana, Héctor; Carter, Barbara; Sciortino, Debra
2006-09-01
Digital technology is commonly used for documentation of specimens in anatomic pathology and has been mainly limited to still photographs. Technologic innovations, such as digital video, provide additional, in some cases better, options for documentation. To demonstrate the applicability of digital video to the documentation of surgical specimens. A Canon Elura MC40 digital camcorder was used, and the unedited movies were transferred to a Macintosh PowerBook G4 computer. Both the camcorder and specimens were hand-held during filming. The movies were edited using the software iMovie. Annotations and histologic photographs may be easily incorporated into movies when editing, if desired. The finished movies are best viewed in computers which contain the free program QuickTime Player. Movies may also be incorporated onto DVDs, for viewing in standard DVD players or appropriately equipped computers. The final movies are on average 2 minutes in duration, with a file size between 2 and 400 megabytes, depending on the intended use. Because of file size, distribution is more practical via CD or DVD, but movies may be compressed for distribution through the Internet (e-mail, Web sites) or through internal hospital networks. Digital video is a practical, easy, and affordable methodology for specimen documentation, permitting a better 3-dimensional understanding of the specimens. Discussions with colleagues, student education, presentation at conferences, and other educational activities can be enhanced with the implementation of digital video technology.
Visually lossless compression of digital hologram sequences
NASA Astrophysics Data System (ADS)
Darakis, Emmanouil; Kowiel, Marcin; Näsänen, Risto; Naughton, Thomas J.
2010-01-01
Digital hologram sequences have great potential for the recording of 3D scenes of moving macroscopic objects as their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently have large information content and lossless coding of holographic data is rather inefficient due to the speckled nature of the interference fringes they contain. Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy compression introduces errors in the reconstruction. In all of the previous studies, numerical metrics were used to measure the compression error and through it, the coding quality. Digital hologram reconstructions are highly speckled and the speckle pattern is very sensitive to data changes. Hence, numerical quality metrics can be misleading. For example, for low compression ratios, a numerically significant coding error can have visually negligible effects. Yet, in several cases, it is of high interest to know how much lossy compression can be achieved, while maintaining the reconstruction quality at visually lossless levels. Using an experimental threshold estimation method, the staircase algorithm, we determined the highest compression ratio that was not perceptible to human observers for objects compressed with Dirac and MPEG-4 compression methods. This level of compression can be regarded as the point below which compression is perceptually lossless although physically the compression is lossy. It was found that up to 4 to 7.5 fold compression can be obtained with the above methods without any perceptible change in the appearance of video sequences.
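The staircase procedure itself is simple; below is a hedged sketch of a 1-up/1-down variant with a simulated observer, where the step factor, stopping rule and threshold estimate are illustrative rather than the study's exact protocol.

```python
# Staircase threshold estimation: increase the compression ratio while the change is
# invisible, back off when it becomes visible, and estimate the threshold from reversals.
import random

def staircase(observer, ratio=2.0, step=1.5, n_reversals=8):
    """observer(ratio) returns True if the observer perceives a difference."""
    reversals, last_dir = [], None
    while len(reversals) < n_reversals:
        if observer(ratio):                 # visible -> make compression milder
            direction, ratio = "down", max(ratio / step, 1.0)
        else:                               # invisible -> compress harder
            direction, ratio = "up", ratio * step
        if last_dir and direction != last_dir:
            reversals.append(ratio)         # record the ratio at each reversal
        last_dir = direction
    return sum(reversals) / len(reversals)  # threshold estimate

# simulated observer whose true visibility threshold is a compression ratio of 6
print(staircase(lambda r: r > 6 or random.random() < 0.1))
```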
Korycki, Rafal
2014-05-01
Since the appearance of digital audio recordings, audio authentication has been becoming increasingly difficult. The currently available technologies and free editing software allow a forger to cut or paste any single word without audible artifacts. Nowadays, the only method for digital audio files commonly approved by forensic experts is the ENF criterion. It consists of fluctuation analysis of the mains frequency induced in electronic circuits of recording devices. Therefore, its effectiveness is strictly dependent on the presence of the mains signal in the recording, which is a rare occurrence. Recently, much attention has been paid to authenticity analysis of compressed multimedia files, and several solutions were proposed for detection of double compression in both digital video and digital audio. This paper addresses the problem of tampering detection in compressed audio files and discusses new methods that can be used for authenticity analysis of digital recordings. The presented approaches consist of evaluation of statistical features extracted from the MDCT coefficients as well as other parameters that may be obtained from compressed audio files. Calculated feature vectors are used for training selected machine learning algorithms. The detection of multiple compression covers both tampering activities and the identification of traces of montage in digital audio recordings. To enhance the methods' robustness, an encoder identification algorithm was developed and applied, based on analysis of inherent parameters of compression. The effectiveness of the tampering detection algorithms is tested on a predefined large music database consisting of nearly one million compressed audio files. The influence of the compression algorithms' parameters on the classification performance is discussed, based on the results of the current study.
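A rough sketch of this kind of feature pipeline is shown below: MDCT coefficients are summarised by simple per-frame statistics and fed to an off-the-shelf classifier. The frame length, statistics, classifier choice and synthetic data are assumptions, not the paper's configuration.

```python
# Toy pipeline: MDCT -> statistical features -> classifier (single vs. double compression).
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.ensemble import RandomForestClassifier

def mdct_matrix(n2):
    """MDCT basis for frames of length 2N (no windowing, for illustration)."""
    N = n2 // 2
    n, k = np.arange(n2), np.arange(N)
    return np.cos(np.pi / N * (n[None, :] + 0.5 + N / 2) * (k[:, None] + 0.5))

def features(signal, frame_len=1024):
    M = mdct_matrix(frame_len)
    frames = np.array([signal[i:i + frame_len]
                       for i in range(0, len(signal) - frame_len + 1, frame_len)])
    coeffs = frames @ M.T
    return np.array([coeffs.mean(), coeffs.std(),
                     skew(coeffs.ravel()), kurtosis(coeffs.ravel())])

# synthetic training data standing in for singly/doubly compressed recordings
rng = np.random.default_rng(1)
X = np.array([features(rng.standard_normal(22050)) for _ in range(20)])
y = rng.integers(0, 2, 20)
clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict(X[:2]))
```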
Assessing a nephrology-focused YouTube channel's potential to educate health care providers.
Desai, Tejas; Sanghani, Vivek; Fang, Xiangming; Christiano, Cynthia; Ferris, Maria
2013-01-01
YouTube has emerged as a potential teaching tool. Studies of the teaching potential of YouTube videos have not addressed health care provider (HCP) satisfaction; a necessary prerequisite for any teaching tool. We conducted a 4-month investigation to determine HCP satisfaction with a nephrology-specific YouTube channel. The Nephrology On-Demand YouTube channel was analyzed from January 1 through April 30, 2011. Sixty-minute nephrology lectures at East Carolina University were compressed into 10-minute videos and uploaded to the channel. HCPs were asked to answer a 5-point Likert questionnaire regarding the accuracy, currency, objectivity and usefulness of the digital format of the teaching videos. Means, standard deviations and 2-sided chi-square testing were performed to analyze responses. Over 80% of HCPs considered the YouTube channel to be accurate, current and objective. A similar percentage considered the digital format useful despite the compression of videos and lack of audio. The nephrology-specific YouTube channel has the potential to educate HCPs of various training backgrounds. Additional studies are required to determine if such specialty-specific channels can improve knowledge acquisition and retention.
Identifying sports videos using replay, text, and camera motion features
NASA Astrophysics Data System (ADS)
Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.
1999-12-01
Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speeds. Full decoding of selective frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.
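The classification stage can be illustrated as follows, with a decision tree over the kinds of features named above; the feature values and labels are invented placeholders, not data from the paper.

```python
# Decision tree over compressed-domain features for sports/non-sports identification.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# columns: [has_replay, text_fraction, mean_motion_magnitude, motion_variance]
X = np.array([[1, 0.15, 6.2, 9.1],    # sports
              [0, 0.40, 1.1, 0.8],    # news
              [0, 0.05, 2.3, 2.0],    # documentary
              [1, 0.10, 5.5, 7.4]])   # sports
y = np.array([1, 0, 0, 1])            # 1 = sports, 0 = non-sports

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.predict([[1, 0.12, 5.0, 6.0]]))   # expected to classify as sports
```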
Bandwidth characteristics of multimedia data traffic on a local area network
NASA Technical Reports Server (NTRS)
Chuang, Shery L.; Doubek, Sharon; Haines, Richard F.
1993-01-01
Limited spacecraft communication links call for users to investigate the potential use of video compression and multimedia technologies to optimize bandwidth allocations. The objective was to determine the transmission characteristics of multimedia data - motion video, text or bitmap graphics, and files transmitted independently and simultaneously over an ethernet local area network. Commercial desktop video teleconferencing hardware and software and Intel's proprietary Digital Video Interactive (DVI) video compression algorithm were used, and typical task scenarios were selected. The transmission time, packet size, number of packets, and network utilization of the data were recorded. Each data type - compressed motion video, text and/or bitmapped graphics, and a compressed image file - was first transmitted independently and its characteristics recorded. The results showed that an average bandwidth of 7.4 kilobits per second (kbps) was used to transmit graphics; an average bandwidth of 86.8 kbps was used to transmit an 18.9-kilobyte (kB) image file; a bandwidth of 728.9 kbps was used to transmit compressed motion video at 15 frames per second (fps); and a bandwidth of 75.9 kbps was used to transmit compressed motion video at 1.5 fps. Average packet sizes were 933 bytes for graphics, 498.5 bytes for the image file, 345.8 bytes for motion video at 15 fps, and 341.9 bytes for motion video at 1.5 fps. Simultaneous transmission of multimedia data types was also characterized. The multimedia packets used transmission bandwidths of 341.4 kbps and 105.8kbps. Bandwidth utilization varied according to the frame rate (frames per second) setting for the transmission of motion video. Packet size did not vary significantly between the data types. When these characteristics are applied to Space Station Freedom (SSF), the packet sizes fall within the maximum specified by the Consultative Committee for Space Data Systems (CCSDS). The uplink of imagery to SSF may be performed at minimal frame rates and/or within seconds of delay, depending on the user's allocated bandwidth. Further research to identify the acceptable delay interval and its impact on human performance is required. Additional studies in network performance using various video compression algorithms and integrated multimedia techniques are needed to determine the optimal design approach for utilizing SSF's data communications system.
Spatial-temporal distortion metric for in-service quality monitoring of any digital video system
NASA Astrophysics Data System (ADS)
Wolf, Stephen; Pinson, Margaret H.
1999-11-01
Many organizations have focused on developing digital video quality metrics which produce results that accurately emulate subjective responses. However, to be widely applicable a metric must also work over a wide range of quality, and be useful for in-service quality monitoring. The Institute for Telecommunication Sciences (ITS) has developed spatial-temporal distortion metrics that meet all of these requirements. These objective metrics are described in detail and have a number of interesting properties, including utilization of (1) spatial activity filters which emphasize long edges on the order of 10 arc min while simultaneously performing large amounts of noise suppression, (2) the angular direction of the spatial gradient, (3) spatial-temporal compression factors of at least 384:1 (spatial compression of at least 64:1 and temporal compression of at least 6:1), and (4) simple perceptibility thresholds and spatial-temporal masking functions. Results are presented that compare the objective metric values with mean opinion scores from a wide range of subjective data bases spanning many different scenes, systems, bit-rates, and applications.
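A simplified sketch of a gradient-based spatial activity feature in this spirit is given below; the smoothing, filter choice and perceptibility threshold are stand-ins, not the ITS specification.

```python
# Spatial activity: noise suppression, then gradient magnitude and angular direction,
# kept only where the magnitude exceeds a perceptibility threshold.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def spatial_activity(frame, threshold=8.0):
    f = gaussian_filter(frame.astype(float), sigma=1.5)   # noise suppression
    gx, gy = sobel(f, axis=1), sobel(f, axis=0)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)
    mask = mag >= threshold                               # perceptibility threshold
    return mag * mask, np.where(mask, ang, 0.0)

# a distortion feature could then compare pooled activity of reference vs. processed frames
ref = np.random.randint(0, 256, (64, 64)).astype(float)
mag, ang = spatial_activity(ref)
print(mag.mean(), ang.std())
```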
Video data compression using artificial neural network differential vector quantization
NASA Technical Reports Server (NTRS)
Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.
1991-01-01
An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential vector quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in better robustness to channel bit errors than methods that use variable length codes.
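A minimal frequency-sensitive competitive learning sketch follows; the learning-rate schedule and codebook size are illustrative, and the VLSI associative-memory aspects are of course not modelled.

```python
# Frequency-sensitive competitive learning: the winner is the codeword minimising
# distance scaled by its win count, which keeps all codewords in use.
import numpy as np

def fscl_codebook(vectors, n_codes=16, epochs=10, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    codes = vectors[rng.choice(len(vectors), n_codes, replace=False)].astype(float)
    wins = np.ones(n_codes)
    for _ in range(epochs):
        for v in vectors:
            d = np.linalg.norm(codes - v, axis=1) * wins   # frequency-sensitive distortion
            w = int(np.argmin(d))
            codes[w] += lr * (v - codes[w])                # move winner toward the input
            wins[w] += 1
    return codes

# usage: quantise 4x4 difference blocks with the trained codebook
blocks = np.random.randn(1000, 16)
cb = fscl_codebook(blocks)
indices = np.argmin(((blocks[:, None, :] - cb[None]) ** 2).sum(-1), axis=1)
print(np.bincount(indices, minlength=len(cb)))             # codeword usage
```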
Thermal imagers: from ancient analog video output to state-of-the-art video streaming
NASA Astrophysics Data System (ADS)
Haan, Hubertus; Feuchter, Timo; Münzberg, Mario; Fritze, Jörg; Schlemmer, Harry
2013-06-01
The video output of thermal imagers stayed constant over almost two decades. When the famous Common Modules were employed a thermal image at first was presented to the observer in the eye piece only. In the early 1990s TV cameras were attached and the standard output was CCIR. In the civil camera market output standards changed to digital formats a decade ago with digital video streaming being nowadays state-of-the-art. The reasons why the output technique in the thermal world stayed unchanged over such a long time are: the very conservative view of the military community, long planning and turn-around times of programs and a slower growth of pixel number of TIs in comparison to consumer cameras. With megapixel detectors the CCIR output format is not sufficient any longer. The paper discusses the state-of-the-art compression and streaming solutions for TIs.
Digital video technologies and their network requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. P. Tsang; H. Y. Chen; J. M. Brandt
1999-11-01
Coded digital video signals are considered to be one of the most difficult data types to transport due to their real-time requirements and high bit rate variability. In this study, the authors discuss the coding mechanisms incorporated by the major compression standards bodies, i.e., JPEG and MPEG, as well as more advanced coding mechanisms such as wavelet and fractal techniques. The relationship between the applications which use these coding schemes and their network requirements is the major focus of this study. Specifically, the authors relate network latency, channel transmission reliability, random access speed, buffering and network bandwidth with the various coding techniques as a function of the applications which use them. Such applications include High-Definition Television, Video Conferencing, Computer-Supported Collaborative Work (CSCW), and Medical Imaging.
[Development of a video image system for wireless capsule endoscopes based on DSP].
Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua
2008-02-01
A video image recorder was designed to record video pictures from wireless capsule endoscopes. The TMS320C6211 DSP of Texas Instruments Inc. is the core processor of this system. Images are periodically acquired from a Composite Video Broadcast Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data are transported from a high-speed First-In First-Out (FIFO) buffer to the Digital Signal Processor (DSP) under the control of a Complex Programmable Logic Device (CPLD). The JPEG algorithm is adopted for image coding, and the compressed data in the DSP are stored to a Compact Flash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast Discrete Cosine Transform (DCT) algorithm and a fast coefficient quantization algorithm are used to increase the operation speed of the DSP and reduce the executable code. At the same time, proper addresses are assigned to memories of different speeds, and the memory structure is optimized. In addition, the system makes extensive use of Extended Direct Memory Access (EDMA) to transport and process image data, which results in stable, high performance.
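The JPEG coding step running on the DSP can be illustrated with a generic 8x8 DCT plus quantisation, as below; this is standard JPEG math, not the system's hand-optimised fast DCT or its memory layout.

```python
# 8x8 block DCT followed by quantisation with the standard JPEG luminance table.
import numpy as np
from scipy.fft import dctn, idctn

Q_LUM = np.array([[16,11,10,16,24,40,51,61],[12,12,14,19,26,58,60,55],
                  [14,13,16,24,40,57,69,56],[14,17,22,29,51,87,80,62],
                  [18,22,37,56,68,109,103,77],[24,35,55,64,81,104,113,92],
                  [49,64,78,87,103,121,120,101],[72,92,95,98,112,100,103,99]])

def encode_block(block, scale=1.0):
    coeffs = dctn(block.astype(float) - 128.0, norm="ortho")
    return np.round(coeffs / (Q_LUM * scale)).astype(int)      # quantised coefficients

def decode_block(q, scale=1.0):
    return np.clip(idctn(q * Q_LUM * scale, norm="ortho") + 128.0, 0, 255).astype(np.uint8)

block = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
err = np.abs(decode_block(encode_block(block)).astype(int) - block.astype(int))
print(err.max())   # reconstruction error of one block
```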
Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Cavoretto, Dario; Celeste, Fabrizio; Muratori, Manuela; Guazzi, Maurizio D
2004-01-01
Tele-echocardiography is not widely used because of lengthy transmission times when using standard Moving Picture Experts Group (MPEG)-2 lossy compression algorithms, unless expensive high bandwidth lines are used. We sought to validate the newer MPEG-4 algorithms to allow further reduction in echocardiographic motion video file size. Four cardiologists expert in echocardiography blindly read 165 randomized uncompressed and compressed 2D and color Doppler normal and pathologic motion images. One Digital Video and 3 MPEG-4 compression algorithms were tested, the latter at 3 decreasing compression quality levels (100%, 65% and 40%). Mean diagnostic and image quality scores were computed for each file and compared across the 3 compression levels using uncompressed files as controls. File dimensions decreased from a range of 12-83 MB uncompressed to 0.03-2.3 MB with MPEG-4. All algorithms showed mean scores that were not significantly different from the uncompressed source, except the MPEG-4 DivX algorithm at the highest selected compression (40%, p=.002). These data support the use of MPEG-4 compression to reduce echocardiographic motion image size for transmission purposes, allowing cost reduction through use of low bandwidth lines.
High-quality and small-capacity e-learning video featuring lecturer-superimposing PC screen images
NASA Astrophysics Data System (ADS)
Nomura, Yoshihiko; Murakami, Michinobu; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko
2006-10-01
Information processing and communication technology are progressing quickly and prevailing throughout various technological fields. The development of such technology should therefore respond to the need for quality improvement in e-learning education systems. The authors propose a new video-image compression processing system that ingeniously employs the features of the lecturing scene. While the dynamic lecturing scene is shot by a digital video camera, screen images are electronically stored by PC screen-capture software at relatively long intervals during an actual class. Then, the lecturer and a lecture stick are extracted from the digital video images by pattern recognition techniques, and the extracted images are superimposed on the appropriate PC screen images by off-line processing. Thus, we have succeeded in creating high-quality and small-capacity (HQ/SC) video-on-demand educational content featuring the advantages of sharp images, small electronic file capacity, and realistic lecturer motion.
The effects of video compression on acceptability of images for monitoring life sciences experiments
NASA Astrophysics Data System (ADS)
Haines, Richard F.; Chuang, Sherry L.
1992-07-01
Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Expert Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staffmembers viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters according to scientific discipline and experiment type is critical to the success of remote experiments.
Data compression/error correction digital test system. Appendix 2: Theory of operation
NASA Technical Reports Server (NTRS)
1972-01-01
An overall block diagram of the DC/EC digital system test is shown. The system is divided into two major units: the transmitter and the receiver. In operation, the transmitter and receiver are connected only by a real or simulated transmission link. The system inputs consist of: (1) standard format TV video, (2) two channels of analog voice, and (3) one serial PCM bit stream.
Evaluation of architectures for an ASP MPEG-4 decoder using a system-level design methodology
NASA Astrophysics Data System (ADS)
Garcia, Luz; Reyes, Victor; Barreto, Dacil; Marrero, Gustavo; Bautista, Tomas; Nunez, Antonio
2005-06-01
Trends in multimedia consumer electronics, digital video and audio, aim to reach users through low-cost mobile devices connected to data broadcasting networks with limited bandwidth. An emergent broadcasting network is the digital audio broadcasting network (DAB) which provides CD quality audio transmission together with robustness and efficiency techniques to allow good quality reception in motion conditions. This paper focuses on the system-level evaluation of different architectural options to allow low bandwidth digital video reception over DAB, based on video compression techniques. Profiling and design space exploration techniques are applied over the ASP MPEG-4 decoder in order to find out the best HW/SW partition given the application and platform constraints. An innovative SystemC-based system-level design tool, called CASSE, is being used for modelling, exploration and evaluation of different ASP MPEG-4 decoder HW/SW partitions. System-level trade offs and quantitative data derived from this analysis are also presented in this work.
Wavelet-based reversible watermarking for authentication
NASA Astrophysics Data System (ADS)
Tian, Jun
2002-04-01
In the digital information age, digital content (audio, image, and video) can be easily copied, manipulated, and distributed. Copyright protection and content authentication of digital content has become an urgent problem to content owners and distributors. Digital watermarking has provided a valuable solution to this problem. Based on its application scenario, most digital watermarking methods can be divided into two categories: robust watermarking and fragile watermarking. As a special subset of fragile watermark, reversible watermark (which is also called lossless watermark, invertible watermark, erasable watermark) enables the recovery of the original, unwatermarked content after the watermarked content has been detected to be authentic. Such reversibility to get back unwatermarked content is highly desired in sensitive imagery, such as military data and medical data. In this paper we present a reversible watermarking method based on an integer wavelet transform. We look into the binary representation of each wavelet coefficient and embed an extra bit to expandable wavelet coefficient. The location map of all expanded coefficients will be coded by JBIG2 compression and these coefficient values will be losslessly compressed by arithmetic coding. Besides these two compressed bit streams, an SHA-256 hash of the original image will also be embedded for authentication purpose.
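The core of the embedding can be illustrated with the bit-expansion step alone, as in the sketch below; selection of expandable coefficients, the JBIG2-coded location map, the arithmetic coding and the SHA-256 hash are omitted here.

```python
# Reversible embedding by expansion: an expandable integer coefficient v carries one
# payload bit b as v' = 2*v + b; the decoder recovers both v and b exactly.
def expand_embed(coeff, bit):
    return 2 * coeff + bit          # lossless: no information is discarded

def expand_extract(marked):
    return marked >> 1, marked & 1  # (original coefficient, payload bit)

coeffs = [3, -2, 0, 7]              # hypothetical expandable wavelet coefficients
payload = [1, 0, 1, 1]
marked = [expand_embed(c, b) for c, b in zip(coeffs, payload)]
restored = [expand_extract(m) for m in marked]
assert [c for c, _ in restored] == coeffs
assert [b for _, b in restored] == payload
```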
Comparison of MPEG digital video with super VHS tape for diagnostic echocardiographic readings
NASA Technical Reports Server (NTRS)
Soble, J. S.; Yurow, G.; Brar, R.; Stamos, T.; Neumann, A.; Garcia, M.; Stoddard, M. F.; Cherian, P. K.; Bhamb, B.; Thomas, J. D.
1998-01-01
BACKGROUND: Digital recording of echocardiographic studies is on the clinical horizon. However, full digital capture of complete echocardiographic studies in traditional video format is impractical, given current storage capacity and network bandwidth. To overcome these constraints, we evaluated the diagnostic image quality of digital video by using MPEG (Motion Picture Experts Group) compression. METHODS AND RESULTS: Fifty-eight complete, consecutive studies were recorded simultaneously with the use of MPEG-1 and sVHS videotape. Each matched MPEG and sVHS study pair was reviewed by two from a total of six readers, and findings were recorded with the use of a detailed, computerized reporting tool. Intrareader and interreader discrepancies were characterized as major or minor and analyzed in total and for specific subgroups of findings (left and right ventricular parameters, valvular insufficiency, and left ventricular regional wall motion). Intrareader discrepancies were reviewed by a consensus panel for agreement with either MPEG or sVHS findings. There was an exact concordance between MPEG and sVHS readings in 83% of findings. The majority of discrepancies were minor, with major discrepancies in only 2.7% of findings. There was no difference in the rate of consensus panel agreement with MPEG or sVHS for instances of intrareader discrepancy, either in total or for any subgroup of findings. Interreader discrepancy rates were nearly identical for both MPEG and sVHS. CONCLUSIONS: MPEG-1 digital video is equivalent to sVHS videotape for diagnostic echocardiography. MPEG increases the range of practical options for digital echocardiography and offers, for the first time, the advantages of digital recording in a familiar video format.
A real-time inverse quantised transform for multi-standard with dynamic resolution support
NASA Astrophysics Data System (ADS)
Sun, Chi-Chia; Lin, Chun-Ying; Zhang, Ce
2016-06-01
In this paper, a real-time configurable intellectual property (IP) core is presented for the image/video decoding process, in compatibility with the MPEG-4 Visual and H.264/AVC standards. The unified inverse quantised discrete cosine and integer transform performs both the inverse quantised discrete cosine transform and the inverse quantised inverse integer transform using only shift and add operations. Meanwhile, the COordinate Rotation DIgital Computer (CORDIC) iterations and compensation steps are adjustable in order to trade video compression quality against data throughput. The implementations are embedded in the publicly available XviD codec 1.2.2 for the MPEG-4 Visual standard and the H.264/AVC reference software JM 16.1, where the experimental results show that the balance between computational complexity and video compression quality is retained. Finally, FPGA synthesis results show that the proposed IP core offers low hardware cost and also provides real-time performance for Full HD and 4K-2K video decoding.
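As an example of the kind of shift-and-add-only transform meant here, the sketch below implements the standard H.264/AVC 4x4 inverse integer transform with the usual (x + 32) >> 6 rounding; it is not claimed to be the paper's IP core, and inverse quantisation is assumed to have been applied already.

```python
# H.264/AVC 4x4 inverse integer transform: only additions, subtractions and shifts.
import numpy as np

def itrans_1d(c0, c1, c2, c3):
    e0, e1 = c0 + c2, c0 - c2
    e2, e3 = (c1 >> 1) - c3, c1 + (c3 >> 1)
    return e0 + e3, e1 + e2, e1 - e2, e0 - e3

def inverse_transform_4x4(w):
    """w: 4x4 array of inverse-quantised coefficients."""
    h = np.array([itrans_1d(*w[i, :]) for i in range(4)])       # horizontal pass (rows)
    v = np.array([itrans_1d(*h[:, j]) for j in range(4)]).T     # vertical pass (columns)
    return (v + 32) >> 6                                         # rounding and scaling

block = np.arange(16).reshape(4, 4)
print(inverse_transform_4x4(block))
```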
Human Motion Capture Data Tailored Transform Coding.
Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He
2015-07-01
Human motion capture (mocap) is a widely used technique for digitalizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish themselves from images and videos. Therefore, directly borrowing image or video compression techniques, such as discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented in 2D matrices. Then it computes a set of data-dependent orthogonal bases to transform the matrices to frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
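The data-dependent transform idea can be sketched as follows: a clip matrix is decomposed by SVD, a truncated orthogonal basis is kept, and the transform coefficients are quantised. Segmentation into clips, entropy coding and the paper's parameter choices are omitted.

```python
# Transform coding with a data-dependent orthogonal basis obtained from the clip's SVD.
import numpy as np

def encode_clip(clip, n_basis=8, q_step=0.05):
    """clip: (frames, channels) matrix of joint coordinates."""
    _, _, vt = np.linalg.svd(clip, full_matrices=False)
    basis = vt[:n_basis]                           # data-dependent orthogonal basis
    coeffs = clip @ basis.T                        # transform to the new domain
    return np.round(coeffs / q_step).astype(int), basis, q_step

def decode_clip(q_coeffs, basis, q_step):
    return (q_coeffs * q_step) @ basis

clip = np.cumsum(np.random.randn(120, 60), axis=0)    # smooth, mocap-like toy data
q, b, step = encode_clip(clip)
print(np.abs(decode_clip(q, b, step) - clip).mean())  # mean reconstruction error
```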
The Effectiveness of Low-Cost Tele-Lecturing.
ERIC Educational Resources Information Center
Muta, Hiromitsu; Kikuta, Reiko; Hamano, Takashi; Maesako, Takanori
1997-01-01
Compares distance education using PictureTel, a compressed-digital-video system via telephone lines (audio and visual interactive communication) in terms of its costs and effectiveness with traditional in-class education. Costing less than half the traditional approach, the study suggested distance education would be economical if used frequently.…
Detection and localization of copy-paste forgeries in digital videos.
Singh, Raahat Devender; Aggarwal, Naveen
2017-12-01
Amidst the continual march of technology, we find ourselves relying on digital videos to proffer visual evidence in several highly sensitive areas such as journalism, politics, civil and criminal litigation, and military and intelligence operations. However, despite being an indispensable source of information with high evidentiary value, digital videos are also extremely vulnerable to conscious manipulations. Therefore, in a situation where dependence on video evidence is unavoidable, it becomes crucial to authenticate the contents of this evidence before accepting them as an accurate depiction of reality. Digital videos can suffer from several kinds of manipulations, but perhaps one of the most consequential forgeries is copy-paste forgery, which involves insertion/removal of objects into/from video frames. Copy-paste forgeries alter the information presented by the video scene, which has a direct effect on our basic understanding of what that scene represents, and so, from a forensic standpoint, the challenge of detecting such forgeries is especially significant. In this paper, we propose a sensor pattern noise based copy-paste detection scheme, which is an improved and forensically stronger version of an existing noise-residue based technique. We also study a demosaicing artifact based image forensic scheme to estimate the extent of its viability in the domain of video forensics. Furthermore, we suggest a simplistic clustering technique for the detection of copy-paste forgeries, and determine if it possesses the capabilities desired of a viable and efficacious video forensic scheme. Finally, we validate these schemes on a set of realistically tampered MJPEG, MPEG-2, MPEG-4, and H.264/AVC encoded videos in a diverse experimental set-up by varying the strength of post-production re-compressions and transcodings, bitrates, and sizes of the tampered regions. Such an experimental set-up is representative of a neutral testing platform and simulates a real-world forgery scenario where the forensic investigator has no control over any of the variable parameters of the tampering process. When tested in such an experimental set-up, the four forensic schemes achieved varying levels of detection accuracy and exhibited different scopes of applicability. For videos compressed using QFs in the range 70-100, the existing noise-residue based technique generated average detection accuracy in the range 64.5%-82.0%, while the proposed sensor pattern noise based scheme generated average accuracy in the range 89.9%-98.7%. For the aforementioned range of QFs, average accuracy rates achieved by the suggested clustering technique and the demosaicing artifact based approach were in the range 79.1%-90.1% and 83.2%-93.3%, respectively.
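The sensor-pattern-noise check can be sketched conceptually as below: blocks whose noise residual correlates poorly with the camera's reference pattern are flagged. The denoiser, block size and threshold here are stand-ins, not the proposed scheme's exact components.

```python
# Conceptual copy-paste screening via sensor pattern noise (SPN) correlation per block.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(frame):
    """Residual = frame minus a denoised version of itself (denoiser is a stand-in)."""
    f = frame.astype(float)
    return f - gaussian_filter(f, sigma=1.0)

def ncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def suspicious_blocks(frame, reference_spn, block=64, thresh=0.02):
    res = noise_residual(frame)
    flags = []
    for r in range(0, frame.shape[0] - block + 1, block):
        for c in range(0, frame.shape[1] - block + 1, block):
            if ncc(res[r:r+block, c:c+block], reference_spn[r:r+block, c:c+block]) < thresh:
                flags.append((r, c))          # candidate copy-paste region
    return flags
```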
A perioperative echocardiographic reporting and recording system.
Pybus, David A
2004-11-01
Advances in video capture, compression, and streaming technology, coupled with improvements in central processing unit design and the inclusion of a database engine in the Windows operating system, have simplified the task of implementing a digital echocardiographic recording system. I describe an application that uses these technologies and runs on a notebook computer.
First- and third-party ground truth for key frame extraction from consumer video clips
NASA Astrophysics Data System (ADS)
Costello, Kathleen; Luo, Jiebo
2007-02-01
Extracting key frames (KF) from video is of great interest in many applications, such as video summary, video organization, video compression, and prints from video. KF extraction is not a new problem; however, current literature has been focused mainly on sports or news video. In the consumer video space, the biggest challenges for key frame selection are the unconstrained content and lack of any preimposed structure. In this study, we conduct ground truth collection of key frames from video clips taken by digital cameras (as opposed to camcorders) using both first- and third-party judges. The goals of this study are: (1) to create a reference database of video clips reasonably representative of the consumer video space; (2) to identify associated key frames by which automated algorithms can be compared and judged for effectiveness; and (3) to uncover the criteria used by both first- and third-party human judges so these criteria can influence algorithm design. The findings from these ground truths will be discussed.
Method and Apparatus for Evaluating the Visual Quality of Processed Digital Video Sequences
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
2002-01-01
A Digital Video Quality (DVQ) apparatus and method that incorporate a model of human visual sensitivity to predict the visibility of artifacts. The DVQ method and apparatus are used for the evaluation of the visual quality of processed digital video sequences and for adaptively controlling the bit rate of the processed digital video sequences without compromising the visual quality. The DVQ apparatus minimizes the required amount of memory and computation. The input to the DVQ apparatus is a pair of color image sequences: an original (R) non-compressed sequence, and a processed (T) sequence. Both sequences (R) and (T) are sampled, cropped, and subjected to color transformations. The sequences are then subjected to blocking and discrete cosine transformation, and the results are transformed to local contrast. The next step is a time filtering operation which implements the human sensitivity to different time frequencies. The results are converted to threshold units by dividing each discrete cosine transform coefficient by its respective visual threshold. At the next stage the two sequences are subtracted to produce an error sequence. The error sequence is subjected to a contrast masking operation, which also depends upon the reference sequence (R). The masked errors can be pooled in various ways to illustrate the perceptual error over various dimensions, and the pooled error can be converted to a visual quality measure.
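A loose sketch of this pipeline is given below, with an assumed per-coefficient sensitivity table standing in for the visual thresholds and Minkowski pooling of the error; the actual DVQ thresholds, temporal filtering and masking model are not reproduced.

```python
# Simplified DVQ-style comparison: block DCT, conversion to threshold units,
# error formation, and Minkowski pooling over space and frequency.
import numpy as np
from scipy.fft import dctn

def block_dct(frame, n=8):
    h, w = (frame.shape[0] // n) * n, (frame.shape[1] // n) * n
    f = frame[:h, :w].astype(float)
    blocks = f.reshape(h // n, n, w // n, n).swapaxes(1, 2)     # (rows, cols, 8, 8)
    return dctn(blocks, axes=(-2, -1), norm="ortho")

def dvq_score(reference, processed, p=4.0):
    thresholds = 1.0 + np.arange(8)[:, None] + np.arange(8)[None, :]   # assumed sensitivity table
    r = block_dct(reference) / thresholds      # reference in threshold units
    t = block_dct(processed) / thresholds      # processed in threshold units
    err = np.abs(r - t)                        # error sequence (masking omitted)
    return float((err ** p).mean() ** (1.0 / p))   # Minkowski pooling

ref = np.random.randint(0, 256, (64, 64))
proc = np.clip(ref + np.random.randint(-5, 6, ref.shape), 0, 255)
print(dvq_score(ref, proc))                    # lower score = perceptually closer
```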
Watermarking textures in video games
NASA Astrophysics Data System (ADS)
Liu, Huajian; Berchtold, Waldemar; Schäfer, Marcel; Lieb, Patrick; Steinebach, Martin
2014-02-01
Digital watermarking is a promising solution to video game piracy. In this paper, based on the analysis of special challenges and requirements in terms of watermarking textures in video games, a novel watermarking scheme for DDS textures in video games is proposed. To meet the performance requirements in video game applications, the proposed algorithm embeds the watermark message directly in the compressed stream in DDS files and can be straightforwardly applied in watermark container technique for real-time embedding. Furthermore, the embedding approach achieves high watermark payload to handle collusion secure fingerprinting codes with extreme length. Hence, the scheme is resistant to collusion attacks, which is indispensable in video game applications. The proposed scheme is evaluated in aspects of transparency, robustness, security and performance. Especially, in addition to classical objective evaluation, the visual quality and playing experience of watermarked games is assessed subjectively in game playing.
High performance MPEG-audio decoder IC
NASA Technical Reports Server (NTRS)
Thorn, M.; Benbassat, G.; Cyr, K.; Li, S.; Gill, M.; Kam, D.; Walker, K.; Look, P.; Eldridge, C.; Ng, P.
1993-01-01
The emerging digital audio and video compression technology brings both an opportunity and a new challenge to IC design. The pervasive application of compression technology to consumer electronics will require high volume, low cost IC's and fast time to market of the prototypes and production units. At the same time, the algorithms used in the compression technology result in complex VLSI IC's. The conflicting challenges of algorithm complexity, low cost, and fast time to market have an impact on device architecture and design methodology. The work presented in this paper is about the design of a dedicated, high precision, Motion Picture Expert Group (MPEG) audio decoder.
Consumer-based technology for distribution of surgical videos for objective evaluation.
Gonzalez, Ray; Martinez, Jose M; Lo Menzo, Emanuele; Iglesias, Alberto R; Ro, Charles Y; Madan, Atul K
2012-08-01
The Global Operative Assessment of Laparoscopic Skill (GOALS) is one validated metric utilized to grade laparoscopic skills and has been utilized to score recorded operative videos. To facilitate easier viewing of these recorded videos, we are developing novel techniques to enable surgeons to view these videos. The objective of this study is to determine the feasibility of utilizing widespread current consumer-based technology to assist in distributing appropriate videos for objective evaluation. Videos from residents were recorded through a direct connection from the camera processor's S-video output, cabled into a hub connected to a standard laptop computer through a universal serial bus (USB) port. A standard consumer-based video editing program was utilized to capture the video and record it in an appropriate format. We utilized the mp4 format, and depending on the size of the file, the videos were scaled down (compressed), their format changed (using a standard video editing program), or sliced into multiple videos. Standard available consumer-based programs were utilized to convert the video into a more appropriate format for handheld personal digital assistants. In addition, the videos were uploaded to a social networking website and video sharing websites. Recorded cases of laparoscopic cholecystectomy in a porcine model were utilized. Compression was required for all formats. All formats were accessed from home computers, work computers, and iPhones without difficulty. Qualitative analyses by four surgeons demonstrated appropriate quality to grade for these formats. Our preliminary results show promise that, utilizing consumer-based technology, videos can be easily distributed by various methods to surgeons for grading via GOALS. Easy accessibility may help make evaluation of resident videos less complicated and cumbersome.
Analysis-Preserving Video Microscopy Compression via Correlation and Mathematical Morphology
Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D.; O’Brien, E. Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M.
2015-01-01
The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance, domain-specific video compression technique. We describe a novel compression method for video microscopy data. The method is based on Pearson's correlation and mathematical morphology. The method makes use of the point-spread function (PSF) in the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000 and H.264 compression for various kinds of video microscopy data including fluorescence video and brightfield video. We find that for certain data sets, the new method compresses much better than lossless compression with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes. PMID:26435032
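As an illustration of the correlation-plus-morphology idea, the sketch below correlates a frame with the PSF, thresholds the response, grows the detections morphologically, and deflates only the retained pixels; the PSF handling, threshold, and structuring parameters are assumptions rather than the published method.

```python
# Rough sketch: locate PSF-like spots by correlating with the point-spread
# function, dilate the detections, and losslessly compress only the kept pixels.
# The correlation threshold and dilation amount are illustrative assumptions.
import zlib
import numpy as np
from scipy.ndimage import binary_dilation
from scipy.signal import fftconvolve

def foreground_mask(frame, psf, corr_threshold=0.3, grow=3):
    f = frame.astype(np.float64)
    f = (f - f.mean()) / (f.std() + 1e-9)
    k = (psf - psf.mean()) / (psf.std() + 1e-9)
    # Correlation with the PSF, implemented as convolution with the flipped kernel.
    score = fftconvolve(f, k[::-1, ::-1], mode='same') / k.size
    mask = score > corr_threshold
    return binary_dilation(mask, iterations=grow)

def compress_frame(frame, mask):
    """Zero out the background and deflate; long runs of zeros compress very well."""
    kept = np.where(mask, frame, 0).astype(frame.dtype)
    return zlib.compress(kept.tobytes(), level=9)
```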
HVS-based quantization steps for validation of digital cinema extended bitrates
NASA Astrophysics Data System (ADS)
Larabi, M.-C.; Pellegrin, P.; Anciaux, G.; Devaux, F.-O.; Tulet, O.; Macq, B.; Fernandez, C.
2009-02-01
In Digital Cinema, the video compression must be as transparent as possible to provide the best image quality to the audience. The goal of compression is to simplify transport, storing, distribution and projection of films. For all those tasks, equipment needs to be developed. It is thus mandatory to reduce the complexity of the equipment by imposing limitations in the specifications. In this sense, the DCI has fixed the maximum bitrate for a compressed stream to 250 Mbps independently of the input format (4K/24fps, 2K/48fps or 2K/24fps). This parameter is discussed in this paper because it is not consistent to double/quadruple the input rate without increasing the output rate. The work presented in this paper is intended to define quantization steps ensuring visually lossless compression. Two steps are followed: first, the effect of each subband is evaluated separately, and then the scaling ratio is found. The obtained results show that it is necessary to increase the bitrate limit for cinema material in order to achieve visually lossless quality.
Study on a High Compression Processing for Video-on-Demand e-learning System
NASA Astrophysics Data System (ADS)
Nomura, Yoshihiko; Matsuda, Ryutaro; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko
The authors proposed a high-quality and small-capacity lecture-video-file creating system for a distance e-learning system. Examining the features of the lecturing scene, the authors ingeniously employ two kinds of image-capturing equipment having complementary characteristics: one is a digital video camera with a low resolution and a high frame rate, and the other is a digital still camera with a high resolution and a very low frame rate. By managing the two kinds of image-capturing equipment, and by integrating them with image processing, we can produce course materials with greatly reduced file size: the course materials satisfy the requirements both for the temporal resolution to see the lecturer's point-indicating actions and for the high spatial resolution to read the small written letters. As a result of a comparative experiment, the e-lecture using the proposed system was confirmed to be more effective than an ordinary lecture from the viewpoint of educational effect.
Video multiple watermarking technique based on image interlacing using DWT.
Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M
2014-01-01
Digital watermarking is one of the important techniques to secure digital media files in the domains of data authentication and copyright protection. In the nonblind watermarking systems, the need of the original host file in the watermark recovery operation makes an overhead over the system resources, doubles memory capacity, and doubles communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, three-level discrete wavelet transform (DWT) is used as a watermark embedding/extracting domain, Arnold transform is used as a watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks such as: geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.
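A minimal sketch of the embedding side under stated assumptions: the watermark is scrambled with the Arnold cat map and added to one level-3 detail sub-band of a Haar DWT. The wavelet, the target sub-band, and the strength alpha are illustrative choices, and the watermark is assumed to be square and at least as large as that sub-band.

```python
# Sketch of Arnold-scrambled watermark embedding in a 3-level DWT detail band.
# Assumptions: Haar wavelet, the horizontal level-3 band as the host, and a
# square watermark no smaller than that band.
import numpy as np
import pywt

def arnold(img, iterations=5):
    """Arnold cat map scrambling of an N x N image."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def embed(frame, watermark, alpha=0.05):
    coeffs = pywt.wavedec2(frame.astype(np.float64), 'haar', level=3)
    cH3, cV3, cD3 = coeffs[1]                  # coarsest (level-3) detail sub-bands
    wm = arnold(watermark.astype(np.float64))
    wm = wm[:cH3.shape[0], :cH3.shape[1]]      # crop the scrambled mark to the band
    coeffs[1] = (cH3 + alpha * wm, cV3, cD3)   # additive embedding in one band
    return pywt.waverec2(coeffs, 'haar')
```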
Wavelet-based audio embedding and audio/video compression
NASA Astrophysics Data System (ADS)
Mendenhall, Michael J.; Claypoole, Roger L., Jr.
2001-12-01
Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
An image compression algorithm for a high-resolution digital still camera
NASA Technical Reports Server (NTRS)
Nerheim, Rosalee
1989-01-01
The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to earth in near real-time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, the image will be compressed and reconstructed at the time of viewing. Both lossless and lossy image compression algorithms are studied, described, and compared.
Development of the SEASIS instrument for SEDSAT
NASA Technical Reports Server (NTRS)
Maier, Mark W.
1996-01-01
Two SEASIS experiment objectives are key: take images that allow three-axis attitude determination and take multi-spectral images of the earth. During the tether mission it is also desirable to capture images of the recoiling tether from the endmass perspective (which has never been observed). SEASIS must store all its imagery taken during the tether mission until the earth downlink can be established. SEASIS determines attitude with a panoramic camera and performs earth observation with a telephoto lens camera. Camera video is digitized, compressed, and stored in solid state memory. These objectives are addressed through the following architectural choices: (1) A camera system using a Panoramic Annular Lens (PAL). This lens has a 360 deg. azimuthal field of view by a +45 degree vertical field measured from a plane normal to the lens boresight axis. It has been shown in Mr. Mark Steadham's UAH M.S. thesis that this camera can determine three-axis attitude anytime the earth and one other recognizable celestial object (for example, the sun) are in the field of view. This will be essentially all the time during tether deployment. (2) A second camera system using a telephoto lens and filter wheel. The camera is a black and white standard video camera. The filters are chosen to cover the visible spectral bands of remote sensing interest. (3) A processor and mass memory arrangement linked to the cameras. Video signals from the cameras are digitized, compressed in the processor, and stored in a large static RAM bank. The processor is a multi-chip module consisting of a T800 Transputer and three Zoran floating point Digital Signal Processors. This processor module was supplied under ARPA contract by the Space Computer Corporation to demonstrate its use in space.
NASA Technical Reports Server (NTRS)
Ivancic, William D.
1998-01-01
Various issues associated with satellite/terrestrial end-to-end communication interoperability are presented in viewgraph form. Specific topics include: 1) Quality of service; 2) ATM performance characteristics; 3) MPEG-2 transport stream mapping to AAL-5; 4) Observation and discussion of compressed video tests over ATM; 5) Digital video over satellites status; 6) Satellite link configurations; 7) MPEG-2 over ATM with binomial errors; 8) MPEG-2 over ATM channel characteristics; 9) MPEG-2 over ATM over emulated satellites; 10) MPEG-2 transport stream with errors; and 11) a dual decoder test.
Design of an H.264/SVC resilient watermarking scheme
NASA Astrophysics Data System (ADS)
Van Caenegem, Robrecht; Dooms, Ann; Barbarien, Joeri; Schelkens, Peter
2010-01-01
The rapid dissemination of media technologies has led to an increase of unauthorized copying and distribution of digital media. Digital watermarking, i.e. embedding information in the multimedia signal in a robust and imperceptible manner, can tackle this problem. Recently, there has been a huge growth in the number of different terminals and connections that can be used to consume multimedia. To tackle the resulting distribution challenges, scalable coding is often employed. Scalable coding allows the adaptation of a single bit-stream to varying terminal and transmission characteristics. As a result of this evolution, watermarking techniques that are robust against scalable compression become essential in order to control illegal copying. In this paper, a watermarking technique resilient against scalable video compression using the state-of-the-art H.264/SVC codec is therefore proposed and evaluated.
Experimental service of 3DTV broadcasting relay in Korea
NASA Astrophysics Data System (ADS)
Hur, Namho; Ahn, Chung-Hyun; Ahn, Chieteuk
2002-11-01
This paper introduces 3D HDTV relay broadcasting experiments of 2002 FIFA World Cup Korea/Japan using a terrestrial and satellite network. We have developed 3D HDTV cameras, 3D HDTV video multiplexer/demultiplexer, a 3D HDTV receiver, and a 3D HDTV OB van for field productions. By using a terrestrial and satellite network, we distributed a compressed 3D HDTV signal to predetermined demonstration venues which are approved by host broadcast services (HBS), KirchMedia, and FIFA. In this case, we transmitted a 40Mbps MPEG-2 transport stream (DVB-ASI) over a DS-3 network specified in ITU-T Rec. G.703. The video/audio compression formats are MPEG-2 main-profile, high-level and Dolby Digital AC-3 respectively. Then at venues, the recovered left and right images by the 3D HDTV receiver are displayed on a screen with polarized beam projectors.
FBCOT: a fast block coding option for JPEG 2000
NASA Astrophysics Data System (ADS)
Taubman, David; Naman, Aous; Mathew, Reji
2017-09-01
Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).
Digital cinema system using JPEG2000 movie of 8-million pixel resolution
NASA Astrophysics Data System (ADS)
Fujii, Tatsuya; Nomura, Mitsuru; Shirai, Daisuke; Yamaguchi, Takahiro; Fujii, Tetsuro; Ono, Sadayasu
2003-05-01
We have developed a prototype digital cinema system that can store, transmit and display extra high quality movies of 8-million pixel resolution, using JPEG2000 coding algorithm. The image quality is 4 times better than HDTV in resolution, and enables us to replace conventional films with digital cinema archives. Using wide-area optical gigabit IP networks, cinema contents are distributed and played back as a video-on-demand (VoD) system. The system consists of three main devices, a video server, a real-time JPEG2000 decoder, and a large-venue LCD projector. All digital movie data are compressed by JPEG2000 and stored in advance. The coded streams of 300~500 Mbps can be continuously transmitted from the PC server using TCP/IP. The decoder can perform the real-time decompression at 24/48 frames per second, using 120 parallel JPEG2000 processing elements. The received streams are expanded into 4.5Gbps raw video signals. The prototype LCD projector uses 3 pieces of 3840×2048 pixel reflective LCD panels (D-ILA) to show RGB 30-bit color movies fed by the decoder. The brightness exceeds 3000 ANSI lumens for a 300-inch screen. The refresh rate is chosen to 96Hz to thoroughly eliminate flickers, while preserving compatibility to cinema movies of 24 frames per second.
NASA Astrophysics Data System (ADS)
Bulan, Orhan; Bernal, Edgar A.; Loce, Robert P.; Wu, Wencheng
2013-03-01
Video cameras are widely deployed along city streets, interstate highways, traffic lights, stop signs and toll booths by entities that perform traffic monitoring and law enforcement. The videos captured by these cameras are typically compressed and stored in large databases. Performing a rapid search for a specific vehicle within a large database of compressed videos is often required and can be a time-critical, life-or-death situation. In this paper, we propose video compression and decompression algorithms that enable fast and efficient vehicle or, more generally, event searches in large video databases. The proposed algorithm selects reference frames (i.e., I-frames) based on a vehicle having been detected at a specified position within the scene being monitored while compressing a video sequence. A search for a specific vehicle in the compressed video stream is performed across the reference frames only, which does not require decompression of the full video sequence as in traditional search algorithms. Our experimental results on videos captured on a local road show that the proposed algorithm significantly reduces the search space (thus reducing time and computational resources) in vehicle search tasks within compressed video streams, particularly those captured in light traffic volume conditions.
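The reference-frame selection logic can be sketched as follows; detect_vehicle_at is a hypothetical detector callback, not part of any real codec API, and the frame-type labels stand in for the actual encoder decisions.

```python
# Conceptual sketch of detection-driven reference-frame placement: a frame is
# promoted to an I-frame when a vehicle is detected at the trigger position,
# so a later search only needs to decode I-frames.
def choose_frame_types(frames, trigger_region, detect_vehicle_at):
    """Return a list of 'I' / 'P' labels, one per frame.
    detect_vehicle_at(frame, region) is a hypothetical detector."""
    types = []
    for idx, frame in enumerate(frames):
        if idx == 0 or detect_vehicle_at(frame, trigger_region):
            types.append('I')   # vehicle at the trigger spot -> new reference frame
        else:
            types.append('P')   # otherwise predict from the previous reference
    return types

def search_candidates(frame_types):
    """A vehicle search over the compressed stream only inspects I-frames."""
    return [idx for idx, t in enumerate(frame_types) if t == 'I']
```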
[Multimedia (visual collaboration) brings true nature of human life].
Tomita, N
2000-03-01
Videoconferencing, a form of high-quality visual collaboration, is bringing Multimedia into society. Multimedia, high-quality media such as TV broadcast, looks expensive because it requires a broadband network with 100-200 Mbps bandwidth or 3,700 analog telephone lines. However, thanks to the existing digital line called N-ISDN (Narrow Integrated Service Digital Network) and PictureTel's audio/video compression technologies, it becomes far less expensive. N-ISDN provides 128 Kbps bandwidth, more than twice that of an analog line. PictureTel's technology instantly compresses the audio/video signal to 1/1,000 of its size. This means that, with ISDN and PictureTel technology, Multimedia is materialized over even a single ISDN line. This will allow a doctor to remotely meet face-to-face with a medical specialist or with patients to interview them, conduct physical examinations, review records, and prescribe treatments. Bonding multiple ISDN lines will further improve video quality, enabling remote surgery. A surgeon can perform an operation on an internal organ by projecting motion video from an endoscope's CCD camera onto a large display monitor. Also, PictureTel provides advanced technologies for eliminating background noise generated by surgical knives or scalpels during surgery. This will allow the sound of breathing or a heartbeat to be clearly transmitted to the remote site. Thus, Multimedia eliminates the barrier of distance, enabling people, whether at home or anywhere in the world, to undergo up-to-date medical treatment by experts. This will reduce medical costs and allow people to live in the suburbs, with less pollution, closer to nature. People will foster a more open and collaborative environment by participating in local activities. Such a community-oriented lifestyle will make up for the mass-consumption, materialistic economy of the past, and bring true happiness and welfare into our lives after all.
Design of a motion JPEG (M/JPEG) adapter card
NASA Astrophysics Data System (ADS)
Lee, D. H.; Sudharsanan, Subramania I.
1994-05-01
In this paper we describe a design of a high performance JPEG (Joint Photographic Experts Group) Micro Channel adapter card. The card, tested on a range of PS/2 platforms (models 50 to 95), can complete JPEG operations on a 640 by 240 pixel image within 1/60 of a second, thus enabling real-time capture and display of high quality digital video. The card accepts digital pixels for either a YUV 4:2:2 or an RGB 4:4:4 pixel bus and has been shown to handle up to 2.05 MBytes/second of compressed data. The compressed data is transmitted to a host memory area by Direct Memory Access operations. The card uses a single C-Cube's CL550 JPEG processor that complies with the baseline JPEG. We give broad descriptions of the hardware that controls the video interface, CL550, and the system interface. Some critical design points that enhance the overall performance of the M/JPEG systems are pointed out. The control of the adapter card is achieved by an interrupt driven software that runs under DOS. The software performs a variety of tasks that include change of color space (RGB or YUV), change of quantization and Huffman tables, odd and even field control and some diagnostic operations.
HEVC optimizations for medical environments
NASA Astrophysics Data System (ADS)
Fernández, D. G.; Del Barrio, A. A.; Botella, Guillermo; García, Carlos; Meyer-Baese, Uwe; Meyer-Baese, Anke
2016-05-01
HEVC/H.265 is the most interesting and cutting-edge topic in the world of digital video compression, allowing the required bandwidth to be reduced by half in comparison with the previous H.264 standard. Telemedicine services and, in general, any medical video application can benefit from these video encoding advances. However, HEVC is computationally expensive to implement. In this paper a method for reducing the HEVC complexity in the medical environment is proposed. The sequences that are typically processed in this context contain several homogeneous regions. Leveraging these regions, it is possible to simplify the HEVC flow while maintaining a high level of quality. In comparison with the HM16.2 standard, the encoding time is reduced by up to 75%, with a negligible quality loss. Moreover, the algorithm is straightforward to implement in any hardware platform.
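One way to exploit homogeneous regions is an early termination of the coding-unit splitting decision when a block's variance is low; the sketch below uses a simple variance threshold as an illustrative stand-in for the paper's own criterion.

```python
# Sketch of a homogeneity shortcut for CU partitioning: if a block has very low
# variance, stop splitting and code it as a single unit. The variance threshold
# and minimum size are assumptions, not the values derived in the paper.
import numpy as np

def split_decision(ctu, var_threshold=25.0, min_size=8):
    """Return a nested structure describing how a square CTU would be partitioned."""
    if ctu.shape[0] <= min_size or np.var(ctu) < var_threshold:
        return {'size': ctu.shape[0], 'split': False}   # homogeneous: stop here
    half = ctu.shape[0] // 2
    quads = [ctu[:half, :half], ctu[:half, half:],
             ctu[half:, :half], ctu[half:, half:]]
    return {'size': ctu.shape[0], 'split': True,
            'children': [split_decision(q, var_threshold, min_size) for q in quads]}
```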
Video Compression Study: h.265 vs h.264
NASA Technical Reports Server (NTRS)
Pryor, Jonathan
2016-01-01
H.265 video compression (also known as High Efficiency Video Encoding (HEVC)) promises to provide double the video quality at half the bandwidth, or the same quality at half the bandwidth of h.264 video compression [1]. This study uses a Tektronix PQA500 to determine the video quality gains by using h.265 encoding. This study also compares two video encoders to see how different implementations of h.264 and h.265 impact video quality at various bandwidths.
Implementation of real-time digital endoscopic image processing system
NASA Astrophysics Data System (ADS)
Song, Chul Gyu; Lee, Young Mook; Lee, Sang Min; Kim, Won Ky; Lee, Jae Ho; Lee, Myoung Ho
1997-10-01
Endoscopy has become a crucial diagnostic and therapeutic procedure in clinical areas. Over the past four years, we have developed a computerized system to record and store clinical data pertaining to endoscopic surgery of laparoscopic cholecystectomy, pelviscopic endometriosis, and surgical arthroscopy. In this study, we developed a computer system, which is composed of a frame grabber, a sound board, a VCR control board, a LAN card and EDMS. Also, the computer system controls peripheral instruments such as a color video printer, a video cassette recorder, and endoscopic input/output signals. The digital endoscopic data management system is based on open architecture and a set of widely available industry standards; namely Microsoft Windows as an operating system, TCP/IP as a network protocol and a time sequential database that handles both images and speech. For the purpose of data storage, we used MOD and CD-R. The digital endoscopic system was designed to be able to store, recreate, change, and compress signals and medical images. Computerized endoscopy enables us to generate and manipulate the original visual document, making it accessible to a virtually unlimited number of physicians.
Video bandwidth compression system
NASA Astrophysics Data System (ADS)
Ludington, D.
1980-08-01
The objective of this program was the development of a Video Bandwidth Compression brassboard model for use by the Air Force Avionics Laboratory, Wright-Patterson Air Force Base, in evaluation of bandwidth compression techniques for use in tactical weapons and to aid in the selection of particular operational modes to be implemented in an advanced flyable model. The bandwidth compression system is partitioned into two major divisions: the encoder, which processes the input video with a compression algorithm and transmits the most significant information; and the decoder where the compressed data is reconstructed into a video image for display.
Network-linked long-time recording high-speed video camera system
NASA Astrophysics Data System (ADS)
Kimura, Seiji; Tsuji, Masataka
2001-04-01
This paper describes a network-oriented, long-recording-time high-speed digital video camera system that utilizes an HDD (Hard Disk Drive) as a recording medium. Semiconductor memories (DRAM, etc.) are the most common image data recording media with existing high-speed digital video cameras. They are extensively used because of their advantage of high-speed writing and reading of picture data. The drawback is that their recording time is limited to only several seconds because the data amount is very large. A recording time of several seconds is sufficient for many applications. However, a much longer recording time is required in some applications where an exact prediction of trigger timing is hard to make. In recent years, the recording density of the HDD has been dramatically improved, which has attracted more attention to its value as a long-recording-time medium. We conceived the idea that we could build a compact system capable of long-time recording if the HDD could be used as a memory unit for high-speed digital image recording. However, the data rate of such a system, capable of recording 640 X 480 pixel resolution pictures at 500 frames per second (fps) with 8-bit grayscale, is 153.6 Mbyte/sec., far beyond the writing speed of the commonly used HDD. So, we developed a dedicated image compression system and verified its capability to lower the data rate from the digital camera to match the HDD writing rate.
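The quoted data rate follows directly from the capture parameters; a quick arithmetic check:

```python
# Worked check of the 153.6 Mbyte/sec figure quoted above.
pixels_per_frame = 640 * 480        # 307,200 pixels
bytes_per_pixel = 1                 # 8-bit grayscale
frames_per_second = 500
raw_rate = pixels_per_frame * bytes_per_pixel * frames_per_second
print(raw_rate / 1e6)               # 153.6 Mbyte/sec, far above a contemporary HDD write rate
```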
A Secure and Robust Object-Based Video Authentication System
NASA Astrophysics Data System (ADS)
He, Dajun; Sun, Qibin; Tian, Qi
2004-12-01
An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to those selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG4 compression, object segmentation errors, and some common object-based video processing such as object translation, rotation, and scaling while securely preventing malicious object modifications. The proposed solution can be further incorporated into public key infrastructure (PKI).
Robust 3D DFT video watermarking
NASA Astrophysics Data System (ADS)
Deguillaume, Frederic; Csurka, Gabriela; O'Ruanaidh, Joseph J.; Pun, Thierry
1999-04-01
This paper proposes a new approach for digital watermarking and secure copyright protection of videos, the principal aim being to discourage illicit copying and distribution of copyrighted material. The method presented here is based on the discrete Fourier transform (DFT) of three dimensional chunks of video scene, in contrast with previous works on video watermarking where each video frame was marked separately, or where only intra-frame or motion compensation parameters were marked in MPEG compressed videos. Two kinds of information are hidden in the video: a watermark and a template. Both are encoded using an owner key to ensure the system security and are embedded in the 3D DFT magnitude of video chunks. The watermark is a copyright information encoded in the form of a spread spectrum signal. The template is a key based grid and is used to detect and invert the effect of frame-rate changes, aspect-ratio modification and rescaling of frames. The template search and matching is performed in the log-log-log map of the 3D DFT magnitude. The performance of the presented technique is evaluated experimentally and compared with a frame-by-frame 2D DFT watermarking approach.
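A simplified sketch of marking one 3D chunk in the DFT magnitude, assuming a key-seeded ±1 spread-spectrum pattern and discarding the small imaginary residue left after modifying the magnitude; the chunk length, embedding strength, and the absence of the template grid are simplifications of the scheme above.

```python
# Sketch of spread-spectrum embedding in the 3D DFT magnitude of a video chunk.
# Assumptions: a key-seeded +/-1 pattern, additive embedding over the whole
# magnitude, and no template grid; the imaginary residue after reconstruction
# is simply discarded.
import numpy as np

def embed_chunk(chunk, key=1234, alpha=2.0):
    """chunk: (frames, height, width) array holding one chunk of a video scene."""
    spectrum = np.fft.fftn(chunk.astype(np.float64))
    magnitude, phase = np.abs(spectrum), np.angle(spectrum)
    rng = np.random.default_rng(key)
    mark = rng.choice([-1.0, 1.0], size=magnitude.shape)   # spread-spectrum signal
    marked_mag = magnitude + alpha * mark
    marked = np.fft.ifftn(marked_mag * np.exp(1j * phase))
    return np.real(marked)   # modifying the magnitude breaks exact symmetry; keep the real part
```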
An improvement analysis on video compression using file segmentation
NASA Astrophysics Data System (ADS)
Sharma, Shubhankar; Singh, K. John; Priya, M.
2017-11-01
Over the past two decades, the extreme evolution of the Internet has led to a massive rise in video technology and in video consumption over the Internet, which now constitutes the bulk of data traffic in general. Because video accounts for so much of the data on the World Wide Web, reducing the burden on the Internet and the bandwidth consumed by video allows users to access video data more easily. For this, many video codecs have been developed, such as HEVC/H.265 and V9, although comparing such codecs raises the question of which is the better technology in terms of rate distortion and coding standard. This paper offers a solution to the difficulty of achieving low delay in video compression and in video applications, e.g. ad-hoc video conferencing/streaming or surveillance. This paper also benchmarks the HEVC and V9 video compression techniques using subjective evaluations of High Definition video content played back in web browsers. Moreover, it presents the experimental idea of dividing a video file into several segments for compression and putting them back together, to improve the efficiency of video compression on the web as well as in offline mode.
Visually enhanced CCTV digital surveillance utilizing Intranet and Internet.
Ozaki, Nobuyuki
2002-07-01
This paper describes a solution for integrated plant supervision utilizing closed circuit television (CCTV) digital surveillance. Three basic requirements are first addressed as the platform of the system, with discussion on the suitable video compression. The system configuration is described in blocks. The system provides surveillance functionality: real-time monitoring, and process analysis functionality: a troubleshooting tool. This paper describes the formulation of practical performance design for determining various encoder parameters. It also introduces image processing techniques for enhancing the original CCTV digital image to lessen the burden on operators. Some screenshots are listed for the surveillance functionality. For the process analysis, an image searching filter supported by image processing techniques is explained with screenshots. Multimedia surveillance, which is the merger with process data surveillance, or the SCADA system, is also explained.
Detection of inter-frame forgeries in digital videos.
K, Sitara; Mehtre, B M
2018-05-26
Videos are acceptable as evidence in the court of law, provided their authenticity and integrity are scientifically validated. Videos recorded by surveillance systems are susceptible to malicious alterations of visual content by perpetrators locally or remotely. Such malicious alterations of video contents (called video forgeries) are categorized into inter-frame and intra-frame forgeries. In this paper, we propose inter-frame forgery detection techniques using tamper traces from the spatio-temporal and compressed domains. Pristine videos containing frames recorded during a sudden camera zooming event may be wrongly classified as tampered videos, leading to an increase in false positives. To address this issue, we propose a method for zooming detection and incorporate it in video tampering detection. Frame shuffling detection, which has not been explored so far, is also addressed in our work. Our method is capable of differentiating various inter-frame tamper events and of localizing them in the temporal domain. The proposed system is tested on 23,586 videos, of which 2346 are pristine and the rest are candidates of inter-frame forged videos. Experimental results show that we have successfully detected frame shuffling with encouraging accuracy rates. We have achieved improved accuracy on forgery detection in frame insertion, frame deletion and frame duplication. Copyright © 2018. Published by Elsevier B.V.
Method of transmission of dynamic multibit digital images from micro-unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Petrov, E. P.; Kharina, N. L.
2018-01-01
In connection with the successful use of nanotechnologies in remote sensing, great attention is paid to systems on micro-unmanned aerial vehicles (MUAVs) capable of providing high spatial resolution of dynamic multibit digital images (MDI). Limited energy resources on board the MUAV do not allow transferring a large amount of video information in the shortest possible time, which holds back the broad development of MUAVs. The search for methods to shorten the transmission time of dynamic MDIs from a MUAV over the radio channel leads to methods of MDI compression without computational operations onboard the MUAV. Known video compression codecs cannot be applied because of the limited energy resources. In this paper we propose a method for reducing the transmission time of dynamic MDIs without computational operations and distortions onboard the MUAV. To develop the method, the mathematical apparatus of the theory of conditional Markov processes with discrete arguments was used. On its basis, a mathematical model is constructed for the transformation of the MDI, represented by binary images (BI), into an MDI consisting of groups of neighboring BIs (GBI) transmitted by multiphase (MP) signals. An algorithm for multidimensional nonlinear filtering of MP signals is synthesized, exploiting the statistical redundancy of the MDI to compensate for the noise-immunity losses caused by the use of MP signals.
NASA Astrophysics Data System (ADS)
Papers are presented on ISDN, mobile radio systems and techniques for digital connectivity, centralized and distributed algorithms in computer networks, communications networks, quality assurance and impact on cost, adaptive filters in communications, the spread spectrum, signal processing, video communication techniques, and digital satellite services. Topics discussed include performance evaluation issues for integrated protocols, packet network operations, the computer network theory and multiple-access, microwave single sideband systems, switching architectures, fiber optic systems, wireless local communications, modulation, coding, and synchronization, remote switching, software quality, transmission, and expert systems in network operations. Consideration is given to wide area networks, image and speech processing, office communications application protocols, multimedia systems, customer-controlled network operations, digital radio systems, channel modeling and signal processing in digital communications, earth station/on-board modems, computer communications system performance evaluation, source encoding, compression, and quantization, and adaptive communications systems.
Digital media in the home: technical and research challenges
NASA Astrophysics Data System (ADS)
Ribas-Corbera, Jordi
2005-03-01
This article attempts to identify some of the technology and research challenges facing the digital media industry in the future. We first discuss several trends in the industry, such as the rapid growth of broadband Internet networks and the emergence of networking and media-capable devices in the home. Next, we present technical challenges that result from these trends, such as effective media interoperability in devices, and provide a brief overview of Windows Media, which is one of the technologies in the market attempting to address these challenges. Finally, given these trends and the state of the art, we argue that further research on data compression, encoder optimization, and multi-format transcoding can potentially make a significant technical and business impact in digital media. We also explore the reasons that research on related techniques such as wavelets or scalable video coding is having a relatively minor impact in today's practical digital media systems.
The use of digital images in pathology.
Furness, P N
1997-11-01
Digital images are routinely used by the publishing industry, but most diagnostic pathologists are unfamiliar with the technology and its possibilities. This review aims to explain the basic principles of digital image acquisition, storage, manipulation and use, and the possibilities provided not only in research, but also in teaching and in routine diagnostic pathology. Images of natural objects are usually expressed digitally as 'bitmaps'--rectilinear arrays of small dots. The size of each dot can vary, but so can its information content in terms, for example, of colour, greyscale or opacity. Various file formats and compression algorithms are available. Video cameras connected to microscopes are familiar to most pathologists; video images can be converted directly to a digital form by a suitably equipped computer. Digital cameras and scanners are alternative acquisition tools of relevance to pathologists. Once acquired, a digital image can easily be subjected to the digital equivalent of any conventional darkroom manipulation and modern software allows much more flexibility, to such an extent that a new tool for scientific fraud has been created. For research, image enhancement and analysis is an increasingly powerful and affordable tool. Morphometric measurements are, after many predictions, at last beginning to be part of the toolkit of the diagnostic pathologist. In teaching, the potential to create dramatic yet informative presentations is demonstrated daily by the publishing industry; such methods are readily applicable to the classroom. The combination of digital images and the Internet raises many possibilities; for example, instead of seeking one expert diagnostic opinion, one could simultaneously seek the opinion of many, all around the globe. It is inevitable that in the coming years the use of digital images will spread from the laboratory to the medical curriculum and to the whole of diagnostic pathology.
Digital video technology, today and tomorrow
NASA Astrophysics Data System (ADS)
Liberman, J.
1994-10-01
Digital video is probably computing's fastest moving technology today. Just three years ago, the zenith of digital video technology on the PC was the successful marriage of digital text and graphics with analog audio and video by means of expensive analog laser disc players and video overlay boards. The state of the art involves two different approaches to fully digital video on computers: hardware-assisted and software-only solutions.
Adaptive compressed sensing of multi-view videos based on the sparsity estimation
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-11-01
Conventional compressive sensing for videos is based on non-adaptive linear projections, and the number of measurements is usually set empirically. As a result, the quality of video reconstruction is often affected. First, block-based compressed sensing (BCS) with a conventional selection of compressive measurements is described. Then an estimation method for the sparsity of multi-view videos is proposed, based on the two-dimensional discrete wavelet transform (2D DWT). With an energy threshold given beforehand, the DWT coefficients are processed with energy normalization and sorted in descending order, and the sparsity of the multi-view video is obtained as the proportion of dominant coefficients. Finally, simulation results show that the method can estimate the sparsity of a video frame effectively and provides a practical basis for selecting the number of compressive observations. The results also show that, since the number of observations is selected based on the sparsity estimated with the given energy threshold, the proposed method can ensure the reconstruction quality of multi-view videos.
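A sketch of the sparsity estimate for a single frame, assuming a db4 wavelet and a 99% energy threshold (both illustrative): normalize the coefficient energies, sort them in descending order, and count how many are needed to reach the threshold.

```python
# Sparsity estimate from a 2D DWT: proportion of coefficients needed to capture
# a given fraction of the total energy. Wavelet, level, and threshold are
# illustrative assumptions.
import numpy as np
import pywt

def estimate_sparsity(frame, energy_threshold=0.99, wavelet='db4', level=3):
    coeffs = pywt.wavedec2(frame.astype(np.float64), wavelet, level=level)
    flat = np.concatenate([coeffs[0].ravel()] +
                          [band.ravel() for detail in coeffs[1:] for band in detail])
    energy = flat ** 2
    energy = energy / energy.sum()            # energy normalization
    sorted_energy = np.sort(energy)[::-1]     # descending order
    k = np.searchsorted(np.cumsum(sorted_energy), energy_threshold) + 1
    return k / flat.size                      # proportion of dominant coefficients
```

The number of block-based measurements could then be set in proportion to this estimate, which is the adaptive step argued for above.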
NASA Astrophysics Data System (ADS)
Al-Hayani, Nazar; Al-Jawad, Naseer; Jassim, Sabah A.
2014-05-01
Video compression and encryption have become essential for secure real-time video transmission. Applying both techniques simultaneously is a challenge when both the size and the quality are important in multimedia transmission. In this paper we propose a new technique for video compression and encryption. Both encryption and compression are based on edges extracted from the high-frequency sub-bands of a wavelet decomposition. The compression algorithm is based on a hybrid of discrete wavelet transforms, the discrete cosine transform, vector quantization, wavelet-based edge detection, and phase sensing. The compression encoding algorithm treats the video reference and non-reference frames in two different ways. The encryption algorithm utilizes an A5 cipher combined with a chaotic logistic map to encrypt the significant parameters and wavelet coefficients. Both algorithms can be applied simultaneously after applying the discrete wavelet transform on each individual frame. Experimental results show that the proposed algorithms have the following features: high compression, acceptable quality, and resistance to statistical and brute-force attacks with low computational processing.
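The chaotic component alone can be sketched as a logistic-map keystream XORed with the byte stream of significant coefficients; the A5 stage is omitted, and x0 and r below are illustrative key material rather than the paper's parameters.

```python
# Logistic-map keystream XOR over a coefficient byte stream. The A5 cipher
# stage is omitted; x0 and r are illustrative key material.
import numpy as np

def logistic_keystream(length, x0=0.3456, r=3.99):
    x, out = x0, np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)           # logistic map iteration
        out[i] = int(x * 256) & 0xFF    # quantize the chaotic state to a byte
    return out

def encrypt_coefficients(coeff_bytes, x0=0.3456, r=3.99):
    """XOR the bytes of the significant coefficients with the keystream."""
    data = np.frombuffer(coeff_bytes, dtype=np.uint8)
    return np.bitwise_xor(data, logistic_keystream(data.size, x0, r)).tobytes()
```

Because XOR is its own inverse, the same call with the same key material recovers the original coefficient bytes.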
Indexing, Browsing, and Searching of Digital Video.
ERIC Educational Resources Information Center
Smeaton, Alan F.
2004-01-01
Presents a literature review that covers the following topics related to indexing, browsing, and searching of digital video: video coding and standards; conventional approaches to accessing digital video; automatically structuring and indexing digital video; searching, browsing, and summarization; measurement and evaluation of the effectiveness of…
The integrated design and archive of space-borne signal processing and compression coding
NASA Astrophysics Data System (ADS)
He, Qiang-min; Su, Hao-hang; Wu, Wen-bo
2017-10-01
With the increasing demand of users for the extraction of remote sensing image information, it is very urgent to significantly enhance the whole system's imaging quality and imaging ability by using an integrated design to achieve a compact structure, light weight and higher attitude maneuverability. At the present stage, the remote sensing camera's video signal processing unit and image compression and coding unit are distributed in different devices. The volume, weight and power consumption of these two units are relatively large, which is unable to meet the requirements of the high-mobility remote sensing camera. According to the high-mobility remote sensing camera's technical requirements, this paper designs a kind of space-borne integrated signal processing and compression circuit by researching a variety of technologies, such as high-speed and high-density analog-digital mixed PCB design, embedded DSP technology and image compression technology based on special-purpose chips. This circuit lays a solid foundation for the research of the high-mobility remote sensing camera.
Real-time 3D video compression for tele-immersive environments
NASA Astrophysics Data System (ADS)
Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William
2006-01-01
Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore the 3D compression space from a different angle, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
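The first scheme lends itself to a few lines of code: reduce the color depth, pack the reduced color with the depth map, and deflate the result. The 4-bit color reduction below is an assumed setting, since the abstract does not fix the exact depth.

```python
# Sketch of the first scheme: color reduction plus zlib over the packed
# color-and-depth payload. The 4-bit reduction is an assumption.
import zlib
import numpy as np

def compress_3d_frame(color, depth, color_bits=4):
    """color: (H, W, 3) uint8; depth: (H, W) uint16 from the capture rig."""
    shift = 8 - color_bits
    reduced = (color >> shift).astype(np.uint8)       # color reduction
    payload = reduced.tobytes() + depth.tobytes()
    return zlib.compress(payload, level=6)

def compression_ratio(color, depth, compressed):
    return (color.nbytes + depth.nbytes) / len(compressed)
```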
Introduction to study and simulation of low rate video coding schemes
NASA Technical Reports Server (NTRS)
1992-01-01
During this period, the development of simulators for the various HDTV systems proposed to the FCC were developed. These simulators will be tested using test sequences from the MPEG committee. The results will be extrapolated to HDTV video sequences. Currently, the simulator for the compression aspects of the Advanced Digital Television (ADTV) was completed. Other HDTV proposals are at various stages of development. A brief overview of the ADTV system is given. Some coding results obtained using the simulator are discussed. These results are compared to those obtained using the CCITT H.261 standard. These results in the context of the CCSDS specifications are evaluated and some suggestions as to how the ADTV system could be implemented in the NASA network are made.
Improved Techniques for Video Compression and Communication
ERIC Educational Resources Information Center
Chen, Haoming
2016-01-01
Video compression and communication has been an important field over the past decades and critical for many applications, e.g., video on demand, video-conferencing, and remote education. In many applications, providing low-delay and error-resilient video transmission and increasing the coding efficiency are two major challenges. Low-delay and…
Lossless Compression of JPEG Coded Photo Collections.
Wu, Hao; Sun, Xiaoyan; Yang, Jingyu; Zeng, Wenjun; Wu, Feng
2016-04-06
The explosion of digital photos has posed a significant challenge to photo storage and transmission for both personal devices and cloud platforms. In this paper, we propose a novel lossless compression method to further reduce the size of a set of JPEG coded correlated images without any loss of information. The proposed method jointly removes inter/intra image redundancy in the feature, spatial, and frequency domains. For each collection, we first organize the images into a pseudo video by minimizing the global prediction cost in the feature domain. We then present a hybrid disparity compensation method to better exploit both the global and local correlations among the images in the spatial domain. Furthermore, the redundancy between each compensated signal and the corresponding target image is adaptively reduced in the frequency domain. Experimental results demonstrate the effectiveness of the proposed lossless compression method. Compared to the JPEG coded image collections, our method achieves average bit savings of more than 31%.
Energy Efficient Image/Video Data Transmission on Commercial Multi-Core Processors
Lee, Sungju; Kim, Heegon; Chung, Yongwha; Park, Daihee
2012-01-01
In transmitting image/video data over Video Sensor Networks (VSNs), energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining the image/video quality. We improve the compression efficiency at the algorithmic level or derive the optimal parameters for the combination of a machine and compression based on the tradeoff between the energy consumption and the image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2∼5 without compromising image/video quality. PMID:23202181
An Attention-Information-Based Spatial Adaptation Framework for Browsing Videos via Mobile Devices
NASA Astrophysics Data System (ADS)
Li, Houqiang; Wang, Yi; Chen, Chang Wen
2007-12-01
With the growing popularity of personal digital assistant devices and smart phones, more and more consumers are becoming quite enthusiastic to appreciate videos via mobile devices. However, limited display size of the mobile devices has been imposing significant barriers for users to enjoy browsing high-resolution videos. In this paper, we present an attention-information-based spatial adaptation framework to address this problem. The whole framework includes two major parts: video content generation and video adaptation system. During video compression, the attention information in video sequences will be detected using an attention model and embedded into bitstreams with proposed supplement-enhanced information (SEI) structure. Furthermore, we also develop an innovative scheme to adaptively adjust quantization parameters in order to simultaneously improve the quality of overall encoding and the quality of transcoding the attention areas. When the high-resolution bitstream is transmitted to mobile users, a fast transcoding algorithm we developed earlier will be applied to generate a new bitstream for attention areas in frames. The new low-resolution bitstream containing mostly attention information, instead of the high-resolution one, will be sent to users for display on the mobile devices. Experimental results show that the proposed spatial adaptation scheme is able to improve both subjective and objective video qualities.
An ROI multi-resolution compression method for 3D-HEVC
NASA Astrophysics Data System (ADS)
Ti, Chunli; Guan, Yudong; Xu, Guodong; Teng, Yidan; Miao, Xinyuan
2017-09-01
3D High Efficiency Video Coding (3D-HEVC) provides significant potential for increasing the compression ratio of multi-view RGB-D videos. However, the bit rate still rises dramatically with the improvement of the video resolution, which will bring challenges to the transmission network, especially the mobile network. This paper proposes an ROI multi-resolution compression method for 3D-HEVC to better preserve the information in the ROI under limited bandwidth. This is realized primarily through ROI extraction and by compressing multi-resolution preprocessed video as alternative data according to the network conditions. At first, the semantic contours are detected by the modified structured forests to restrain the color textures inside objects. The ROI is then determined utilizing the contour neighborhood along with the face region and foreground area of the scene. Secondly, the RGB-D videos are divided into slices and compressed via 3D-HEVC under different resolutions for selection by the audiences and applications. Afterwards, the reconstructed low-resolution videos from the 3D-HEVC encoder are directly up-sampled via Laplace transformation and used to replace the non-ROI areas of the high-resolution videos. Finally, the ROI multi-resolution compressed slices are obtained by compressing the ROI preprocessed videos with 3D-HEVC. The temporal and spatial details of the non-ROI areas are reduced in the low-resolution videos, so the ROI will be better preserved by the encoder automatically. Experiments indicate that the proposed method can keep the key high-frequency information with subjective significance while the bit rate is reduced.
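The non-ROI replacement step can be imitated with plain array operations, assuming nearest-neighbour resampling and a factor-of-4 subsampling in place of the 3D-HEVC reconstructions and Laplace-based upsampling used above.

```python
# Sketch of the non-ROI replacement: downsample the whole frame, upsample it
# back, and keep full-resolution pixels only inside the ROI mask. Factor and
# nearest-neighbour resampling are illustrative assumptions.
import numpy as np

def roi_multires_frame(frame, roi_mask, factor=4):
    """frame: (H, W) or (H, W, C) array; roi_mask: boolean (H, W) array."""
    low = frame[::factor, ::factor]                            # crude downsample
    up = np.repeat(np.repeat(low, factor, axis=0), factor, axis=1)
    up = up[:frame.shape[0], :frame.shape[1]]                  # trim to original size
    mask = roi_mask if frame.ndim == 2 else roi_mask[..., None]
    return np.where(mask, frame, up)                           # ROI keeps full detail
```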
Architecture design of motion estimation for ITU-T H.263
NASA Astrophysics Data System (ADS)
Ku, Chung-Wei; Lin, Gong-Sheng; Chen, Liang-Gee; Lee, Yung-Ping
1997-01-01
Digitized video and audio systems have become the trend of progress in multimedia, because they provide great performance in quality and feasibility of processing. However, as a huge amount of information is needed while the bandwidth is limited, data compression plays an important role in the system. Say, for a 176 x 144 monochrome sequence with a 10 frames/sec frame rate, the bandwidth is about 2 Mbps. This wastes much channel resource and limits the applications. MPEG (Moving Picture Experts Group) standardizes the video codec scheme, and it achieves a high compression ratio while providing good quality. MPEG-1 is used for a frame size of about 352 x 240 at 30 frames per second, and MPEG-2 provides scalability and can be applied to scenes with higher definition, say HDTV (high definition television). On the other hand, some applications are concerned with very low bit-rates, such as videophone and video-conferencing. Because the channel bandwidth is much more limited in the telephone network, a very high compression ratio is required. ITU-T announced the H.263 video coding standard to meet the above requirements [8]. According to the simulation results of TMN-5 [22], it outperforms H.261 with little overhead of complexity. Since wireless communication is the trend in the near future, low-power design of the video codec is an important issue for portable visual telephones. Motion estimation is the most computation-consuming part of the whole video codec: about 60% of the computation is spent on this part in the encoder. Several architectures were proposed for efficient processing of block matching algorithms. In this paper, in order to meet the requirements of H.263 and the expectation of low power consumption, a modified version of the sandwich architecture in [21] is proposed. Based on the parallel processing philosophy, low power is expected, and the generation of either one motion vector or four motion vectors with half-pixel accuracy is achieved concurrently. In addition, we present our solution for handling the other additional modes in H.263 with the proposed architecture.
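For reference, the baseline that such architectures accelerate is full-search block matching with a sum-of-absolute-differences (SAD) criterion; block size 16 and a ±7 search range below are common textbook values used only for illustration.

```python
# Full-search block matching with a SAD criterion, the computational kernel
# that dedicated motion-estimation architectures parallelize.
import numpy as np

def motion_vector(cur_block, ref_frame, top, left, search=7):
    """Best (dy, dx) displacement of a square block within +/-search pixels."""
    best, best_mv = np.inf, (0, 0)
    n = cur_block.shape[0]
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + n > ref_frame.shape[1]:
                continue   # candidate block falls outside the reference frame
            sad = np.abs(cur_block.astype(np.int32) -
                         ref_frame[y:y+n, x:x+n].astype(np.int32)).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best
```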
Feasibility of video codec algorithms for software-only playback
NASA Astrophysics Data System (ADS)
Rodriguez, Arturo A.; Morse, Ken
1994-05-01
Software-only video codecs can provide good playback performance in desktop computers with a 486 or 68040 CPU running at 33 MHz without special hardware assistance. Typically, playback of compressed video can be categorized into three tasks: the actual decoding of the video stream, color conversion, and the transfer of decoded video data from system RAM to video RAM. By current standards, good playback performance is the decoding and display of video streams of 320 by 240 (or larger) compressed frames at 15 (or greater) frames per second. Software-only video codecs have evolved by modifying and tailoring existing compression methodologies to suit video playback in desktop computers. In this paper we examine the characteristics used to evaluate software-only video codec algorithms, namely: image fidelity (i.e., image quality), bandwidth (i.e., compression), ease of decoding (i.e., playback performance), memory consumption, compression-to-decompression asymmetry, scalability, and delay. We discuss the tradeoffs among these variables and the compromises that can be made to achieve low numerical complexity for software-only playback. Frame-differencing approaches are described, since software-only video codecs typically employ them to enhance playback performance. To complement other papers that appear in this session of the Proceedings, we review methods derived from binary pattern image coding, since these methods are amenable to software-only playback. In particular, we introduce a novel approach called pixel distribution image coding.
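A minimal illustration of the frame-differencing idea mentioned above, assuming NumPy and grayscale frames: only blocks whose content changed beyond a threshold are flagged for re-encoding or redisplay. The block size and threshold are arbitrary placeholders, not values from the paper.

```python
import numpy as np

def changed_blocks(prev, cur, block=8, thresh=6.0):
    """Frame differencing: return the (row, col) indices of blocks whose
    mean absolute change exceeds a threshold, i.e. the only blocks a
    software codec would re-encode/redisplay for this frame."""
    h, w = cur.shape
    out = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            d = np.abs(cur[by:by + block, bx:bx + block].astype(np.int16)
                       - prev[by:by + block, bx:bx + block].astype(np.int16))
            if d.mean() > thresh:
                out.append((by // block, bx // block))
    return out
```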
Countermeasures for unintentional and intentional video watermarking attacks
NASA Astrophysics Data System (ADS)
Deguillaume, Frederic; Csurka, Gabriela; Pun, Thierry
2000-05-01
In recent years, the rapidly growing digital multimedia market has revealed an urgent need for effective copyright protection mechanisms. Digital audio, image and video watermarking has therefore become a very active area of research as a solution to this problem. Many important issues have been pointed out, one of them being robustness to non-intentional and intentional attacks. This paper studies some attacks and proposes countermeasures applied to videos. General attacks include lossy copying/transcoding such as MPEG compression and digital/analog (D/A) conversion, changes of frame rate, changes of display format, and geometrical distortions. More specific attacks are sequence edition and statistical attacks such as averaging or collusion. The averaging attack consists of locally averaging consecutive frames to cancel the watermark; this attack works well against schemes that embed random independent marks into frames. In the collusion attack the watermark is estimated from single frames (based on image denoising) and averaged over different scenes for better accuracy; the estimated watermark is then subtracted from each frame. Collusion requires that the same mark be embedded into all frames. The proposed countermeasures first ensure robustness to general attacks by spread spectrum encoding in the frequency domain and by the use of an additional template. Secondly, a Bayesian criterion, evaluating the probability of a correctly decoded watermark, is used for rejection of outliers and to implement an algorithm against statistical attacks. The idea is to embed randomly chosen marks, drawn from a finite set of marks, into subsequences of the video that are long enough to resist averaging attacks but short enough to avoid collusion attacks. The Bayesian criterion is needed to select the correct mark at the decoding step. Finally, the paper presents experimental results showing the robustness of the proposed method.
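The temporal averaging attack is easy to sketch. The snippet below, which assumes NumPy and a list of equally sized grayscale frames, replaces each frame with the mean of its temporal neighborhood, which is exactly what defeats schemes embedding independent random marks per frame. The window radius is an arbitrary choice.

```python
import numpy as np

def temporal_averaging_attack(frames, radius=2):
    """Averaging attack: each frame is replaced by the local temporal mean
    of its neighbours, attenuating watermarks embedded as independent
    random marks per frame.  frames: list/array of HxW grayscale frames."""
    n = len(frames)
    stack = np.stack(frames).astype(np.float32)
    attacked = []
    for t in range(n):
        lo, hi = max(0, t - radius), min(n, t + radius + 1)
        attacked.append(stack[lo:hi].mean(axis=0))
    return np.stack(attacked)
```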
Compressed-domain video indexing techniques using DCT and motion vector information in MPEG video
NASA Astrophysics Data System (ADS)
Kobla, Vikrant; Doermann, David S.; Lin, King-Ip; Faloutsos, Christos
1997-01-01
Development of various multimedia applications hinges on the availability of fast and efficient storage, browsing, indexing, and retrieval techniques. Given that video is typically stored efficiently in a compressed format, if we can analyze the compressed representation directly, we can avoid the costly overhead of decompressing and operating at the pixel level. Compressed-domain parsing of video has been presented in earlier work, where a video clip is divided into shots, subshots, and scenes. In this paper, we describe key frame selection, feature extraction, and indexing and retrieval techniques that are directly applicable to MPEG compressed video. We develop a frame-type independent representation of the various types of frames present in an MPEG video, in which all frames can be considered equivalent. Features are derived from the available DCT, macroblock, and motion vector information and mapped to a low-dimensional space where they can be accessed with standard database techniques. The spatial information is used as the primary index, while the temporal information is used to enhance the robustness of the system during the retrieval process. The techniques presented enable fast archiving, indexing, and retrieval of video. Our operational prototype typically takes a fraction of a second to retrieve similar video scenes from our database, with over 95% success.
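One concrete example of a compressed-domain feature is the DC coefficient of each 8x8 DCT block, which is proportional to the block mean. The sketch below (NumPy assumed) recomputes such a "DC image" from pixels purely for illustration; an actual system would read the coefficients from the MPEG bitstream without decoding to pixels.

```python
import numpy as np

def dc_image(frame, block=8):
    """Build a reduced 'DC image' whose pixels are the 8x8 block means,
    mimicking the DC coefficients available directly in an MPEG stream
    (the DC term of an 8x8 DCT is proportional to the block mean)."""
    h, w = frame.shape
    h8, w8 = h // block, w // block
    f = frame[:h8 * block, :w8 * block].astype(np.float32)
    # Reshape into (rows, block, cols, block) and average each block.
    return f.reshape(h8, block, w8, block).mean(axis=(1, 3))
```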
Temporal compressive imaging for video
NASA Astrophysics Data System (ADS)
Zhou, Qun; Zhang, Linxia; Ke, Jun
2018-01-01
In many situations, imagers are required to have higher imaging speed, such as in gunpowder blast analysis and the observation of high-speed biological phenomena. However, measuring high-speed video is a challenge for camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
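The TCI measurement model is compact enough to write down: a single coded snapshot is the mask-weighted sum of T high-speed frames (T = 8 above), and reconstruction amounts to inverting this linear operator. The sketch below shows only the forward measurement; TwIST and GMM reconstruction are not reproduced.

```python
import numpy as np

def tci_measure(frames, masks):
    """Temporal compressive imaging forward model: one compressive snapshot
    y = sum_t M_t * x_t, where x_t are T high-speed frames and M_t are the
    per-frame binary coded masks."""
    frames = np.asarray(frames, dtype=np.float32)   # shape (T, H, W)
    masks = np.asarray(masks, dtype=np.float32)     # shape (T, H, W), 0/1
    return (frames * masks).sum(axis=0)             # shape (H, W)

# Example: 8 random binary masks over a 256x256 scene.
T, H, W = 8, 256, 256
x = np.random.rand(T, H, W).astype(np.float32)
M = (np.random.rand(T, H, W) > 0.5).astype(np.float32)
y = tci_measure(x, M)
```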
NASA Astrophysics Data System (ADS)
Hartman, Richard V.
1987-02-01
Advances in sophisticated algorithms and parallel VLSI processing have resulted in the capability for near real-time transmission of television pictures (optical and FLIR) via existing telephone lines, tactical radios, and military satellite channels. Concepts have been field demonstrated with production-ready engineering development models using transform compression techniques. Preliminary design has been completed for packaging an existing command post version into a 20 pound 1/2 ATR enclosure for use on jeeps, backpacks, RPVs, helicopters, and reconnaissance aircraft. The system will also have a built-in error correction code (ECC) unit, allowing operation via communications media exhibiting a bit error rate of 1 X 10- or better. In the past several years, two nearly simultaneous developments show promise of allowing the breakthrough needed to give the operational commander a practical means for obtaining pictorial information from the battlefield. And he can obtain this information in near real time using available communications channels, his long sought after pictorial force multiplier. The two developments are: • high-speed digital integrated circuitry that is affordable, and • an understanding of the practical applications of information theory. High-speed digital integrated circuits allow an analog television picture to be nearly instantaneously converted to a digital serial bit stream so that it can be transmitted as rapidly or slowly as desired, depending on the available transmission channel bandwidth. Perhaps more importantly, digitizing the picture allows it to be stored and processed in a number of ways. Most typically, processing is performed to reduce the amount of data that must be transmitted, while still maintaining maximum picture quality. Reducing the amount of data that must be transmitted is important since it allows a narrower bandwidth in the scarce frequency spectrum to be used for transmission of pictures, or, if only a narrow bandwidth is available, it takes less time for the picture to be transmitted. This process of reducing the amount of data that must be transmitted to represent a picture is called compression, truncation, or, most typically, video compression. Keep in mind that the pictures you see on your home TV are nothing more than a series of still pictures displayed at a rate of 30 frames per second. If you grabbed one of those frames, digitized it, stored it in memory, and then transmitted it at the most rapid rate the bandwidth of your communications channel would allow, you would be using the so-called slow scan techniques.
Digital Video Over Space Systems and Networks
NASA Technical Reports Server (NTRS)
Grubbs, Rodney
2010-01-01
This slide presentation reviews the improvements and challenges that digital video provides over analog video. The use of digital video over IP options and trade offs, link integrity and latency are reviewed.
Code of Federal Regulations, 2012 CFR
2012-10-01
... transmissions, and video transmissions in the GSO Fixed-Satellite Service. 25.212 Section 25.212... Technical Standards § 25.212 Narrowband analog transmissions, digital transmissions, and video transmissions... narrowband and/or wideband digital services, including digital video services, if the maximum input spectral...
Code of Federal Regulations, 2010 CFR
2010-10-01
... transmissions, and video transmissions in the GSO Fixed-Satellite Service. 25.212 Section 25.212... Technical Standards § 25.212 Narrowband analog transmissions, digital transmissions, and video transmissions... narrowband and/or wideband digital services, including digital video services, if the maximum input spectral...
Code of Federal Regulations, 2011 CFR
2011-10-01
... transmissions, and video transmissions in the GSO Fixed-Satellite Service. 25.212 Section 25.212... Technical Standards § 25.212 Narrowband analog transmissions, digital transmissions, and video transmissions... narrowband and/or wideband digital services, including digital video services, if the maximum input spectral...
Image and Video Compression with VLSI Neural Networks
NASA Technical Reports Server (NTRS)
Fang, W.; Sheu, B.
1993-01-01
An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed, high-ratio image compression, based upon a self-organizing network, is compared with the conventional algorithm for vector quantization. The proposed method is quite efficient and can achieve near-optimal results.
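As a point of reference for the comparison above, conventional vector quantization reduces to a nearest-codeword search per block. A minimal NumPy sketch of that baseline follows; the self-organizing network and its VLSI mapping are outside its scope.

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Plain vector quantization: map each flattened image block to the
    index of its nearest codeword (Euclidean distance).
    blocks: (N, D) array, codebook: (K, D) array of codewords."""
    d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct blocks by looking up the selected codewords."""
    return codebook[indices]
```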
Fast and predictable video compression in software design and implementation of an H.261 codec
NASA Astrophysics Data System (ADS)
Geske, Dagmar; Hess, Robert
1998-09-01
The use of software codecs for video compression has become commonplace in several videoconferencing applications. In order to reduce conflicts with other applications used at the same time, mechanisms for resource reservation on end systems need to determine an upper bound on the computing time used by the codec. This leads to the demand for predictable execution times of compression/decompression. Since compression schemes such as H.261 inherently depend on the motion contained in the video, an adaptive admission control is required. This paper presents a data-driven approach based on dynamically reducing the number of processed macroblocks in peak situations. Besides this, absolute speed is a point of interest. The question of whether and how software compression of high-quality video is feasible on today's desktop computers is examined.
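The admission-control idea (dropping macroblocks when the frame's computing budget is about to be exceeded) can be sketched as below. The code is a hypothetical software illustration, not the paper's scheme: encode_mb, the ordering of macroblocks, and the budget value are all placeholders.

```python
import time

def encode_frame_within_budget(macroblocks, encode_mb, budget_s):
    """Data-driven load control sketch: encode macroblocks (assumed to be
    pre-sorted so the most important come first) until the per-frame time
    budget is exhausted; remaining blocks are skipped and simply repeated
    from the previous frame.  encode_mb is a hypothetical callable doing
    the per-macroblock compression."""
    deadline = time.monotonic() + budget_s
    coded, skipped = [], []
    for mb in macroblocks:
        if time.monotonic() >= deadline:
            skipped.append(mb)        # peak situation: drop this block
        else:
            coded.append(encode_mb(mb))
    return coded, skipped
```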
Modeling of video compression effects on target acquisition performance
NASA Astrophysics Data System (ADS)
Cha, Jae H.; Preece, Bradley; Espinola, Richard L.
2009-05-01
The effect of video compression on image quality was investigated from the perspective of target acquisition performance modeling. Human perception tests were conducted recently at the U.S. Army RDECOM CERDEC NVESD, measuring identification (ID) performance on simulated military vehicle targets at various ranges. These videos were compressed with different quality and/or quantization levels utilizing motion JPEG, motion JPEG2000, and MPEG-4 encoding. To model the degradation in task performance, the loss in image quality is fit to an equivalent Gaussian MTF scaled by the Structural Similarity Image Metric (SSIM). Residual compression artifacts are treated as 3-D spatio-temporal noise. This 3-D noise is found by taking the difference between the uncompressed frame, with the estimated equivalent blur applied, and the corresponding compressed frame. Results show good agreement between the experimental data and the model prediction. This method has led to a predictive performance model for video compression by correlating various compression levels to particular blur and noise input parameters for the NVESD target acquisition performance model suite.
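Assuming SciPy and scikit-image, the two degradation components can be approximated as follows: the equivalent Gaussian blur is chosen so that the blurred uncompressed frame best matches the compressed frame, and the residual spatio-temporal noise is the difference between that blurred frame and the compressed frame. The SSIM-based blur selection here is a simple stand-in for the paper's SSIM-scaled MTF fit, not its exact procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity as ssim

def equivalent_blur_and_noise(uncompressed, compressed,
                              sigmas=np.linspace(0.1, 3.0, 30)):
    """Split compression degradation into (i) an equivalent Gaussian blur,
    chosen as the sigma whose blurred uncompressed frame best matches the
    compressed frame in SSIM, and (ii) residual noise, the difference
    between that blurred frame and the compressed frame."""
    comp = compressed.astype(np.float32)
    data_range = float(comp.max() - comp.min())
    best_sigma, best_score = None, -1.0
    for s in sigmas:
        blurred = gaussian_filter(uncompressed.astype(np.float32), s)
        score = ssim(blurred, comp, data_range=data_range)
        if score > best_score:
            best_sigma, best_score = s, score
    blurred = gaussian_filter(uncompressed.astype(np.float32), best_sigma)
    noise = blurred - comp   # residual treated as 3-D noise across frames
    return best_sigma, noise
```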
Effects of video compression on target acquisition performance
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Cha, Jae; Preece, Bradley
2008-04-01
The bandwidth requirements of modern target acquisition systems continue to increase with larger sensor formats and multi-spectral capabilities. To obviate this problem, still and moving imagery can be compressed, often resulting in greater than 100 fold decrease in required bandwidth. Compression, however, is generally not error-free and the generated artifacts can adversely affect task performance. The U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate recently performed an assessment of various compression techniques on static imagery for tank identification. In this paper, we expand this initial assessment by studying and quantifying the effect of various video compression algorithms and their impact on tank identification performance. We perform a series of controlled human perception tests using three dynamic simulated scenarios: target moving/sensor static, target static/sensor static, sensor tracking the target. Results of this study will quantify the effect of video compression on target identification and provide a framework to evaluate video compression on future sensor systems.
ERIC Educational Resources Information Center
Mizell, Al P.; And Others
Distance learning involves students and faculty engaged in interactive instructional settings when they are at different locations. Compressed video is the live transmission of two-way auditory and visual signals at the same time between sites at different locations. The use of compressed video has expanded in recent years, ranging from use by the…
Image compression system and method having optimized quantization tables
NASA Technical Reports Server (NTRS)
Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)
1998-01-01
A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for performing discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.
New Integrated Video and Graphics Technology: Digital Video Interactive.
ERIC Educational Resources Information Center
Optical Information Systems, 1987
1987-01-01
Describes digital video interactive (DVI), a new technology which combines the interactivity of the graphics capabilities in personal computers with the realism of high-quality motion video and multitrack audio in an all-digital integrated system. (MES)
ERIC Educational Resources Information Center
Liu, Rong; Unger, John A.; Scullion, Vicki A.
2014-01-01
Drawing data from an action-oriented research project for integrating digital video cameras into the reading process in pre-college courses, this study proposes using digital video cameras in reading summaries and responses to promote critical thinking and to teach social justice concepts. The digital video research project is founded on…
Analytical Tools for Cloudscope Ice Measurement
NASA Technical Reports Server (NTRS)
Arnott, W. Patrick
1998-01-01
The cloudscope is a ground or aircraft instrument for viewing ice crystals impacted on a sapphire window. It is essentially a simple optical microscope with an attached compact CCD video camera whose output is recorded on a Hi-8 mm video cassette recorder equipped with digital time and date recording capability. In aircraft operation the window is at a stagnation point of the flow, so adiabatic compression heats the window to sublimate the ice crystals so that later impacting crystals can be imaged as well. A film heater is used for ground-based operation to provide sublimation, and it can also be used to provide extra heat for aircraft operation. The compact video camera can be focused manually by the operator, and a beam splitter and miniature bulb combination provides illumination for night operation. Several shutter speeds are available to accommodate daytime illumination conditions with direct sunlight. The video images can be directly used to qualitatively assess the crystal content of cirrus clouds and contrails. Quantitative size spectra are obtained with the tools described in this report. Selected portions of the video images are digitized using a PCI bus frame grabber to form a short movie segment or stack using NIH (National Institutes of Health) Image software with custom macros developed at DRI. The stack can be Fourier transform filtered with custom, easy-to-design filters to reduce most objectionable video artifacts. Particle quantification of each slice of the stack is performed using digital image analysis. Data recorded for each particle include particle number and centroid, frame number in the stack, particle area, perimeter, equivalent ellipse maximum and minimum radii, ellipse angle, and pixel number. Each valid particle in the stack is stamped with a unique number. This output can be used to obtain a semiquantitative appreciation of the crystal content. The particle information becomes the raw input for a subsequent program (FORTRAN) that synthesizes each slice and separates the new from the sublimating particles. The new particle information is used to generate quantitative particle concentration, area, and mass size spectra along with total concentration, solar extinction coefficient, and ice water content. This program directly creates output in html format for viewing with a web browser.
Data compression for full motion video transmission
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Sayood, Khalid
1991-01-01
Clearly, transmission of visual information will be a major, if not dominant, factor in determining the requirements for, and assessing the performance of, the Space Exploration Initiative (SEI) communications systems. Projected image/video requirements currently anticipated for SEI mission scenarios are presented. Based on this information and projected link performance figures, the image/video data compression requirements which would allow link closure are identified. Finally, several approaches which could satisfy some of the compression requirements are presented, and possible future approaches which show promise for more substantial compression performance improvement are discussed.
Application discussion of source coding standard in voyage data recorder
NASA Astrophysics Data System (ADS)
Zong, Yonggang; Zhao, Xiandong
2018-04-01
This paper analyzes the disadvantages of the audio and video compression coding technology used by the Voyage Data Recorder, taking into account the improved performance of current audio and video acquisition equipment. An approach to improving the audio and video compression coding technology of the voyage data recorder is proposed, and the feasibility of adopting the new compression coding technology is analyzed from both economic and technical aspects.
Quality evaluation of motion-compensated edge artifacts in compressed video.
Leontaris, Athanasios; Cosman, Pamela C; Reibman, Amy R
2007-04-01
Little attention has been paid to an impairment common in motion-compensated video compression: the addition of high-frequency (HF) energy as motion compensation displaces blocking artifacts off block boundaries. In this paper, we employ an energy-based approach to measure this motion-compensated edge artifact, using both compressed bitstream information and decoded pixels. We evaluate the performance of our proposed metric, along with several blocking and blurring metrics, on compressed video in two ways. First, ordinal scales are evaluated through a series of expectations that a good quality metric should satisfy: the objective evaluation. Then, the best performing metrics are subjectively evaluated. The same subjective data set is finally used to obtain interval scales to gain more insight. Experimental results show that we accurately estimate the percentage of the added HF energy in compressed video.
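A rough pixel-domain analogue of the measured quantity is sketched below: gradient energy located away from the 8x8 block grid, compared between compressed and reference frames. The published metric also exploits bitstream information and a more careful energy model; this stand-in only conveys the idea of high-frequency energy displaced off block boundaries.

```python
import numpy as np

def off_boundary_hf_energy(reference, compressed, block=8):
    """Measure the horizontal-gradient energy located away from the 8x8
    block grid in the compressed frame, minus the same quantity in the
    reference.  A positive value suggests high-frequency energy added off
    block boundaries (e.g. blocking edges displaced by motion compensation)."""
    def energy(img):
        g = np.abs(np.diff(img.astype(np.float32), axis=1))  # |I(x+1) - I(x)|
        cols = np.arange(g.shape[1])
        off_grid = (cols % block) != (block - 1)  # exclude boundary transitions
        return float((g[:, off_grid] ** 2).mean())
    return energy(compressed) - energy(reference)
```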
Video requirements for materials processing experiments in the space station US laboratory
NASA Technical Reports Server (NTRS)
Baugher, Charles R.
1989-01-01
Full utilization of the potential of materials research on the Space Station can be achieved only if adequate means are available for interactive experimentation between the science facilities and ground-based investigators. Extensive video interfaces linking these elements are the only practical alternative for establishing a viable relation. Because of the limited downlink capability, a comprehensive complement of on-board video processing and video compression is needed. The application of video compression will be an absolute necessity, since its effectiveness will directly impact the quantity of data available to ground investigator teams and their ability to review the effects of process changes and the experiment progress. Video data compression utilization on the Space Station is discussed.
Creating Digital Video in Your School
ERIC Educational Resources Information Center
Bell, Ann
2005-01-01
Creating digital videos provides students with practice in critical 21st century communication skills, as the video production involves critical thinking, general observation, and analysis and perspective-making skills. Producing video helps students appreciate literature and other expressions of information and students creating digital video…
Distributing digital video to multiple computers
Murray, James A.
2004-01-01
Video is an effective teaching tool, and live video microscopy is especially helpful in teaching dissection techniques and the anatomy of small neural structures. Digital video equipment is more affordable now and allows easy conversion from older analog video devices. I here describe a simple technique for bringing digital video from one camera to all of the computers in a single room. This technique allows students to view and record the video from a single camera on a microscope. PMID:23493464
Automatic attention-based prioritization of unconstrained video for compression
NASA Astrophysics Data System (ADS)
Itti, Laurent
2004-06-01
We apply a biologically-motivated algorithm that selects visually-salient regions of interest in video streams to multiply-foveated video compression. Regions of high encoding priority are selected based on nonlinear integration of low-level visual cues, mimicking processing in primate occipital and posterior parietal cortex. A dynamic foveation filter then blurs (foveates) every frame, increasingly with distance from high-priority regions. Two variants of the model (one with continuously-variable blur proportional to saliency at every pixel, and the other with blur proportional to distance from three independent foveation centers) are validated against eye fixations from 4-6 human observers on 50 video clips (synthetic stimuli, video games, outdoor day and night home video, television newscasts, sports, talk shows, etc.). Significant overlap is found between human and algorithmic foveations on every clip with one variant, and on 48 out of 50 clips with the other. Substantial compressed file size reductions, by a factor of about 0.5 on average, are obtained for foveated compared to unfoveated clips. These results suggest a general-purpose usefulness of the algorithm in improving compression ratios of unconstrained video.
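The second variant (blur growing with distance from a few foveation centers) can be approximated very simply, as in the sketch below: a per-pixel blend between the original frame and one strongly blurred copy, weighted by distance to the nearest center. The actual filter uses a proper multi-resolution scheme; the parameters here are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(frame, centers, max_sigma=6.0, radius=80.0):
    """Crude foveation filter: blur increases with distance from the
    nearest foveation center, implemented as a blend between the original
    grayscale frame and a strongly blurred copy."""
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.full((h, w), np.inf, dtype=np.float32)
    for cy, cx in centers:
        dist = np.minimum(dist, np.hypot(yy - cy, xx - cx))
    weight = np.clip(dist / radius, 0.0, 1.0)   # 0 at a center, 1 far away
    blurred = gaussian_filter(frame.astype(np.float32), max_sigma)
    return (1.0 - weight) * frame + weight * blurred
```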
NASA Astrophysics Data System (ADS)
Unaldi, Numan; Asari, Vijayan K.; Rahman, Zia-ur
2009-05-01
Recently we proposed a wavelet-based dynamic range compression algorithm to improve the visual quality of digital images captured from high dynamic range scenes with non-uniform lighting conditions. The fast image enhancement algorithm that provides dynamic range compression, while preserving the local contrast and tonal rendition, is also a good candidate for real time video processing applications. Although the colors of the enhanced images produced by the proposed algorithm are consistent with the colors of the original image, the proposed algorithm fails to produce color constant results for some "pathological" scenes that have very strong spectral characteristics in a single band. The linear color restoration process is the main reason for this drawback. Hence, a different approach is required for the final color restoration process. In this paper the latest version of the proposed algorithm, which deals with this issue is presented. The results obtained by applying the algorithm to numerous natural images show strong robustness and high image quality.
Discontinuity minimization for omnidirectional video projections
NASA Astrophysics Data System (ADS)
Alshina, Elena; Zakharchenko, Vladyslav
2017-09-01
Advances in display technologies, both for head-mounted devices and television panels, demand a resolution increase beyond 4K for the source signal in virtual reality video streaming applications. This poses a problem of content delivery through bandwidth-limited distribution networks. Considering the fact that the source signal covers the entire surrounding space, our investigation revealed that compression efficiency may fluctuate by 40% on average depending on the origin selected at the conversion stage from 3D space to a 2D projection. Based on this knowledge, an origin selection algorithm for video compression applications has been proposed. Using a discontinuity entropy minimization function, the projection origin rotation may be defined to provide optimal compression results. The outcome of this research may be applied across various video compression solutions for omnidirectional content.
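For an equirectangular projection, a yaw change of origin is just a circular horizontal shift, so the search described above can be pictured as follows. The sketch scores each candidate yaw by the pixel discontinuity across the left/right seam rather than by the paper's discontinuity entropy, and searches yaw only, so it illustrates the mechanism rather than the proposed algorithm.

```python
import numpy as np

def select_yaw_origin(erp_frame, candidates_deg=range(0, 360, 10)):
    """Toy origin selection for an equirectangular (ERP) frame: a yaw
    rotation is a circular horizontal shift, and each candidate is scored
    by the squared pixel discontinuity across the seam it creates."""
    h, w = erp_frame.shape[:2]
    best_deg, best_cost = None, None
    for deg in candidates_deg:
        shift = int(round(deg / 360.0 * w))
        rotated = np.roll(erp_frame, shift, axis=1)
        seam = np.abs(rotated[:, 0].astype(np.float32)
                      - rotated[:, -1].astype(np.float32))
        cost = float((seam ** 2).mean())
        if best_cost is None or cost < best_cost:
            best_deg, best_cost = deg, cost
    return best_deg, best_cost
```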
Coastal erosion and wetland change in Louisiana: selected USGS products
Williams, S. Jeffress; Reid, Jamey M.; Cross, VeeAnn A.; Polloni, Christopher F.
2003-01-01
This Digital Data Series (DDS) report is primarily a selection of USGS science products that were previously published as paper atlases and maps but are no longer available in their original form. We have made an attempt to preserve the paper atlases by having them scanned into an efficient, compressed digital format that provides print-on-demand as well as programmed viewing of the original material. We included additional materials that enhance the scientific understanding of coastal erosion and wetland loss in Louisiana. In addition, this report contains multimedia-based publications including photographs, a 48-minute video, and map tools to allow the user to experience the many scientifically based research activities that are in progress along the coast of Louisiana.
Digital mammography, cancer screening: Factors important for image compression
NASA Technical Reports Server (NTRS)
Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria
1993-01-01
The use of digital mammography for breast cancer screening poses several novel problems such as development of digital sensors, computer assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition, compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets and, therefore, image compression methods will play a significant role in the image processing and analysis by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the developments of digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community for this medical application and identify possible dual use technologies within the NASA centers.
Using Digital Videos to Enhance Teacher Preparation
ERIC Educational Resources Information Center
Dymond, Stacy K.; Bentz, Johnell L.
2006-01-01
The technology to produce high quality, digital videos is widely available, yet its use in teacher preparation remains largely overlooked. A digital video library was created to augment instruction in a special education methods course for preservice elementary education teachers. The videos illustrated effective strategies for working with…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-19
... 08-255] Closed Captioning of Video Programming; Closed Captioning Requirements for Digital Television... Captioning of Video Programming; Closed Captioning Requirements for Digital Television Receivers, Declaratory... 1594, January 13, 2009, is effective February 19, 2010. Video programming distributors must comply with...
Research on compression performance of ultrahigh-definition videos
NASA Astrophysics Data System (ADS)
Li, Xiangqun; He, Xiaohai; Qing, Linbo; Tao, Qingchuan; Wu, Di
2017-11-01
With the popularization of high-definition (HD) images and videos (1920×1080 pixels and above), there are now 4K (3840×2160) television signals and 8K (8192×4320) ultrahigh-definition videos. The demand for HD images and videos is increasing continuously, along with the increasing data volume. The storage and transmission problems cannot be properly solved merely by expanding hard disk capacity and upgrading transmission devices. Based on full use of the coding standard high-efficiency video coding (HEVC), super-resolution reconstruction technology, and the correlation between intra- and interprediction, we first put forward a "division-compensation"-based strategy to further improve the compression performance for a single image and frame I. Then, using the above idea together with the HEVC encoder and decoder, a video compression coding framework is designed, with HEVC used inside the framework. Last, with super-resolution reconstruction technology, the reconstructed video quality is further improved. The experiments show that the performance of the proposed compression method for a single image (frame I) and for video sequences is superior to that of HEVC in a low bit rate environment.
Real-time video compressing under DSP/BIOS
NASA Astrophysics Data System (ADS)
Chen, Qiu-ping; Li, Gui-ju
2009-10-01
This paper presents real-time MPEG-4 Simple Profile video compression based on a DSP processor. The programming framework for video compression is constructed using a TMS320C6416 microprocessor, a TDS510 simulator, and a PC. It uses the embedded real-time operating system DSP/BIOS and its API functions to build periodic functions, tasks, interrupts, etc., realizing real-time video compression. To address the problem of data transfer within the system, and based on the architecture of the C64x DSP, double buffering and the EDMA data transfer controller are used to move data from external to internal memory, so that data transfer and processing take place at the same time; architecture-level optimizations are used to improve the software pipeline. The system uses DSP/BIOS to realize multi-thread scheduling. The whole system achieves high-speed transfer of a large amount of data. Experimental results show the encoder can realize real-time encoding of 768×576, 25 frame/s video images.
H.264/AVC Video Compression on Smartphones
NASA Astrophysics Data System (ADS)
Sharabayko, M. P.; Markov, N. G.
2017-01-01
In this paper, we studied the usage of H.264/AVC video compression tools by the flagship smartphones. The results show that only a subset of tools is used, meaning that there is still a potential to achieve higher compression efficiency within the H.264/AVC standard, but the most advanced smartphones are already reaching the compression efficiency limit of H.264/AVC.
Study of information transfer optimization for communication satellites
NASA Technical Reports Server (NTRS)
Odenwalder, J. P.; Viterbi, A. J.; Jacobs, I. M.; Heller, J. A.
1973-01-01
The results are presented of a study of source coding, modulation/channel coding, and systems techniques for application to teleconferencing over high data rate digital communication satellite links. Simultaneous transmission of video, voice, data, and/or graphics is possible in various teleconferencing modes and one-way, two-way, and broadcast modes are considered. A satellite channel model including filters, limiter, a TWT, detectors, and an optimized equalizer is treated in detail. A complete analysis is presented for one set of system assumptions which exclude nonlinear gain and phase distortion in the TWT. Modulation, demodulation, and channel coding are considered, based on an additive white Gaussian noise channel model which is an idealization of an equalized channel. Source coding with emphasis on video data compression is reviewed, and the experimental facility utilized to test promising techniques is fully described.
Benoit, Justin L; Vogele, Jennifer; Hart, Kimberly W; Lindsell, Christopher J; McMullan, Jason T
2017-06-01
Bystander compression-only cardiopulmonary resuscitation (CPR) improves survival after out-of-hospital cardiac arrest. To broaden CPR training, 1-2min ultra-brief videos have been disseminated via the Internet and television. Our objective was to determine whether participants passively exposed to a televised ultra-brief video perform CPR better than unexposed controls. This before-and-after study was conducted with non-patients in an urban Emergency Department waiting room. The intervention was an ultra-brief CPR training video displayed via closed-circuit television 3-6 times/hour. Participants were unaware of the study and not told to watch the video. Pre-intervention, no video was displayed. Participants were asked to demonstrate compression-only CPR on a manikin. Performance was scored based on critical actions: check for responsiveness, call for help, begin compressions immediately, and correct hand placement, compression rate and depth. The primary outcome was the proportion of participants who performed all actions correctly. There were 50 control and 50 exposed participants. Mean age was 37, 51% were African-American, 52% were female, and 10% self-reported current CPR certification. There were no statistically significant differences in baseline characteristics between groups. The number of participants who performed all actions correctly was 0 (0%) control vs. 10 (20%) exposed (difference 20%, 95% confidence interval [CI] 8.9-31.1%, p<0.001). Correct compression rate and depth were 11 (22%) control vs. 22 (44%) exposed (22%, 95% CI 4.1-39.9%, p=0.019), and 5 (10%) control vs. 15 (30%) exposed (20%, 95% CI 4.8-35.2%, p=0.012), respectively. Passive ultra-brief video training is associated with improved performance of compression-only CPR. Copyright © 2017 Elsevier B.V. All rights reserved.
Using ARINC 818 Avionics Digital Video Bus (ADVB) for military displays
NASA Astrophysics Data System (ADS)
Alexander, Jon; Keller, Tim
2007-04-01
ARINC 818 Avionics Digital Video Bus (ADVB) is a new digital video interface and protocol standard developed especially for high bandwidth uncompressed digital video. The first draft of this standard, released in January of 2007, has been advanced by ARINC and the aerospace community to meet the acute needs of commercial aviation for higher performance digital video. This paper analyzes ARINC 818 for use in military display systems found in avionics, helicopters, and ground vehicles. The flexibility of ARINC 818 for the diverse resolutions, grayscales, pixel formats, and frame rates of military displays is analyzed as well as the suitability of ARINC 818 to support requirements for military video systems including bandwidth, latency, and reliability. Implementation issues relevant to military displays are presented.
Cerina, Luca; Iozzia, Luca; Mainardi, Luca
2017-11-14
In this paper, common time- and frequency-domain variability indexes obtained from pulse rate variability (PRV) series extracted from the video-photoplethysmographic signal (vPPG) were compared with heart rate variability (HRV) parameters calculated from synchronized ECG signals. The dual focus of this study was to analyze the effects on PRV parameter estimation of different video acquisition frame rates, from 60 frames per second (fps) down to 7.5 fps, and of different video compression techniques using both lossless and lossy codecs. Video recordings were acquired with an off-the-shelf GigE Sony XCG-C30C camera on 60 young, healthy subjects (age 23±4 years) in the supine position. A fully automated signal extraction method based on the Kanade-Lucas-Tomasi (KLT) algorithm for region of interest (ROI) detection and tracking, in combination with a zero-phase principal component analysis (ZCA) signal separation technique, was employed to convert the video frame sequence to a pulsatile signal. The frame-rate degradation was simulated on video recordings by directly sub-sampling the ROI tracking and signal extraction modules, to correctly mimic videos recorded at a lower speed. The compression of the videos was configured to avoid any frame rejection caused by codec quality leveling; the FFV1 codec was used for lossless compression and H.264 with a variable quality parameter as the lossy codec. The results showed that a reduced frame rate leads to inaccurate tracking of ROIs, increased time-jitter in the signal dynamics, and local peak displacements, which degrades the performance of all the PRV parameters. The root mean square of successive differences (RMSSD) and the proportion of successive differences greater than 50 ms (pNN50) indexes in the time domain, and the low frequency (LF) and high frequency (HF) power in the frequency domain, were the parameters that degraded most with frame-rate reduction. Such degradation can be partially mitigated by up-sampling the measured signal at a higher frequency (namely 60 Hz). Concerning video compression, the results showed that compression techniques are suitable for the storage of vPPG recordings, although lossless or intra-frame compression is to be preferred over inter-frame compression methods. FFV1 performance is very close to the uncompressed (UNC) version at less than 45% of the disk size. H.264 showed a degradation of the PRV estimation directly correlated with the increase of the compression ratio.
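The two time-domain indexes that degraded most, RMSSD and pNN50, have standard definitions that are easy to restate in code. The sketch below assumes the pulse peak times have already been extracted from the vPPG signal; it is not tied to the paper's processing chain.

```python
import numpy as np

def rmssd_pnn50(peak_times_s):
    """Standard time-domain variability indexes from a series of pulse (or
    R-peak) times in seconds: RMSSD is the root mean square of successive
    inter-beat-interval differences, pNN50 the proportion of successive
    differences exceeding 50 ms."""
    ibi_ms = np.diff(np.asarray(peak_times_s, dtype=np.float64)) * 1000.0
    d = np.diff(ibi_ms)                       # successive IBI differences
    rmssd = float(np.sqrt(np.mean(d ** 2)))
    pnn50 = float(np.mean(np.abs(d) > 50.0))  # fraction; multiply by 100 for %
    return rmssd, pnn50
```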
Digital Video Over Space Systems and Networks
NASA Technical Reports Server (NTRS)
Grubbs, Rodney
2010-01-01
This slide presentation reviews the use of digital video with space systems and networks. The earliest use of video was film, which precluded live viewing and gave way to live television from space; this in turn has given way to digital video using internet protocol for transmission. This has provided many improvements along with new challenges, some of which are reviewed. The change to digital video transmitted over space systems can provide incredible imagery; however, the process must be viewed as an entire system rather than piece-meal.
A video event trigger for high frame rate, high resolution video technology
NASA Astrophysics Data System (ADS)
Williams, Glenn L.
1991-12-01
When video replaces film the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term or short term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.
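A software sketch of the trigger-and-archive logic follows, assuming NumPy and an iterable of grayscale frames: a bounded pretrigger ring buffer, a simple frame-difference change metric, and a fixed post-trigger capture. The original system implements this with parallel digital hardware and fuzzy-logic devices; the change metric and buffer sizes here are placeholders.

```python
import collections
import numpy as np

def record_on_event(frame_source, change_thresh=8.0,
                    pretrigger=30, posttrigger=120):
    """Hold frames in a bounded pretrigger ring buffer and discard them
    until the mean absolute frame-to-frame difference exceeds a threshold;
    then return the buffered frames plus a fixed number of post-trigger
    frames for archiving."""
    ring = collections.deque(maxlen=pretrigger)
    prev = None
    frames = iter(frame_source)
    for frame in frames:
        if prev is not None:
            change = float(np.abs(frame.astype(np.int16)
                                  - prev.astype(np.int16)).mean())
            if change > change_thresh:                 # event detected
                saved = list(ring) + [frame]
                for _, post in zip(range(posttrigger), frames):
                    saved.append(post)
                return saved
        ring.append(frame)
        prev = frame
    return []                                          # no event observed
```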
A video event trigger for high frame rate, high resolution video technology
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
1991-01-01
When video replaces film the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term or short term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.
A Web-Based Video Digitizing System for the Study of Projectile Motion.
ERIC Educational Resources Information Center
Chow, John W.; Carlton, Les G.; Ekkekakis, Panteleimon; Hay, James G.
2000-01-01
Discusses advantages of a video-based, digitized image system for the study and analysis of projectile motion in the physics laboratory. Describes the implementation of a web-based digitized video system. (WRM)
Image compression evaluation for digital cinema: the case of Star Wars: Episode II
NASA Astrophysics Data System (ADS)
Schnuelle, David L.
2003-05-01
A program of evaluation of compression algorithms proposed for use in a digital cinema application is described and the results presented in general form. The work was intended to aid in the selection of a compression system to be used for the digital cinema release of Star Wars: Episode II, in May 2002. An additional goal was to provide feedback to the algorithm proponents on what parameters and performance levels the feature film industry is looking for in digital cinema compression. The primary conclusion of the test program is that any of the current digital cinema compression proponents will work for digital cinema distribution to today's theaters.
A "Journey in Feminist Theory Together": The "Doing Feminist Theory through Digital Video" Project
ERIC Educational Resources Information Center
Hurst, Rachel Alpha Johnston
2014-01-01
"Doing Feminist Theory Through Digital Video" is an assignment I designed for my undergraduate feminist theory course, where students created a short digital video on a concept in feminist theory. I outline the assignment and the pedagogical and epistemological frameworks that structured the assignment (digital storytelling,…
Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les
2012-12-01
This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Moving Picture Experts Group 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver for the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings on 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at a -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95 % confidence intervals using the method of generalized estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1, with combined sensitivities of 90 % (95 % confidence interval, 79-95), 94 % (87-97), and 100 % (93-100), respectively. Combined specificities were 100 % (85-100), 100 % (85-100), and 96 % (78-99), respectively. The introduction of CT in combat hospitals, with increasing detectors and image data in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.
College curriculum-sharing via CTS. [Communications Technology Satellite
NASA Technical Reports Server (NTRS)
Hudson, H. E.; Guild, P. D.; Coll, D. C.; Lumb, D. R.
1975-01-01
Domestic communication satellites and video compression techniques will increase communication channel capacity and reduce cost of video transmission. NASA Ames Research Center, Stanford University and Carleton University are participants in an experiment to develop, demonstrate, and evaluate college course sharing techniques via satellite using video compression. The universities will exchange televised seminar and lecture courses via CTS. The experiment features real-time video compression with channel coding and quadra-phase modulation for reducing transmission bandwidth and power requirements. Evaluation plans and preliminary results of Carleton surveys on student attitudes to televised teaching are presented. Policy implications for the U.S. and Canada are outlined.
Survey of Compressed Video Applications: Higher Education, K-12, and the Private Sector, 1993.
ERIC Educational Resources Information Center
Cochenour, John; And Others
This paper presents the results of three surveys about live, two-way interactive video (compressed video) and discusses some possible trends in its use, applications, and technological development. Surveys are an Association for Educational Communications and Technology (AECT) survey that has not been completed; one from the "International…
RAPID: A random access picture digitizer, display, and memory system
NASA Technical Reports Server (NTRS)
Yakimovsky, Y.; Rayfield, M.; Eskenazi, R.
1976-01-01
RAPID is a system capable of providing convenient digital analysis of video data in real-time. It has two modes of operation. The first allows for continuous digitization of an EIA RS-170 video signal. Each frame in the video signal is digitized and written in 1/30 of a second into RAPID's internal memory. The second mode leaves the content of the internal memory independent of the current input video. In both modes of operation the image contained in the memory is used to generate an EIA RS-170 composite video output signal representing the digitized image in the memory so that it can be displayed on a monitor.
ERIC Educational Resources Information Center
Bueno de Mesquita, Paul; Dean, Ross F.; Young, Betty J.
2010-01-01
Advances in digital video technology create opportunities for more detailed qualitative analyses of actual teaching practice in science and other subject areas. User-friendly digital cameras and highly developed, flexible video-analysis software programs have made the tasks of video capture, editing, transcription, and subsequent data analysis…
3D video coding: an overview of present and upcoming standards
NASA Astrophysics Data System (ADS)
Merkle, Philipp; Müller, Karsten; Wiegand, Thomas
2010-07-01
An overview of existing and upcoming 3D video coding standards is given. Various different 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.
Semiconductors: Still a Wide Open Frontier for Scientists/Engineers
NASA Astrophysics Data System (ADS)
Seiler, David G.
1997-10-01
A 1995 Business Week article described several features of the explosive use of semiconductor chips today: (1) booming personal computer markets are driving high demand for microprocessors and memory chips; (2) new information superhighway markets will "ignite" sales of multimedia and communication chips; and (3) demand for digital-signal-processing and data-compression chips, which speed up video and graphics, is "red hot." A Washington Post article by Stan Hinden said that technology is creating an unstoppable demand for electronic elements. This "digital pervasiveness" means that a semiconductor chip is going into almost every high-tech product that people buy: cars, televisions, video recorders, telephones, radios, alarm clocks, coffee pots, etc. "Semiconductors are everywhere." Silicon and compound semiconductors are absolutely essential and are pervasive enablers for DoD operations and systems. DoD's Critical Technologies Plan of 1991 says that "semiconductor materials and microelectronics are critically important and appropriately lead the list of critical defense technologies." These trends continue unabated. This talk describes some of the frontiers of semiconductors today and shows how scientists and engineers can effectively contribute to its advancement. Cooperative, multidisciplinary efforts are increasing. Specific examples will be given for scanning capacitance microscopy and thin-film metrology.
ERIC Educational Resources Information Center
McConnell, Terry
2004-01-01
Monica Adams, head librarian at Robinson Secondary in Fairfax County, Virginia, states that librarians should have the technical knowledge to support projects related to digital video editing. The process of digital video editing and the cables, storage issues, and the computer system with software are described.
Low-complexity transcoding algorithm from H.264/AVC to SVC using data mining
NASA Astrophysics Data System (ADS)
Garrido-Cantos, Rosario; De Cock, Jan; Martínez, Jose Luis; Van Leuven, Sebastian; Cuenca, Pedro; Garrido, Antonio
2013-12-01
Nowadays, networks and terminals with diverse characteristics of bandwidth and capabilities coexist. To ensure a good quality of experience, this diverse environment demands adaptability of the video stream. In general, video contents are compressed to save storage capacity and to reduce the bandwidth required for its transmission. Therefore, if these compressed video streams were compressed using scalable video coding schemes, they would be able to adapt to those heterogeneous networks and a wide range of terminals. Since the majority of the multimedia contents are compressed using H.264/AVC, they cannot benefit from that scalability. This paper proposes a low-complexity algorithm to convert an H.264/AVC bitstream without scalability to scalable bitstreams with temporal scalability in baseline and main profiles by accelerating the mode decision task of the scalable video coding encoding stage using machine learning tools. The results show that when our technique is applied, the complexity is reduced by 87% while maintaining coding efficiency.
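The general mechanism (replacing the exhaustive SVC mode search with a classifier trained on features available from the decoded H.264/AVC stream) can be sketched with scikit-learn. The features, labels, and toy training rows below are invented placeholders, not the attribute set or data-mining model used in the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-macroblock features taken from the decoded H.264/AVC
# stream (e.g. AVC mode id, residual energy, motion-vector magnitude) and
# the SVC mode chosen by a full reference encode.
X_train = np.array([[0, 120.0, 0.5],
                    [3, 15.0, 2.0],
                    [1, 60.0, 1.0],
                    [3, 5.0, 3.5]])
y_train = np.array(["INTRA", "SKIP", "INTER16x16", "SKIP"])

tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

def fast_mode_decision(avc_mode_id, residual_energy, mv_magnitude):
    """Predict the SVC macroblock mode directly from AVC-side features,
    replacing the exhaustive rate-distortion mode search."""
    return tree.predict([[avc_mode_id, residual_energy, mv_magnitude]])[0]
```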
Evaluation of Digital Technology and Software Use among Business Education Teachers
ERIC Educational Resources Information Center
Ellis, Richard S.; Okpala, Comfort O.
2004-01-01
Digital video cameras are part of the evolution of multimedia digital products that have positive applications for educators, students, and industry. Multimedia digital video can be utilized by any personal computer and it allows the user to control, combine, and manipulate different types of media, such as text, sound, video, computer graphics,…
Harris, Kevin M; Schum, Kevin R; Knickelbine, Thomas; Hurrell, David G; Koehler, Jodi L; Longe, Terrence F
2003-08-01
Motion Picture Experts Group-2 (MPEG2) is a broadcast industry standard that allows high-level compression of echocardiographic data. Validation of MPEG2 digital images compared with super VHS videotape has not been previously reported. Simultaneous super VHS videotape and MPEG2 digital images were acquired. In all, 4 experienced echocardiographers completed detailed reporting forms evaluating chamber size, ventricular function, regional wall-motion abnormalities, and measures of valvular regurgitation and stenosis in a blinded fashion. Comparisons between the 2 interpretations were then performed and intraobserver concordance was calculated for the various categories. A total of 80 paired comparisons were made. The overall concordance rate was 93.6% with most of the discrepancies being minor (4.1%). Concordance was 92.4% for left ventricle, 93.2% for right ventricle, 95.2% for regional wall-motion abnormalities, and 97.8% for valve stenosis. The mean grade of valvular regurgitation was similar for the 2 techniques. MPEG2 digital imaging offers excellent concordance compared with super VHS videotape.
Application of the advanced communications technology satellite for teleradiology and telemedicine
NASA Astrophysics Data System (ADS)
Stewart, Brent K.; Carter, Stephen J.; Rowberg, Alan H.
1995-05-01
The authors have an in-kind grant from NASA to investigate the application of the Advanced Communications Technology Satellite (ACTS) to teleradiology and telemedicine using the JPL developed ACTS Mobile Terminal (AMT) uplink. This experiment involves the transmission of medical imagery (CT, MR, CR, US and digitized radiographs including mammograms) between the ACTS/AMT and the University of Washington. This is accomplished by locating the AMT experiment van in various locations throughout Washington state, Idaho, Montana, Oregon and Hawaii. The medical images are transmitted from the ACTS to the downlink at the NASA Lewis Research Center (LeRC) in Cleveland, Ohio, consisting of AMT equipment and the high burst rate-link evaluation terminal (HBR-LET). These images are then routed from LeRC to the University of Washington School of Medicine (UWSoM) through the Internet and the public switched Integrated Services Digital Network (ISDN). Once images arrive in the UW Radiology Department, they are reviewed using both video monitor softcopy and laser-printed hardcopy. Compressed video teleconferencing and transmission of real-time ultrasound video between the AMT van and the UWSoM are also tested. Image quality comparisons are made using both subjective diagnostic criteria and quantitative engineering analysis. Evaluation is performed during various weather conditions (including rain, to assess rain fade compensation algorithms). Compression techniques are also tested to evaluate their effects on image quality, allowing further evaluation of portable teleradiology/telemedicine at lower data rates and providing useful information for additional applications (e.g., smaller remote units, shipboard, emergency disaster, etc.). The medical images received at the UWSoM over the ACTS are directly evaluated against the original digital images. The project demonstrates that a portable satellite-land connection can provide subspecialty consultation and education for rural and remote areas. The experiment is divided into three phases. Using the ACTS fixed-hopping beam, phase one involves testing the connection of the AMT to medical imaging equipment and image transmission in various climates in western and eastern Washington state. The second phase involves satellite relay transmissions between the Inmarsat satellite and the ACTS/AMT through a ground station in Hawaii for medical imagery originating from either Okinawa, Japan or Kwajalein, in the Pacific. The third phase involves extended use of the ACTS steerable beam in Washington state, Idaho, Montana and Oregon.
Digital characterization of a neuromorphic IRFPA
NASA Astrophysics Data System (ADS)
Caulfield, John T.; Fisher, John; Zadnik, Jerome A.; Mak, Ernest S.; Scribner, Dean A.
1995-05-01
This paper reports on the performance of the Neuromorphic IRFPA, the first IRFPA designed and fabricated to conduct temporal and spatial processing on the focal plane. The Neuromorphic IRFPA's unique on-chip processing capability can perform retina-like functions such as lateral inhibition and contrast enhancement, spatial and temporal filtering, image compression and edge enhancement, and logarithmic response. Previously, all evaluations of the Neuromorphic IRFPA camera have been performed on the analog video output. In the work leading up to this paper, the Neuromorphic was integrated with a digital recorder to collect quantitative laboratory and field data. This paper describes the operation and characterization of specific on-chip processes such as spatial and temporal kernel size control. The use of Neuromorphic on-chip processing in future IRFPAs is analyzed as applied to improving SNR via adaptive nonuniformity correction and to mitigating charge-handling and dynamic-range problems.
Composing across Multiple Media: A Case Study of Digital Video Production in a Fifth Grade Classroom
ERIC Educational Resources Information Center
Ranker, Jason
2008-01-01
This is a qualitative case study of two students' composing processes as they developed a documentary video about the Dominican Republic in an urban, public middle school classroom. While using a digital video editing program, the students moved across multiple media (the Web, digital video, books, and writing), drawing semiotic resources from…
Code of Federal Regulations, 2013 CFR
2013-04-01
... recordings and/or digital records shall be provided to the Commission upon request. (x) Video library log. A... events on video and/or digital recordings. The displayed date and time shall not significantly obstruct... each gaming machine change booth. (w) Video recording and/or digital record retention. (1) All video...
Code of Federal Regulations, 2012 CFR
2012-04-01
... recordings and/or digital records shall be provided to the Commission upon request. (x) Video library log. A... events on video and/or digital recordings. The displayed date and time shall not significantly obstruct... each gaming machine change booth. (w) Video recording and/or digital record retention. (1) All video...
Code of Federal Regulations, 2014 CFR
2014-04-01
... recordings and/or digital records shall be provided to the Commission upon request. (x) Video library log. A... events on video and/or digital recordings. The displayed date and time shall not significantly obstruct... each gaming machine change booth. (w) Video recording and/or digital record retention. (1) All video...
Axial Tomography from Digitized Real Time Radiography
DOE R&D Accomplishments Database
Zolnay, A. S.; McDonald, W. M.; Doupont, P. A.; McKinney, R. L.; Lee, M. M.
1985-01-18
Axial tomography from digitized real time radiographs provides a useful tool for industrial radiography and tomography. The components of this system are: x-ray source, image intensifier, video camera, video line extractor and digitizer, data storage and reconstruction computers. With this system it is possible to view a two dimensional x-ray image in real time at each angle of rotation and select the tomography plane of interest by choosing which video line to digitize. The digitization of a video line requires less than a second making data acquisition relatively short. Further improvements on this system are planned and initial results are reported.
Video coding for next-generation surveillance systems
NASA Astrophysics Data System (ADS)
Klasen, Lena M.; Fahlander, Olov
1997-02-01
Video is used as the recording medium in surveillance systems and, increasingly, by the Swedish Police Force. Methods for analyzing video using an image processing system have recently been introduced at the Swedish National Laboratory of Forensic Science, and new methods are the focus of a research project at Linkoping University, Image Coding Group. The accuracy of the results of those forensic investigations often depends on the quality of the video recordings, and one of the major problems when analyzing videos from crime scenes is the poor quality of the recordings. Enhancing poor image quality might add manipulative or subjective effects and does not seem to be the right way of obtaining reliable analysis results. The surveillance systems in use today are mainly based on video techniques, VHS or S-VHS, and the weakest link is the video cassette recorder (VCR). Multiplexers that select one of many camera outputs for recording are another problem, as they often filter the video signal and limit recording to only one of the cameras connected to the VCR. A way to get around the problem of poor recording is to record all camera outputs simultaneously and digitally. It is also important to build such a system bearing in mind that image processing analysis methods become increasingly important as a complement to the human eye. Using one or more cameras produces a large amount of data, and the need for data compression is more than obvious. Crime scenes often involve persons or moving objects, and the available coding techniques are more or less suitable for them. Our goal is to propose a possible system that offers the best compromise with respect to what needs to be recorded, movement in the recorded scene, loss of information, resolution, etc., in order to secure efficient recording of the crime and enable forensic analysis. The preventive effect of a well-functioning surveillance system and well-established image analysis methods is not to be neglected. Aspects of this next generation of digital surveillance systems are discussed in this paper.
2D-pattern matching image and video compression: theory, algorithms, and experiments.
Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth
2002-01-01
In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.
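A minimal sketch of the approximate 2D pattern-matching idea, simplified to fixed 8x8 blocks and a growing block database (the actual 2D-PMC scheme uses variable-size regions plus run-length and arithmetic coding of the pointers); the block size and distortion threshold below are assumed values:
```python
import numpy as np

BLOCK, D_MAX = 8, 100.0   # block size and maximum allowed MSE per block (assumed values)

def pmc_encode(image):
    """Return a list of ('ptr', index) or ('raw', block) symbols plus the image shape."""
    db, stream = [], []
    h, w = image.shape
    for y in range(0, h - h % BLOCK, BLOCK):
        for x in range(0, w - w % BLOCK, BLOCK):
            blk = image[y:y + BLOCK, x:x + BLOCK].astype(float)
            best, best_err = None, D_MAX
            for i, cand in enumerate(db):                 # approximate-match search
                err = np.mean((blk - cand) ** 2)
                if err < best_err:
                    best, best_err = i, err
            if best is not None:
                stream.append(('ptr', best))              # pointer into the database
            else:
                db.append(blk)
                stream.append(('raw', blk))               # literal block, grows the database
    return stream, image.shape

def pmc_decode(stream, shape):
    h, w = shape
    db, out = [], np.zeros(shape)
    coords = [(y, x) for y in range(0, h - h % BLOCK, BLOCK)
                     for x in range(0, w - w % BLOCK, BLOCK)]
    for (y, x), (kind, payload) in zip(coords, stream):
        blk = db[payload] if kind == 'ptr' else payload
        if kind == 'raw':
            db.append(payload)
        out[y:y + BLOCK, x:x + BLOCK] = blk
    return out
```
Compression comes from replacing blocks that can be approximated within the allowed distortion by short pointers into the database, mirroring the lossy Lempel-Ziv flavor of the scheme.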
Distance Education Technology for the New Millennium Compressed Video Teaching. ZIFF Papiere 101.
ERIC Educational Resources Information Center
Keegan, Desmond
This monograph combines an examination of theoretical issues raised by the introduction of two-way video and similar systems into distance education (DE) with practical advice on using compressed video systems in DE programs. Presented in the first half of the monograph are the following: analysis of the intrinsic links between DE and technology…
A method of mobile video transmission based on J2EE
NASA Astrophysics Data System (ADS)
Guo, Jian-xin; Zhao, Ji-chun; Gong, Jing; Chun, Yang
2013-03-01
As 3G (3rd-generation) networks evolve worldwide, the rising demand for mobile video services and the enormous growth of video on the internet are creating major new revenue opportunities for mobile network operators and application developers. This paper introduces a method of mobile video transmission based on J2ME, presenting the video compression method, describing the video compression standard, and then describing the software design. The proposed mobile video method based on J2EE is a typical mobile multimedia application, which has high availability and a wide range of applications. Users can access the video through terminal devices such as phones.
Influence of audio triggered emotional attention on video perception
NASA Astrophysics Data System (ADS)
Torres, Freddy; Kalva, Hari
2014-02-01
Perceptual video coding methods attempt to improve compression efficiency by discarding visual information not perceived by end users. Most current approaches to perceptual video coding use only visual features, ignoring the auditory component. Many psychophysical studies have demonstrated that auditory stimuli affect our visual perception. In this paper we present our study of audio-triggered emotional attention and its applicability to perceptual video coding. Experiments with movie clips show that the reaction time to detect video compression artifacts was longer when video was presented with the audio information. The results reported are statistically significant with p=0.024.
Digital Image Compression Using Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.
1993-01-01
The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute, based on the H-transform, is performed in order to assess the reliability of the NNCTC.
47 CFR 79.107 - User interfaces provided by digital apparatus.
Code of Federal Regulations, 2014 CFR
2014-10-01
... SERVICES ACCESSIBILITY OF VIDEO PROGRAMMING Apparatus § 79.107 User interfaces provided by digital... States and designed to receive or play back video programming transmitted in digital format simultaneously with sound, including apparatus designed to receive or display video programming transmitted in...
Digital Literacy and Online Video: Undergraduate Students' Use of Online Video for Coursework
ERIC Educational Resources Information Center
Tiernan, Peter; Farren, Margaret
2017-01-01
This paper investigates how to enable undergraduate students' use of online video for coursework using a customised video retrieval system (VRS), in order to understand digital literacy with online video in practice. This study examines the key areas influencing the use of online video for assignments such as the learning value of video,…
Code of Federal Regulations, 2011 CFR
2011-04-01
... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...
Code of Federal Regulations, 2014 CFR
2014-04-01
... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...
Code of Federal Regulations, 2013 CFR
2013-04-01
... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...
Code of Federal Regulations, 2010 CFR
2010-04-01
... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...
Code of Federal Regulations, 2012 CFR
2012-04-01
... capability to display the date and time of recorded events on video and/or digital recordings. The displayed... digital record retention. (1) All video recordings and/or digital records of coverage provided by the.... (3) Duly authenticated copies of video recordings and/or digital records shall be provided to the...
Apparatus for Investigating Momentum and Energy Conservation With MBL and Video Analysis
NASA Astrophysics Data System (ADS)
George, Elizabeth; Vazquez-Abad, Jesus
1998-04-01
We describe the development and use of a laboratory setup that is appropriate for computer-aided student investigation of the principles of conservation of momentum and mechanical energy in collisions. The setup consists of two colliding carts on a low-friction track, with one of the carts (the target) attached to a spring, whose extension or compression takes the place of the pendulum's rise in the traditional ballistic pendulum apparatus. Position vs. time data for each cart are acquired either by using two motion sensors or by digitizing images obtained with a video camera. This setup allows students to examine the time history of momentum and mechanical energy during the entire collision process, rather than simply focusing on the before and after regions. We believe that this setup is suitable for helping students gain understanding as the processes involved are simple to follow visually, to manipulate, and to analyze.
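As an illustration only (not the authors' software), the momentum and energy histories can be computed from the sampled positions by finite differences; the cart masses, spring constant, and input arrays are assumptions standing in for the sensor or video-frame data:
```python
import numpy as np

def collision_history(t, x1, x2, s, m1, m2, k):
    """t: time stamps; x1, x2: cart positions; s: spring deflection; m1, m2: masses; k: spring constant."""
    v1 = np.gradient(x1, t)                     # cart velocities by finite differences
    v2 = np.gradient(x2, t)
    p = m1 * v1 + m2 * v2                       # total momentum at every sample
    ke = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2    # total kinetic energy
    pe = 0.5 * k * s**2                         # elastic energy stored in the spring
    return p, ke, ke + pe                       # momentum, KE, total mechanical energy
```
Plotting the three returned arrays against time shows momentum conserved throughout the collision while kinetic energy dips as the spring stores energy.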
ERIC Educational Resources Information Center
Halter, Christopher; Levin, James
2014-01-01
A three year study of digital video creation in higher education investigated the impact that creating short digital videos by university students in their final class of a teacher education program had on those students. Each student created a short video reflecting on the process of how he/she became a teacher. An analysis of the videos…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ravishankar, C., Hughes Network Systems, Germantown, MD
Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal being corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of a digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques, and that is often used interchangeably with speech coding, is voice coding. This term is more generic in the sense that the coding techniques are equally applicable to any voice signal, whether or not it carries any intelligible information, as the term speech implies. Other terms that are commonly used are speech compression and voice compression, since the fundamental idea behind speech coding is to reduce (compress) the transmission rate (or equivalently the bandwidth) and/or reduce storage requirements. In this document the terms speech and voice shall be used interchangeably.
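As a generic illustration of waveform speech coding (not one of the specific coders this report covers), G.711-style mu-law companding maps each sample to 8 bits, giving 64 kbit/s at an 8 kHz sampling rate; the constants follow the standard mu-law formula:
```python
import numpy as np

MU = 255.0

def mulaw_encode(x):
    """Map samples in [-1, 1] to 8-bit unsigned codes."""
    y = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)
    return np.round((y + 1) / 2 * 255).astype(np.uint8)

def mulaw_decode(code):
    """Invert the companding back to samples in [-1, 1]."""
    y = code.astype(float) / 255 * 2 - 1
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

# Example: a 100 Hz tone sampled at 8 kHz survives companding with small error.
t = np.arange(8000) / 8000.0
x = 0.5 * np.sin(2 * np.pi * 100 * t)
max_error = np.max(np.abs(mulaw_decode(mulaw_encode(x)) - x))
```
The logarithmic mapping allocates finer quantization steps to quiet samples, which is what gives companded speech its wide dynamic range at only 8 bits per sample.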
78 FR 36478 - Accessibility of User Interfaces, and Video Programming Guides and Menus
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-18
... equipment: ``digital apparatus'' and ``navigation devices.'' Specifically, section 204 applies to ``digital... apparatus, including equipment purchased at retail by a consumer to access video programming, would be..., and video programming guides, and menus provided by digital apparatus and navigation devices are...
2013-05-01
Measurement of Full Field Strains in Filament Wound Composite Tubes Under Axial Compressive Loading by the Digital Image Correlation (DIC) Technique
NASA Technical Reports Server (NTRS)
Sayood, K.; Chen, Y. C.; Wang, X.
1992-01-01
During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.
Video copy protection and detection framework (VPD) for e-learning systems
NASA Astrophysics Data System (ADS)
ZandI, Babak; Doustarmoghaddam, Danial; Pour, Mahsa R.
2013-03-01
This article reviews and compares the copyright issues related to digital video files, which can be categorized as content-based and digital-watermarking copy detection. We then describe how to protect a digital video by using a special video data hiding method and algorithm. We also discuss how to detect copying of the file; based on a review of video copy detection technology, combined with our own research results, we put forward a new video protection and copy detection approach for plagiarism and e-learning systems using video data hiding technology. Finally, we introduce a framework for video protection and detection in e-learning systems (the VPD framework).
Deblocking of mobile stereo video
NASA Astrophysics Data System (ADS)
Azzari, Lucio; Gotchev, Atanas; Egiazarian, Karen
2012-02-01
Most candidate methods for compression of mobile stereo video apply block-transform-based compression based on the H.264 standard, with quantization of transform coefficients driven by a quantization parameter (QP). The compression ratio and the resulting bit rate are directly determined by the QP level, and high compression is achieved at the price of visually noticeable blocking artifacts. Previous studies on the perceived quality of mobile stereo video have revealed that blocking artifacts are the most annoying and most influential in the acceptance/rejection of mobile stereo video and can even completely cancel the 3D effect and the corresponding quality added value. In this work, we address the problem of deblocking of mobile stereo video. We modify a powerful non-local transform-domain collaborative filtering method originally developed for denoising of images and video. The method groups similar block patches residing in the spatial and temporal vicinity of a reference block and filters them collaboratively in a suitable transform domain. We study the most suitable way of finding similar patches in both channels of stereo video and suggest a hybrid four-dimensional transform to process the collected synchronized (stereo) volumes of grouped blocks. The results benefit from the additional correlation available between the left and right channels of the stereo video. Furthermore, additional sharpening is applied through embedded alpha-rooting in the transform domain, which improves the visual appearance of the deblocked frames.
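A sketch of the alpha-rooting step mentioned above, using a plain 2-D FFT in place of the paper's hybrid four-dimensional collaborative transform; the value of alpha is an assumed parameter:
```python
import numpy as np

def alpha_rooting(frame, alpha=0.9):
    """Raise transform-coefficient magnitudes to alpha < 1, keeping phase, to sharpen the frame."""
    F = np.fft.fft2(frame.astype(float))
    mag, phase = np.abs(F), np.angle(F)
    mag_max = mag.max() + 1e-12
    # normalize so only the relative magnitudes change: small (mostly high-frequency)
    # coefficients are boosted more than the dominant low-frequency ones
    sharpened = mag_max * (mag / mag_max) ** alpha * np.exp(1j * phase)
    return np.real(np.fft.ifft2(sharpened))
```
Choosing alpha closer to 1 gives milder sharpening; values well below 1 visibly boost fine detail but can also amplify residual noise.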
Multi-pass encoding of hyperspectral imagery with spectral quality control
NASA Astrophysics Data System (ADS)
Wasson, Steven; Walker, William
2015-05-01
Multi-pass encoding is a technique employed in the field of video compression that maximizes the quality of an encoded video sequence within the constraints of a specified bit rate. This paper presents research where multi-pass encoding is extended to the field of hyperspectral image compression. Unlike video, which is primarily intended to be viewed by a human observer, hyperspectral imagery is processed by computational algorithms that generally attempt to classify the pixel spectra within the imagery. As such, these algorithms are more sensitive to distortion in the spectral dimension of the image than they are to perceptual distortion in the spatial dimension. The compression algorithm developed for this research, which uses the Karhunen-Loeve transform for spectral decorrelation followed by a modified H.264/Advanced Video Coding (AVC) encoder, maintains a user-specified spectral quality level while maximizing the compression ratio throughout the encoding process. The compression performance may be considered near-lossless in certain scenarios. For qualitative purposes, this paper presents the performance of the compression algorithm for several Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Hyperion datasets using spectral angle as the spectral quality assessment function. Specifically, the compression performance is illustrated in the form of rate-distortion curves that plot spectral angle versus bits per pixel per band (bpppb).
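For reference, a sketch of the spectral angle between an original and a reconstructed pixel spectrum, the kind of quality function used here; the function name and the re-encode loop wrapped around it are assumptions:
```python
import numpy as np

def spectral_angle(s_ref, s_rec):
    """Spectral angle (radians) between a reference and a reconstructed pixel spectrum."""
    s_ref = np.asarray(s_ref, float)
    s_rec = np.asarray(s_rec, float)
    cos = np.dot(s_ref, s_rec) / (np.linalg.norm(s_ref) * np.linalg.norm(s_rec) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))
```
A spectral-quality-controlled encoder can then iterate (multi-pass) on its rate settings until the mean or maximum spectral angle over all pixels stays below the user-specified threshold.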
Wanner, Gregory K; Osborne, Arayel; Greene, Charlotte H
2016-11-29
Cardiopulmonary resuscitation (CPR) training has traditionally involved classroom-based courses or, more recently, home-based video self-instruction. These methods typically require preparation and a purchase fee, which can dissuade many potential bystanders from receiving training. This study aimed to evaluate the effectiveness of teaching compression-only CPR to previously untrained individuals using our 6-min online CPR training video and skills practice on a homemade mannequin, reproduced by viewers with commonly available items (towel, toilet paper roll, t-shirt). Participants viewed the training video and practiced with the homemade mannequin. This was a parallel-design study with pre- and post-training evaluations of CPR skills (compression rate, depth, hand position, release) and hands-off time (time without compressions). CPR skills were evaluated using a sensor-equipped mannequin, and two blinded CPR experts observed testing of participants. Twenty-four participants were included: 12 never trained and 12 currently certified in CPR. Comparing pre- and post-training performance, the never-trained group had improvements in average compression rate per minute (64.3 to 103.9, p = 0.006), compressions with correct hand position in 1 min (8.3 to 54.3, p = 0.002), and correct compression release in 1 min (21.2 to 76.3, p < 0.001). The CPR-certified group had adequate pre- and post-test compression rates (>100/min), but an improved number of compressions with correct release (53.5 to 94.7, p < 0.001). Both groups had significantly reduced hands-off time after training. Achieving adequate compression depths (>50 mm) remained problematic in both groups. Comparisons made between groups indicated significant improvements in compression depth, hand position, and hands-off time in never-trained compared to CPR-certified participants. Inter-rater agreement values were also calculated between the CPR experts and the sensor-equipped mannequin. A brief internet-based video coupled with skill practice on a homemade mannequin improved compression-only CPR skills, especially in the previously untrained participants. This training method allows for widespread compression-only CPR training with a tactile learning component, without fees or advance preparation.
Bezanilla, F
1985-03-01
A modified digital audio processor, a video cassette recorder, and some simple added circuitry are assembled into a recording device of high capacity. The unit converts two analog channels into digital form at 44-kHz sampling rate and stores the information in digital form in a common video cassette. Bandwidth of each channel is from direct current to approximately 20 kHz and the dynamic range is close to 90 dB. The total storage capacity in a 3-h video cassette is 2 Gbytes. The information can be retrieved in analog or digital form.
Bezanilla, F
1985-01-01
A modified digital audio processor, a video cassette recorder, and some simple added circuitry are assembled into a recording device of high capacity. The unit converts two analog channels into digital form at 44-kHz sampling rate and stores the information in digital form in a common video cassette. Bandwidth of each channel is from direct current to approximately 20 kHz and the dynamic range is close to 90 dB. The total storage capacity in a 3-h video cassette is 2 Gbytes. The information can be retrieved in analog or digital form. PMID:3978213
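A quick consistency check of the quoted capacity, assuming 16-bit samples (the abstract itself states only the 44-kHz rate and the ~90 dB dynamic range):
```python
# Two channels of 44-kHz, 16-bit samples over a 3-hour video cassette.
channels, fs, bytes_per_sample = 2, 44_000, 2
seconds = 3 * 3600
total_bytes = channels * fs * bytes_per_sample * seconds
print(total_bytes / 1e9)   # ~1.9 GB, consistent with the stated 2-Gbyte capacity
```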
A contourlet transform based algorithm for real-time video encoding
NASA Astrophysics Data System (ADS)
Katsigiannis, Stamos; Papaioannou, Georgios; Maroulis, Dimitris
2012-06-01
In recent years, real-time video communication over the internet has been widely utilized for applications like video conferencing. Streaming live video over heterogeneous IP networks, including wireless networks, requires video coding algorithms that can support various levels of quality in order to adapt to the network end-to-end bandwidth and transmitter/receiver resources. In this work, a scalable video coding and compression algorithm based on the Contourlet Transform is proposed. The algorithm allows for multiple levels of detail, without re-encoding the video frames, by just dropping the encoded information referring to higher resolution than needed. Compression is achieved by means of lossy and lossless methods, as well as variable bit rate encoding schemes. Furthermore, due to the transformation utilized, it does not suffer from blocking artifacts that occur with many widely adopted compression algorithms. Another highly advantageous characteristic of the algorithm is the suppression of noise induced by low-quality sensors usually encountered in web-cameras, due to the manipulation of the transform coefficients at the compression stage. The proposed algorithm is designed to introduce minimal coding delay, thus achieving real-time performance. Performance is enhanced by utilizing the vast computational capabilities of modern GPUs, providing satisfactory encoding and decoding times at relatively low cost. These characteristics make this method suitable for applications like video-conferencing that demand real-time performance, along with the highest visual quality possible for each user. Through the presented performance and quality evaluation of the algorithm, experimental results show that the proposed algorithm achieves better or comparable visual quality relative to other compression and encoding methods tested, while maintaining a satisfactory compression ratio. Especially at low bitrates, it provides more human-eye friendly images compared to algorithms utilizing block-based coding, like the MPEG family, as it introduces fuzziness and blurring instead of artificial block artifacts.
Integrating Digital Video Technology in the Classroom
ERIC Educational Resources Information Center
Lim, Jon; Pellett, Heidi Henschel; Pellett, Tracy
2009-01-01
Digital video technology can be a powerful tool for teaching and learning. It enables students to develop a variety of skills including research, communication, decision-making, problem-solving, and other higher-order critical-thinking skills. In addition, digital video technology has the potential to enrich university classroom curricula, enhance…
2012 ARPA-E Energy Innovation Summit: Profiling General Compression: A River of Wind
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marcus, David; Ingersoll, Eric
The third annual ARPA-E Energy Innovation Summit was held in Washington D.C. in February, 2012. The event brought together key players from across the energy ecosystem - researchers, entrepreneurs, investors, corporate executives, and government officials - to share ideas for developing and deploying the next generation of energy technologies. A few videos were selected for showing during the Summit to attendees. These 'performer videos' highlight innovative research that is ongoing and related to the main topics of the Summit's sessions. Featured in this video are David Marcus, Founder of General Compression, and Eric Ingersoll, CEO of General Compression. General Compression, with the help of ARPA-E funding, has created an advanced air compression process which can store and release more than a week's worth of the energy generated by wind turbines.
2012 ARPA-E Energy Innovation Summit: Profiling General Compression: A River of Wind
Marcus, David; Ingersoll, Eric
2018-05-30
The third annual ARPA-E Energy Innovation Summit was held in Washington D.C. in February, 2012. The event brought together key players from across the energy ecosystem - researchers, entrepreneurs, investors, corporate executives, and government officials - to share ideas for developing and deploying the next generation of energy technologies. A few videos were selected for showing during the Summit to attendees. These 'performer videos' highlight innovative research that is ongoing and related to the main topics of the Summit's sessions. Featured in this video are David Marcus, Founder of General Compression, and Eric Ingersoll, CEO of General Compression. General Compression, with the help of ARPA-E funding, has created an advanced air compression process which can store and release more than a week's worth of the energy generated by wind turbines.
SHD digital cinema distribution over a long distance network of Internet2
NASA Astrophysics Data System (ADS)
Yamaguchi, Takahiro; Shirai, Daisuke; Fujii, Tatsuya; Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu
2003-06-01
We have developed a prototype SHD (Super High Definition) digital cinema distribution system that can store, transmit and display eight-million-pixel motion pictures that have the image quality of a 35-mm film movie. The system contains a video server, a real-time decoder, and a D-ILA projector. Using a gigabit Ethernet link and TCP/IP, the server transmits JPEG2000 compressed motion picture data streams to the decoder at transmission speeds as high as 300 Mbps. The received data streams are decompressed by the decoder, and then projected onto a screen via the projector. With this system, digital cinema contents can be distributed over a wide-area optical gigabit IP network. However, when digital cinema contents are delivered over long distances by using a gigabit IP network and TCP, the round-trip time increases and network throughput either stops rising or diminishes. In a long-distance SHD digital cinema transmission experiment performed on the Internet2 network in October 2002, we adopted enlargement of the TCP window, multiple TCP connections, and shaping function to control the data transmission quantity. As a result, we succeeded in transmitting the SHD digital cinema content data at about 300 Mbps between Chicago and Los Angeles, a distance of more than 3000 km.
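The need for TCP window enlargement and multiple connections follows from the bandwidth-delay product; the round-trip time below is an assumed, illustrative figure for the Chicago-Los Angeles path:
```python
# How much data must be in flight to sustain ~300 Mbps over a long-haul path.
target_rate = 300e6          # bits per second
rtt = 0.060                  # seconds, assumed round-trip time Chicago - Los Angeles
window_needed = target_rate * rtt / 8      # bytes that must be unacknowledged in flight
print(window_needed / 1024)                # ~2,200 KB, far beyond a classic 64 KB TCP window
```
Without window scaling (or several parallel connections sharing the load), throughput is capped at window/RTT regardless of the available link capacity, which is exactly the stall described above.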
A joint source-channel distortion model for JPEG compressed images.
Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C
2006-06-01
The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-code modulation, and run-length coding are included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
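For reference, the PSNR figure of merit that the model predicts, in its standard form for 8-bit imagery (a textbook definition, not code from the paper):
```python
import numpy as np

def psnr(original, distorted):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
```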
Establishing a gold standard for manual cough counting: video versus digital audio recordings
Smith, Jaclyn A; Earis, John E; Woodcock, Ashley A
2006-01-01
Background Manual cough counting is time-consuming and laborious; however it is the standard to which automated cough monitoring devices must be compared. We have compared manual cough counting from video recordings with manual cough counting from digital audio recordings. Methods We studied 8 patients with chronic cough, overnight in laboratory conditions (diagnoses were 5 asthma, 1 rhinitis, 1 gastro-oesophageal reflux disease and 1 idiopathic cough). Coughs were recorded simultaneously using a video camera with infrared lighting and digital sound recording. The numbers of coughs in each 8 hour recording were counted manually, by a trained observer, in real time from the video recordings and using audio-editing software from the digital sound recordings. Results The median cough frequency was 17.8 (IQR 5.9–28.7) cough sounds per hour in the video recordings and 17.7 (6.0–29.4) coughs per hour in the digital sound recordings. There was excellent agreement between the video and digital audio cough rates; mean difference of -0.3 coughs per hour (SD ± 0.6), 95% limits of agreement -1.5 to +0.9 coughs per hour. Video recordings had poorer sound quality even in controlled conditions and can only be analysed in real time (8 hours per recording). Digital sound recordings required 2–4 hours of analysis per recording. Conclusion Manual counting of cough sounds from digital audio recordings has excellent agreement with simultaneous video recordings in laboratory conditions. We suggest that ambulatory digital audio recording is therefore ideal for validating future cough monitoring devices, as this can be performed in the patient's own environment. PMID:16887019
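A sketch of the agreement analysis reported above (mean difference and 95% limits of agreement, Bland-Altman style); the input arrays stand for per-recording cough rates and are assumptions:
```python
import numpy as np

def limits_of_agreement(video_rates, audio_rates):
    """Mean difference and 95% limits of agreement between two sets of paired measurements."""
    d = np.asarray(video_rates, float) - np.asarray(audio_rates, float)
    mean_diff, sd = d.mean(), d.std(ddof=1)
    return mean_diff, (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd)
```
With the reported mean difference of -0.3 and SD of 0.6 coughs per hour, this gives limits of roughly -1.5 to +0.9 coughs per hour, matching the figures quoted in the abstract.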
Streaming Audio and Video: New Challenges and Opportunities for Museums.
ERIC Educational Resources Information Center
Spadaccini, Jim
Streaming audio and video present new challenges and opportunities for museums. Streaming media is easier to author and deliver to Internet audiences than ever before; digital video editing is commonplace now that the tools--computers, digital video cameras, and hard drives--are so affordable; the cost of serving video files across the Internet…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaunt, R.
1997-05-01
An international standard has emerged for the first true multimedia format. Digital Versatile Disc is its official name; you may know it as the Digital Video Disc. DVD has applications in movies, music, games, information CD-ROMs, and many other areas where massive amounts of digital information are needed. Did I say massive amounts of data? Would you believe over 17 gigabytes on a single piece of plastic the size of an audio CD? That's the promise, at least, by the group of nine electronics manufacturers who have agreed to the format specification, and who hope to make this goal a reality by 1998. In this major agreement, which didn't come easily, the manufacturers will combine Sony and Philips's single-sided, double-layer MMCD format with Toshiba and Matsushita's double-sided Super Density disk. By spring of this year, they plan to market the first 4.7 gigabyte units. The question is: will DVD take off? Some believe that read-only disks recorded with movies will be about as popular as video laser disks. They say that until the erasable/writable DVD arrives, the consumer will most likely not buy it. Also, DVD has a good market for replacement of CD-ROMs. Back in the early 80s, the international committee deciding the format of the audio compact disk decided its length would be 73 minutes. This, they declared, would allow Beethoven's 9th Symphony to be contained entirely on a single CD. Similarly, today it was agreed that the playback length of a single-sided, single-layer DVD would be 133 minutes, long enough to hold 94% of all feature-length movies. Further, audio can be in Dolby's AC-3 stereo or 5.1 tracks of surround sound, better than CD-quality audio (16 bits at 48 kHz). In addition, there are three to five language tracks, copy protection, and parental "locks" for R-rated movies. DVD will be backwards compatible with current CD-ROM and audio CD formats. Added versatility comes by way of multiple aspect ratios: 4:3 pan-scan, 4:3 letterbox, and 16:9 widescreen. MPEG-2 is the selected image compression format, with full ITU Rec. 601 video resolution (720x480). MPEG-2 and AC-3 are also part of the U.S. high-definition Advanced Television standard (ATV). DVD has an average video bit rate of 3.5 Mbits/sec, or 4.69 Mbits/sec for image and sound. Unlike digital television transmission, which will use fixed-length packets for audio and video, DVD will use variable-length packets with a maximum throughput of more than 10 Mbits/sec. The higher bit rate allows for less compression of difficult-to-encode material. Even with all the compression, narrow-beam red-light lasers are required to significantly increase the physical data density of a platter by decreasing the size of the pits. This allows 4.7 gigabytes of data on a single-sided, single-layer DVD. The maximum 17 gigabyte capacity is achieved by employing two reflective layers on both sides of the disk. To read the embedded layer of data, the laser's focal length is altered so that the top-layer pits are not picked up by the reader. It will be a couple of years before we have dual-layer, double-sided DVDs, and it will be achieved in four stages. The first format to appear will be the single-sided, single-layer disk (4.7 gigabytes). That will allow Hollywood to begin releasing DVD movie titles. DVD-ROM will be the next phase, allowing 4.7 gigabytes of CD-ROM-like content. The third stage will be write-once disks, and stage four will be rewritable disks. These last stages present some issues which have yet to be resolved. For one, copyrighted materials may require some form of payment system, and there is the issue that erasable disks reflect less light than today's DVDs. The problem here is that their data most likely will not be readable on earlier-built players.
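A quick check that the quoted figures are mutually consistent: the agreed playback length times the average combined bit rate roughly fills one single-layer side.
```python
# 133 minutes at the average combined (image + sound) rate of 4.69 Mbit/s.
minutes, rate_bits_per_s = 133, 4.69e6
bytes_needed = minutes * 60 * rate_bits_per_s / 8
print(bytes_needed / 1e9)   # ~4.7 GB, the single-sided, single-layer capacity
```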
Backwards compatible high dynamic range video compression
NASA Astrophysics Data System (ADS)
Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.
2014-02-01
This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone-mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment. The base layer content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the result of the inverse tone-mapped base layer content and the original video stream. Prediction of the high dynamic range content reduces the redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. The perceptually uniform color space enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already decoded data for the current frame. Different video compression techniques are compared: backwards compatible and non-backwards compatible, using AVC and HEVC codecs.
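A minimal sketch of the two-layer idea, using a simple global Reinhard-style tone map and a log domain standing in for the perceptually uniform color space; the actual codec uses richer (including local) operators and AVC/HEVC coding of both layers:
```python
import numpy as np

def tone_map(hdr):
    """Map HDR luminance to an 8-bit base layer (simple global operator, assumed here)."""
    ldr = hdr / (1.0 + hdr)
    return np.round(ldr * 255).astype(np.uint8)

def inverse_tone_map(base):
    """Predict HDR values from the decoded base layer."""
    ldr = np.clip(base.astype(float) / 255, 0.0, 0.999)
    return ldr / (1.0 - ldr)

def encode(hdr):
    base = tone_map(hdr)
    # enhancement layer: difference between original and prediction in a log ("perceptual") domain
    residual = np.log1p(hdr) - np.log1p(inverse_tone_map(base))
    return base, residual

def decode(base, residual):
    return np.expm1(np.log1p(inverse_tone_map(base)) + residual)
```
Legacy decoders consume only the base layer, while an HDR-capable decoder adds the residual back onto the inverse-tone-mapped prediction to recover the full-range signal.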
ERIC Educational Resources Information Center
Ludlow, Barbara L.; Foshay, John B.; Duff, Michael C.
Video presentations of teaching episodes in home, school, and community settings and audio recordings of parents' and professionals' views can be important adjuncts to personnel preparation in special education. This paper describes instructional applications of digital media and outlines steps in producing audio and video segments. Digital audio…
Digital Video and the Internet: A Powerful Combination.
ERIC Educational Resources Information Center
Barron, Ann E.; Orwig, Gary W.
1995-01-01
Provides an overview of digital video and outlines hardware and software necessary for interactive training on the World Wide Web and for videoconferences via the Internet. Lists sites providing additional information on digital video, on CU-SeeMe software, and on MBONE (Multicast BackBONE), a technology that permits real-time transmission of…
Exploring Factors Influencing Acceptance and Use of Video Digital Libraries
ERIC Educational Resources Information Center
Ju, Boryung; Albertson, Dan
2018-01-01
Introduction: This study examines the effects of certain key factors on users' intention to ultimately adopt and use video digital libraries for facilitating their information needs. The individual factors identified for this study, based on their given potential to influence use and acceptance of video digital libraries, were categorised for data…
The Use of Digital Video in Physical Education
ERIC Educational Resources Information Center
Weir, Tony; Connor, Sean
2009-01-01
This paper details the technical and operational aspects of a project investigating the role of digital video in physical education in 12 Irish schools over a period of two academic years. The project design involved a qualitative investigation into the use of digital video in three areas of physical education, namely teaching, learning and…
Facilitating Digital Video Production in the Language Arts Curriculum
ERIC Educational Resources Information Center
McKenney, Susan; Voogt, Joke
2011-01-01
Two studies were conducted to facilitate the development of feasible support for the process of integrating digital video making activities in the primary school language arts curriculum. The first study explored which teaching supports would be necessary to enable primary school children to create digital video as a means of fostering…
State Skill Standards: Digital Video & Broadcast Production
ERIC Educational Resources Information Center
Bullard, Susan; Tanner, Robin; Reedy, Brian; Grabavoi, Daphne; Ertman, James; Olson, Mark; Vaughan, Karen; Espinola, Ron
2007-01-01
The standards in this document are for digital video and broadcast production programs and are designed to clearly state what the student should know and be able to do upon completion of an advanced high-school program. Digital Video and Broadcast Production is a program that consists of the initial fundamentals and sequential courses that prepare…
Effect of data compression on diagnostic accuracy in digital hand and chest radiography
NASA Astrophysics Data System (ADS)
Sayre, James W.; Aberle, Denise R.; Boechat, Maria I.; Hall, Theodore R.; Huang, H. K.; Ho, Bruce K. T.; Kashfian, Payam; Rahbar, Guita
1992-05-01
Image compression is essential to handle a large volume of digital images including CT, MR, CR, and digitized films in a digital radiology operation. The full-frame bit allocation using the cosine transform technique developed during the last few years has been proven to be an excellent irreversible image compression method. This paper describes the effect of using the hardware compression module on diagnostic accuracy in hand radiographs with subperiosteal resorption and chest radiographs with interstitial disease. Receiver operating characteristic analysis using 71 hand radiographs and 52 chest radiographs with five observers each demonstrates that there is no statistically significant difference in diagnostic accuracy between the original films and the compressed images with a compression ratio as high as 20:1.
NASA Astrophysics Data System (ADS)
Habibi, Ali
1993-01-01
The objective of this article is to present a discussion on the future of image data compression in the next two decades. It is virtually impossible to predict with any degree of certainty the breakthroughs in theory and developments, the milestones in advancement of technology, and the success of upcoming commercial products in the marketplace which will be the main factors shaping the future of image coding. What we propose to do, instead, is look back at the progress in image coding during the last two decades and assess the state of the art in image coding today. Then, by observing the trends in developments of theory, software, and hardware, coupled with the future needs for use and dissemination of imagery data and the constraints on the bandwidth and capacity of various networks, we predict the future state of image coding. What seems to be certain today is the growing need for bandwidth compression. Television is using technology which is half a century old and is ready to be replaced by high-definition television with an extremely high digital bandwidth. Smart telephones coupled with personal computers and TV monitors accommodating both printed and video data will be common in homes and businesses within the next decade. Efficient and compact digital processing modules using developing technologies will make bandwidth-compressed imagery the cheap and preferred alternative in satellite and on-board applications. In view of the above needs, we expect increased activities in development of theory, software, special purpose chips and hardware for image bandwidth compression in the next two decades. The following sections summarize the future trends in these areas.
Performance evaluation of the intra compression in the video coding standards
NASA Astrophysics Data System (ADS)
Abramowski, Andrzej
2015-09-01
The article presents a comparison of the Intra prediction algorithms in the current state-of-the-art video coding standards, including MJPEG 2000, VP8, VP9, H.264/AVC and H.265/HEVC. The effectiveness of techniques employed by each standard is evaluated in terms of compression efficiency and average encoding time. The compression efficiency is measured using BD-PSNR and BD-RATE metrics with H.265/HEVC results as an anchor. Tests are performed on a set of video sequences composed of sequences gathered by the Joint Collaborative Team on Video Coding during the development of the H.265/HEVC standard and 4K sequences provided by the Ultra Video Group. According to the results, H.265/HEVC provides significant bit-rate savings at the expense of computational complexity, while VP9 may be regarded as a compromise between efficiency and required encoding time.
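For reference, a sketch of the BD-RATE computation behind such comparisons (average bit-rate difference at equal PSNR from cubic fits of PSNR versus log-rate); the rate/PSNR arrays stand for measured operating points and are assumptions:
```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Average percent bit-rate change of 'test' relative to the 'ref' anchor at equal PSNR."""
    lr_ref, lr_test = np.log(rates_ref), np.log(rates_test)
    p_ref = np.polyfit(psnr_ref, lr_ref, 3)      # log-rate as a cubic function of PSNR
    p_test = np.polyfit(psnr_test, lr_test, 3)
    lo = max(min(psnr_ref), min(psnr_test))      # overlapping PSNR interval
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), [lo, hi])
    int_test = np.polyval(np.polyint(p_test), [lo, hi])
    avg_log_diff = ((int_test[1] - int_test[0]) - (int_ref[1] - int_ref[0])) / (hi - lo)
    return (np.exp(avg_log_diff) - 1) * 100      # negative values mean bit-rate savings
```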
Method and system for efficient video compression with low-complexity encoder
NASA Technical Reports Server (NTRS)
Chen, Jun (Inventor); He, Dake (Inventor); Sheinin, Vadim (Inventor); Jagmohan, Ashish (Inventor); Lu, Ligang (Inventor)
2012-01-01
Disclosed are a method and system for video compression, wherein the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a video decoder, wherein the method for encoding includes the steps of converting a source frame into a space-frequency representation; estimating conditional statistics of at least one vector of space-frequency coefficients; estimating encoding rates based on the said conditional statistics; and applying Slepian-Wolf codes with the said computed encoding rates. The preferred method for decoding includes the steps of: generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector; and performing Slepian-Wolf decoding of at least one source frequency vector based on the generated side-information, the Slepian-Wolf code bits and the encoder statistics.
Comparative data compression techniques and multi-compression results
NASA Astrophysics Data System (ADS)
Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.
2013-12-01
Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the more time saved. In such communication, we always want to transmit data efficiently and without noise. This paper presents several techniques for lossless compression of text-type data and comparative results for single and multiple compression, which help to identify the better compression output and to develop compression algorithms.
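A sketch of the kind of comparison described, measuring single and double ("multiple") compression ratios with three standard lossless compressors from the Python standard library; the file path is a placeholder:
```python
import bz2
import lzma
import zlib

text = open('sample.txt', 'rb').read()   # any text file; the path is a placeholder
for name, compress in (('zlib', zlib.compress), ('bz2', bz2.compress), ('lzma', lzma.compress)):
    once = compress(text)                # single compression
    twice = compress(once)               # multiple (double) compression
    print(name, len(text) / len(once), len(text) / len(twice))
```
A second pass over already-compressed, high-entropy data typically gains little or even expands the stream, which is the kind of single-versus-multiple result such a comparison tabulates.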
High efficiency video coding for ultrasound video communication in m-health systems.
Panayides, A; Antoniou, Z; Pattichis, M S; Pattichis, C S; Constantinides, A G
2012-01-01
Emerging high efficiency video compression methods and wider availability of wireless network infrastructure will significantly advance existing m-health applications. For medical video communications, the emerging video compression and network standards support low-delay and high-resolution video transmission, at the clinically acquired resolution and frame rates. Such advances are expected to further promote the adoption of m-health systems for remote diagnosis and emergency incidents in daily clinical practice. This paper compares the performance of the emerging high efficiency video coding (HEVC) standard to the current state-of-the-art H.264/AVC standard. The experimental evaluation, based on five atherosclerotic plaque ultrasound videos encoded at QCIF, CIF, and 4CIF resolutions, demonstrates that 50% reductions in bitrate requirements are possible for equivalent clinical quality.
Audiovisual focus of attention and its application to Ultra High Definition video compression
NASA Astrophysics Data System (ADS)
Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj
2014-02-01
Using Focus of Attention (FoA) as a perceptual process in image and video compression is a well-known approach to increasing coding efficiency. It has been shown that foveated coding, where compression quality varies across the image according to the region of interest, is more efficient than coding in which all regions are compressed in a similar way. However, widespread use of such foveated compression has been prevented by two main conflicting causes, namely the complexity and the efficiency of algorithms for FoA detection. One way around these is to use as much information as possible from the scene. Since most video sequences have associated audio, and moreover, in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining of low complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between audio and video signal components. Results of the audiovisual FoA detection algorithm are subsequently taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder, producing a bitstream which is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high and ultra-high definition audiovisual sequences is explored and the amount of gain in compression efficiency is analyzed.
Writing/Thinking in Real Time: Digital Video and Corpus Query Analysis
ERIC Educational Resources Information Center
Park, Kwanghyun; Kinginger, Celeste
2010-01-01
The advance of digital video technology in the past two decades facilitates empirical investigation of learning in real time. The focus of this paper is the combined use of real-time digital video and a networked linguistic corpus for exploring the ways in which these technologies enhance our capability to investigate the cognitive process of…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-26
... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] In the Matter of Digital Video Systems, Inc., Geocom Resources, Inc., and GoldMountain Exploration Corp., and Real Data, Inc. (a/k/a Galtech... securities of Digital Video Systems, Inc. because it has not filed any periodic reports since the period...
The Effect of Digital Video Games on EFL Students' Language Learning Motivation
ERIC Educational Resources Information Center
Ebrahimzadeh, Mohsen; Alavi, Sepideh
2017-01-01
The study examined the effect of a commercial digital video game on high school students' language learning motivation. Participants were 241 male students randomly assigned to one of the following three treatments: Readers, who intensively read the game's story; Players, who played the digital video game; and Watchers, who watched two classmates…
A Learning Design for Student-Generated Digital Storytelling
ERIC Educational Resources Information Center
Kearney, Matthew
2011-01-01
The literature on digital video in education emphasises the use of pre-fabricated, instructional-style video assets. Learning designs for supporting the use of these expert-generated video products have been developed. However, there has been a paucity of pedagogical frameworks for facilitating specific genres of learner-generated video projects.…
26 CFR 1.181-3 - Qualified film or television production.
Code of Federal Regulations, 2012 CFR
2012-04-01
... any motion picture film or video tape (including digital video) production the production costs of... person acquires a completed motion picture film or video tape (including digital video) that the seller... include property for which records are required to be maintained under 18 U.S.C. 2257. (c) Compensation...
26 CFR 1.181-3 - Qualified film or television production.
Code of Federal Regulations, 2014 CFR
2014-04-01
... any motion picture film or video tape (including digital video) production the production costs of... person acquires a completed motion picture film or video tape (including digital video) that the seller... include property for which records are required to be maintained under 18 U.S.C. 2257. (c) Compensation...
26 CFR 1.181-3 - Qualified film or television production.
Code of Federal Regulations, 2013 CFR
2013-04-01
... any motion picture film or video tape (including digital video) production the production costs of... person acquires a completed motion picture film or video tape (including digital video) that the seller... include property for which records are required to be maintained under 18 U.S.C. 2257. (c) Compensation...
A digital audio/video interleaving system. [for Shuttle Orbiter
NASA Technical Reports Server (NTRS)
Richards, R. W.
1978-01-01
A method of interleaving an audio signal with its associated video signal for simultaneous transmission or recording, and the subsequent separation of the two signals, is described. Comparisons are made between the new audio signal interleaving system and the Skylab PAM audio/video interleaving system, pointing out the improvements gained by using the digital audio/video interleaving system. It was found that the digital technique is the simplest, most effective, and most reliable method for interleaving audio and/or other types of data into the video signal for the Shuttle Orbiter application. Details of the design of a multiplexer capable of accommodating two basic data channels, each consisting of a single 31.5-kb/s digital bit stream, are given. An adaptive slope delta modulation system is introduced to digitize the audio signals, producing high immunity of word intelligibility to channel errors, primarily due to the robust nature of the delta-modulation algorithm.
Cheremkhin, Pavel A; Kurbatova, Ekaterina A
2018-01-01
Compression of digital holograms can significantly ease the storage of objects and data in 2D and 3D form, as well as their transmission and reconstruction. Wavelet-based methods compress standard images at high ratios (up to 20-50 times) with minimal loss of quality. In the case of digital holograms, however, applying wavelets directly does not yield high compression values; additional preprocessing and postprocessing can provide significant compression of holograms with acceptable quality of the reconstructed images. In this paper, the application of wavelet transforms to the compression of off-axis digital holograms is considered. A combined technique is investigated, based on zero- and twin-order elimination, wavelet compression of the amplitude and phase components of the resulting Fourier spectrum, and further compression of the wavelet coefficients by thresholding and quantization. Numerical experiments on the reconstruction of images from the compressed holograms are performed, and a comparative analysis of the applicability of various wavelets and of the methods for additional compression of the wavelet coefficients is carried out. Optimum compression parameters for these methods can be estimated. The size of the holographic data was reduced by a factor of up to 190.
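A minimal sketch of the wavelet stage described above, assuming the PyWavelets package and using hard thresholding as the additional coefficient compression; the zero- and twin-order elimination step and the final quantization/entropy coding are omitted, and the function and variable names are illustrative rather than the authors'.

    import numpy as np
    import pywt

    def compress_component(component, wavelet='db4', level=3, keep=0.05):
        """Wavelet-compress one real-valued component (amplitude or phase):
        keep only the largest `keep` fraction of coefficients (hard threshold)."""
        coeffs = pywt.wavedec2(component, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thresh = np.quantile(np.abs(arr), 1.0 - keep)
        arr = pywt.threshold(arr, thresh, mode='hard')
        coeffs = pywt.array_to_coeffs(arr, slices, output_format='wavedec2')
        return pywt.waverec2(coeffs, wavelet)

    # Toy off-axis hologram spectrum: compress amplitude and phase separately,
    # then recombine into a complex spectrum for reconstruction.
    spectrum = np.fft.fftshift(np.fft.fft2(np.random.rand(256, 256)))
    amp = compress_component(np.abs(spectrum))
    phs = compress_component(np.angle(spectrum))
    reconstructed = np.fft.ifft2(np.fft.ifftshift(amp * np.exp(1j * phs)))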
Examining the Effectiveness of Digital Video Recordings on Oral Performance of EFL Learners
ERIC Educational Resources Information Center
Göktürk, Nazlinur
2016-01-01
This study reports the results of an action-based study conducted in an EFL class to examine whether digital video recordings would contribute to the enhancement of EFL learners' oral fluency skills. It also investigates the learners' perceptions of the use of digital video recordings in a speaking class. 10 Turkish EFL learners participated in…
Viewing Michigan's Digital Future: Results of a Survey of Educators' Use of Digital Video in the USA
ERIC Educational Resources Information Center
Mardis, Marcia A.
2009-01-01
Digital video is a growing and important presence in student learning. This paper reports the results of a survey of American educators in Michigan (n = 426) conducted in spring 2008. The survey included questions about educators' attitudes toward the streaming and downloadable video services available to them in their schools. The survey results…
Davis, Kevin C; Shafer, Paul R; Rodes, Robert; Kim, Annice; Hansen, Heather; Patel, Deesha; Coln, Caryn; Beistle, Diane
2016-01-01
Background: Federal and state public health agencies in the United States are increasingly using digital advertising and social media to promote messages from broader multimedia campaigns. However, little evidence exists on population-level campaign awareness and relative cost efficiencies of digital advertising in the context of a comprehensive public health education campaign. Objective: Our objective was to compare the impact of increased doses of digital video and television advertising from the 2013 Tips From Former Smokers (Tips) campaign on overall campaign awareness at the population level. We also compared the relative cost efficiencies across these media platforms. Methods: We used data from a large national online survey of approximately 15,000 US smokers conducted in 2013 immediately after the conclusion of the 2013 Tips campaign. These data were used to compare the effects of variation in media dose of digital video and television advertising on population-level awareness of the Tips campaign. We implemented higher doses of digital video among selected media markets and randomly selected other markets to receive similar higher doses of television ads. Multivariate logistic regressions estimated the odds of overall campaign awareness via digital or television format as a function of higher-dose media in each market area. All statistical tests used the .05 threshold for statistical significance and the .10 level for marginal nonsignificance. We used adjusted advertising costs for the additional doses of digital and television advertising to compare the cost efficiencies of digital and television advertising on the basis of costs per percentage point of population awareness generated. Results: Higher-dose digital video advertising was associated with 94% increased odds of awareness of any ad online relative to standard-dose markets (P<.001). Higher-dose digital advertising was associated with a marginally nonsignificant increase (46%) in overall campaign awareness regardless of media format (P=.09). Higher-dose television advertising was associated with 81% increased odds of overall ad awareness regardless of media format (P<.001). Increased doses of television advertising were also associated with significantly higher odds of awareness of any ad on television (P<.001) and online (P=.04). The adjusted cost of each additional percentage point of population-level reach generated by higher doses of advertising was approximately US $440,000 for digital advertising and US $1 million for television advertising. Conclusions: Television advertising generated relatively higher levels of overall campaign awareness. However, digital video was relatively more cost efficient for generating awareness. These results suggest that digital video may be used as a cost-efficient complement to traditional advertising modes (eg, television), but digital video should not replace television given the relatively smaller audience size of digital video viewers. PMID: 27627853
Study and simulation of low rate video coding schemes
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Chen, Yun-Chung; Kipp, G.
1992-01-01
The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color mapped images, robust coding scheme for packet video, recursively indexed differential pulse code modulation, image compression technique for use on token ring networks, and joint source/channel coder design.
Fundamental study of compression for movie files of coronary angiography
NASA Astrophysics Data System (ADS)
Ando, Takekazu; Tsuchiya, Yuichiro; Kodera, Yoshie
2005-04-01
For network distribution of movie files used as reference material, lossy-compressed movie files with small sizes can be useful. We chose three kinds of coronary stricture movies with different motion speeds as test objects: movies with slow, normal, and fast heart rates. MPEG-1, DivX5.11, WMV9 (Windows Media Video 9), and WMV9-VCM (Windows Media Video 9-Video Compression Manager) movies were produced from the three AVI-format movies with different motion speeds. Five kinds of movies, the four compressed versions and the uncompressed AVI (used in place of the DICOM format), were evaluated by Thurstone's method. The evaluation factors were sharpness, granularity, contrast, and comprehensive evaluation. In the virtual bradycardia movie, AVI received the best evaluation for all factors except granularity. In the virtual normal movie, the best compression technique differed across the evaluation factors. In the virtual tachycardia movie, MPEG-1 received the best evaluation for all factors except contrast. The most suitable compression format thus depends on the speed of the movie, owing to differences between the compression algorithms, and in particular to how inter-frame compression is handled. Movie compression algorithms combine inter-frame and intra-frame compression, and because each method affects the image differently, the relation between the compression algorithm and our results needs further examination.
High-performance software-only H.261 video compression on PC
NASA Astrophysics Data System (ADS)
Kasperovich, Leonid
1996-03-01
This paper describes an implementation of a software H.261 codec for the PC that takes advantage of the fast computational algorithms for DCT-based video compression presented by the author at the February 1995 SPIE/IS&T meeting. The motivation for developing the H.261 prototype system is to demonstrate the feasibility of a real-time, software-only videoconferencing solution that operates across a wide range of network bandwidths, frame rates, and input video resolutions. As network bandwidth increases, higher frame rates and resolutions can be transmitted, which in turn requires a software codec able to compress pictures of CIF (352 x 288) resolution at up to 30 frames/s. Running on a 133 MHz Pentium PC, the codec presented compresses CIF video at 21-23 frames/s. This result is comparable to known hardware-based H.261 solutions, but requires no dedicated hardware. The methods used to achieve high performance and the program optimization techniques for the Pentium microprocessor are presented, along with a performance profile showing the actual contribution of the different encoding/decoding stages to the overall computational process.
NASA Astrophysics Data System (ADS)
Seeram, Euclid
2006-03-01
The large volumes of digital images produced by digital imaging modalities in radiology have provided the motivation for the development of picture archiving and communication systems (PACS), in an effort to provide an organized mechanism for digital image management. The development of more sophisticated methods of digital image acquisition (multislice CT and digital mammography, for example), as well as the implementation and performance of PACS and teleradiology systems in a health care environment, have created challenges in the area of image compression with respect to storing and transmitting digital images. Image compression can be reversible (lossless) or irreversible (lossy). While in the former there is no loss of information, the latter raises concerns because information is lost. This loss of information from diagnostic medical images is of primary concern not only to radiologists, but also to patients and their physicians. In 1997, Goldberg pointed out that "there is growing evidence that lossy compression can be applied without significantly affecting the diagnostic content of images... there is growing consensus in the radiologic community that some forms of lossy compression are acceptable". The purpose of this study was to explore the opinions of expert radiologists and related professional organizations on the use of irreversible compression in routine practice. The opinions of notable radiologists in the US and Canada are varied, indicating no consensus on the use of irreversible compression in primary diagnosis; however, they are generally positive about the image storage and transmission advantages. Almost all radiologists are concerned with the litigation potential of an incorrect diagnosis based on irreversibly compressed images. The survey of several radiology professional and related organizations reveals that no professional practice standards exist for the use of irreversible compression. Currently, the only standard addressing image compression is stated in the ACR's Technical Standards for Teleradiology and Digital Image Management.
Two-thumb technique is superior to two-finger technique during lone rescuer infant manikin CPR.
Udassi, Sharda; Udassi, Jai P; Lamb, Melissa A; Theriaque, Douglas W; Shuster, Jonathan J; Zaritsky, Arno L; Haque, Ikram U
2010-06-01
Infant CPR guidelines recommend two-finger chest compression with a lone rescuer and two-thumb with two rescuers. Two-thumb provides better chest compression but is perceived to be associated with increased ventilation hands-off time. We hypothesized that lone rescuer two-thumb CPR is associated with increased ventilation cycle time, decreased ventilation quality and fewer chest compressions compared to two-finger CPR in an infant manikin model. Crossover observational study randomizing 34 healthcare providers to perform 2 min CPR at a compression rate of 100 min⁻¹ using a 30:2 compression:ventilation ratio comparing two-thumb vs. two-finger techniques. A Laerdal Baby ALS Trainer manikin was modified to digitally record compression rate, compression depth, compression pressure and ventilation cycle time (two mouth-to-mouth breaths). Manikin chest rise with breaths was video recorded and later reviewed by two blinded CPR instructors for percent effective breaths. Data (mean±SD) were analyzed using a two-tailed paired t-test. Significance was defined as p≤0.05. Mean % effective breaths were 90±18.6% in two-thumb and 88.9±21.1% in two-finger, p=0.65. Mean time (s) to deliver two mouth-to-mouth breaths was 7.6±1.6 in two-thumb and 7.0±1.5 in two-finger, p<0.0001. Mean delivered compressions per minute were 87±11 in two-thumb and 92±12 in two-finger, p=0.0005. Two-thumb resulted in significantly higher compression depth and compression pressure compared to the two-finger technique. Healthcare providers required 0.6 s longer to deliver two breaths during two-thumb lone rescuer infant CPR, but there was no significant difference in percent effective breaths delivered between the two techniques. Two-thumb CPR had 4 fewer delivered compressions per minute, which may be offset by far more effective compression depth and compression pressure compared to the two-finger technique. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
Digital Video Revisited: Storytelling, Conferencing, Remixing
ERIC Educational Resources Information Center
Godwin-Jones, Robert
2012-01-01
Five years ago in the February, 2007, issue of LLT, I wrote about developments in digital video of potential interest to language teachers. Since then, there have been major changes in options for video capture, editing, and delivery. One of the most significant has been the rise in popularity of video-based storytelling, enabled largely by…
ERIC Educational Resources Information Center
Campbell, Laurie O.; Cox, Thomas D.
2018-01-01
Students within this study followed the ICSDR (Identify, Conceptualize/Connect, Storyboard, Develop, Review/Reflect/Revise) development model to create digital video, as a personalized and active learning assignment. The participants, graduate students in education, indicated that following the ICSDR framework for student-authored video guided…
47 CFR 79.109 - Activating accessibility features.
Code of Federal Regulations, 2014 CFR
2014-10-01
... ACCESSIBILITY OF VIDEO PROGRAMMING Apparatus § 79.109 Activating accessibility features. (a) Requirements... video programming transmitted in digital format simultaneously with sound, including apparatus designed to receive or display video programming transmitted in digital format using Internet protocol, with...
Radiological Image Compression
NASA Astrophysics Data System (ADS)
Lo, Shih-Chung Benedict
The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios from 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, which include CT head and body images and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) of the difference image, defined as the difference between the original image and the image reconstructed at a given compression ratio, is used as a global measure of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
The Compressed Video Experience.
ERIC Educational Resources Information Center
Weber, John
In the fall semester of 1995, Southern Arkansas University-Magnolia (SAU-M) began a two-semester trial delivering college classes via a compressed video link between SAU-M and its sister school, Southern Arkansas University Tech (SAU-T), in Camden. As soon as the University began broadcasting and receiving classes, it was discovered that using the…
Motmot, an open-source toolkit for realtime video acquisition and analysis.
Straw, Andrew D; Dickinson, Michael H
2009-07-22
Video cameras sense passively from a distance, offer a rich information stream, and provide intuitively meaningful raw data. Camera-based imaging has thus proven critical for many advances in neuroscience and biology, with applications ranging from cellular imaging of fluorescent dyes to tracking of whole-animal behavior at ecologically relevant spatial scales. Here we present 'Motmot': an open-source software suite for acquiring, displaying, saving, and analyzing digital video in real-time. At the highest level, Motmot is written in the Python computer language. The large amounts of data produced by digital cameras are handled by low-level, optimized functions, usually written in C. This high-level/low-level partitioning and use of select external libraries allow Motmot, with only modest complexity, to perform well as a core technology for many high-performance imaging tasks. In its current form, Motmot allows for: (1) image acquisition from a variety of camera interfaces (package motmot.cam_iface), (2) the display of these images with minimal latency and computer resources using wxPython and OpenGL (package motmot.wxglvideo), (3) saving images with no compression in a single-pass, low-CPU-use format (package motmot.FlyMovieFormat), (4) a pluggable framework for custom analysis of images in realtime and (5) firmware for an inexpensive USB device to synchronize image acquisition across multiple cameras, with analog input, or with other hardware devices (package motmot.fview_ext_trig). These capabilities are brought together in a graphical user interface, called 'FView', allowing an end user to easily view and save digital video without writing any code. One plugin for FView, 'FlyTrax', which tracks the movement of fruit flies in real-time, is included with Motmot, and is described to illustrate the capabilities of FView. Motmot enables realtime image processing and display using the Python computer language. In addition to the provided complete applications, the architecture allows the user to write relatively simple plugins, which can accomplish a variety of computer vision tasks and be integrated within larger software systems. The software is available at http://code.astraw.com/projects/motmot.
Evaluation of H.264 and H.265 full motion video encoding for small UAS platforms
NASA Astrophysics Data System (ADS)
McGuinness, Christopher D.; Walker, David; Taylor, Clark; Hill, Kerry; Hoffman, Marc
2016-05-01
Of all the steps in the image acquisition and formation pipeline, compression is the only process that degrades image quality. A selected compression algorithm succeeds or fails to provide sufficient quality at the requested compression rate depending on how well the algorithm is suited to the input data. Applying an algorithm designed for one type of data to a different type often results in poor compression performance. This is mostly the case when comparing the performance of H.264, designed for standard definition data, to HEVC (High Efficiency Video Coding), which the Joint Collaborative Team on Video Coding (JCT-VC) designed for high-definition data. This study focuses on evaluating how HEVC compares to H.264 when compressing data from small UAS platforms. To compare the standards directly, we assess two open-source traditional software solutions: x264 and x265. These software-only comparisons allow us to establish a baseline of how much improvement can generally be expected of HEVC over H.264. Then, specific solutions leveraging different types of hardware are selected to understand the limitations of commercial-off-the-shelf (COTS) options. Algorithmically, regardless of the implementation, HEVC is found to provide similar quality video as H.264 at 40% lower data rates for video resolutions greater than 1280x720, roughly 1 Megapixel (MPx). For resolutions less than 1MPx, H.264 is an adequate solution though a small (roughly 20%) compression boost is earned by employing HEVC. New low cost, size, weight, and power (CSWAP) HEVC implementations are being developed and will be ideal for small UAS systems.
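As a rough illustration of the kind of software-only comparison described above, the Python sketch below invokes the two open-source encoders (libx264 and libx265) through ffmpeg at the same target bitrate. It assumes an ffmpeg binary built with both libraries is on the PATH; the file names and bitrate are placeholders, not the study's test conditions.

    import subprocess

    def encode(src, codec, bitrate, out):
        """Encode `src` with ffmpeg using the given codec at a fixed bitrate."""
        subprocess.run(
            ["ffmpeg", "-y", "-i", src,
             "-c:v", codec, "-b:v", bitrate,
             "-an",                     # drop audio so only video rate is compared
             out],
            check=True)

    # Same clip, same target bitrate, two encoders: compare file sizes and
    # objective quality (e.g. PSNR/SSIM) of the two outputs afterwards.
    encode("uas_clip.mp4", "libx264", "4M", "clip_h264.mp4")
    encode("uas_clip.mp4", "libx265", "4M", "clip_hevc.mp4")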
Applications of just-noticeable depth difference model in joint multiview video plus depth coding
NASA Astrophysics Data System (ADS)
Liu, Chao; An, Ping; Zuo, Yifan; Zhang, Zhaoyang
2014-10-01
A new multiview just-noticeable-depth-difference (MJNDD) model is presented and applied to compress joint multiview video plus depth. Many video coding algorithms remove spatial, temporal, and statistical redundancies, but they are not capable of removing perceptual redundancies. Since the final receptor of video is the human eye, perceptual redundancy can be removed to gain higher compression efficiency according to the properties of the human visual system (HVS). The traditional just-noticeable-distortion (JND) model in the pixel domain accounts for luminance contrast and spatio-temporal masking effects, which describe the perceptual redundancy quantitatively. Because the HVS is also very sensitive to depth information, the new MJNDD model is proposed by combining the traditional JND model with a just-noticeable-depth-difference (JNDD) model. The texture video is divided into background and foreground areas using depth information, and different JND threshold values are assigned to these two parts. The MJNDD model is then used to encode the texture video in JMVC. When encoding the depth video, the JNDD model is applied to remove block artifacts and protect edges. VSRS 3.5 (View Synthesis Reference Software) is then used to generate the intermediate views. Experimental results show that the model can tolerate more noise, and that compression efficiency is improved by 25.29 percent on average and by 54.06 percent at most compared to JMVC, while maintaining subjective quality. Hence the method achieves a high compression ratio and a low bit rate.
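The depth-based split of the texture frame into foreground and background, with separate JND thresholds, could be sketched as follows in Python; the threshold values, the near_fraction parameter, and the simple depth split are illustrative placeholders, not the paper's calibrated MJNDD model.

    import numpy as np

    def depth_based_jnd(depth, jnd_foreground=3.0, jnd_background=6.0, near_fraction=0.3):
        """Assign a per-pixel JND threshold from a depth map.

        Pixels in the nearest `near_fraction` of the depth range are treated as
        foreground (the HVS is more sensitive there, so less distortion is
        tolerable); the rest is background and gets a larger threshold."""
        d_min, d_max = depth.min(), depth.max()
        foreground = depth <= d_min + near_fraction * (d_max - d_min)
        return np.where(foreground, jnd_foreground, jnd_background)

    # A coding error below the local JND threshold is assumed imperceptible,
    # so the encoder may quantize those regions more coarsely.
    depth_map = np.random.rand(288, 352) * 255
    print(depth_based_jnd(depth_map).mean())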
47 CFR 76.640 - Support for unidirectional digital cable products on digital cable systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
... COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Technical... provided for Profiles 1, 2, and 3. (iii) ANSI/SCTE 54 2003 (formerly DVS 241): “Digital Video Service...
Lin, Yu-You; Chiang, Wen-Chu; Hsieh, Ming-Ju; Sun, Jen-Tang; Chang, Yi-Chung; Ma, Matthew Huei-Ming
2018-02-01
This study aimed to conduct a systematic review and meta-analysis comparing the effect of video assistance and audio assistance on the quality of dispatcher-instructed cardiopulmonary resuscitation (DI-CPR) performed by bystanders. Five databases were searched, including PubMed, the Cochrane Library, Embase, Scopus, and the NIH clinical trials registry, to find randomized controlled trials published before June 2017. Qualitative analysis and meta-analysis were undertaken to examine the difference in quality between video-instructed and audio-instructed dispatcher-instructed bystander CPR. The database search yielded 929 records, resulting in the inclusion of 9 relevant articles in this study, 6 of which were included in the meta-analysis. Initiation of chest compressions was slower in the video-instructed group than in the audio-instructed group (median delay 31.5 s; 95% CI: 10.94-52.09). The difference in the number of chest compressions per minute between the groups was 19.9 (95% CI: 10.50-29.38), with significantly faster compressions in the video-instructed group than in the audio-instructed group (104.8 vs. 80.6). The odds ratio (OR) for correct hand positioning was 0.8 (95% CI: 0.53-1.30) when comparing the audio-instructed and video-instructed groups. The differences in chest compression depth (mm) and time to first ventilation (seconds) between the video-instructed and audio-instructed groups were 1.6 mm (95% CI: -8.75, 5.55) and 7.5 s (95% CI: -56.84, 71.80), respectively. Video-instructed DI-CPR significantly improved the chest compression rate compared to the audio-instructed method, and a trend toward more correct hand positioning was also observed. However, this method delayed the commencement of bystander-initiated CPR in the simulation setting. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Smith, Dean Lance
1993-01-01
Work continued on the design of two IBM PC/AT compatible computer interface boards. The boards will permit digital data to be transmitted over a composite video channel from the Orbiter. One board combines data with a composite video signal. The other board strips the data from the video signal.
Twelve tips for the production of digital chalk-talk videos.
Rana, Jasmine; Besche, Henrike; Cockrill, Barbara
2017-06-01
Increasingly over the past decade, faculty in medical and graduate schools have received requests from digital millennial learners for concise faculty-made educational videos. At our institution, over the past couple of years alone, several hundred educational videos have been created by faculty who teach in a flipped-classroom setting of the pre-clinical medical school curriculum. Despite the appeal and potential learning benefits of digital chalk-talk videos first popularized by Khan Academy, we have observed that the conceptual and technological barriers for creating chalk-talk videos can be high for faculty. To this end, this tips article offers an easy-to-follow 12-step conceptual framework to guide at-home production of chalk-talk educational videos.
Sub-component modeling for face image reconstruction in video communications
NASA Astrophysics Data System (ADS)
Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.
2008-08-01
Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired systems, such as cable or ethernet, and wireless networks, cell phones, and portable game systems. These communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel, and requires concealment on the receiving end. We demonstrate a generative model based transmission scheme to compress human face images in video, which has the advantages of a potentially higher compression ratio, while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM modeling the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using a weighted and non-weighted version of the sub-component AAM.
Communication system analysis for manned space flight
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1978-01-01
The development of adaptive delta modulators capable of digitizing a video signal is summarized. The delta modulator encoder accepts a 4 MHz black and white composite video signal or a color video signal and encodes it into a stream of binary digits at a rate which can be adjusted from 8 Mb/s to 24 Mb/s. The output bit rate is determined by the user and alters the quality of the video picture. The digital signal is decoded using the adaptive delta modulator decoder to reconstruct the picture.
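A minimal sketch of adaptive (slope) delta modulation in Python, for illustration only: the step size grows when successive bits repeat (to track steep slopes) and shrinks when they alternate. The adaptation constants are arbitrary choices, not the parameters of the hardware described above; because each sample is carried by a single bit and the decoder mirrors the encoder, a channel bit error perturbs the estimate only locally, which is the robustness property such systems rely on.

    import numpy as np

    def adm_encode(signal, step0=0.01, grow=1.5, shrink=0.66):
        """Adaptive delta modulation: emit 1 bit per sample; the step size grows
        when consecutive bits agree (slope overload) and shrinks when they alternate."""
        bits, estimate, step, prev_bit = [], 0.0, step0, 0
        for x in signal:
            bit = 1 if x >= estimate else 0
            step = step * grow if bit == prev_bit else max(step * shrink, step0)
            estimate += step if bit else -step
            bits.append(bit)
            prev_bit = bit
        return bits

    def adm_decode(bits, step0=0.01, grow=1.5, shrink=0.66):
        """Decoder mirrors the encoder's step adaptation, so only bits are sent."""
        out, estimate, step, prev_bit = [], 0.0, step0, 0
        for bit in bits:
            step = step * grow if bit == prev_bit else max(step * shrink, step0)
            estimate += step if bit else -step
            out.append(estimate)
            prev_bit = bit
        return np.array(out)

    t = np.linspace(0, 1, 8000)
    x = 0.5 * np.sin(2 * np.pi * 5 * t)
    y = adm_decode(adm_encode(x))          # y tracks x using 1 bit per sample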
Cost-effective handling of digital medical images in the telemedicine environment.
Choong, Miew Keen; Logeswaran, Rajasvaran; Bister, Michel
2007-09-01
This paper concentrates on strategies for less costly handling of medical images. Aspects of digitization using conventional digital cameras, lossy compression with good diagnostic quality, and visualization on less costly monitors are discussed. For digitization of film-based media, a subjective evaluation of the suitability of digital cameras as an alternative to a film digitizer was undertaken. To save storage, bandwidth, and transmission time, the acceptable degree of compression with no diagnostically important loss of data was studied through randomized double-blind tests of subjective image quality, with the compression noise kept lower than the inherent noise. A diagnostic experiment was undertaken to evaluate normal low-cost computer monitors as viable viewing displays for clinicians. The results show that conventional digital camera images of X-ray films were diagnostically similar to those from the expensive digitizer. Lossy compression, when used moderately with an imaging-noise-to-compression-noise ratio (ICR) greater than four, can bring about image improvement, with better diagnostic quality than the original image. Statistical analysis shows that there is no diagnostic difference between expensive high-quality monitors and conventional computer monitors. The results presented show good potential for implementing the proposed strategies to promote widespread cost-effective telemedicine and digital medical environments. 2006 Elsevier Ireland Ltd
Application of M-JPEG compression hardware to dynamic stimulus production.
Mulligan, J B
1997-01-01
Inexpensive circuit boards have appeared on the market which transform a normal micro-computer's disk drive into a video disk capable of playing extended video sequences in real time. This technology enables the performance of experiments which were previously impossible, or at least prohibitively expensive. The new technology achieves this capability using special-purpose hardware to compress and decompress individual video frames, enabling a video stream to be transferred over relatively low-bandwidth disk interfaces. This paper will describe the use of such devices for visual psychophysics and present the technical issues that must be considered when evaluating individual products.
Visual acuity, contrast sensitivity, and range performance with compressed motion video
NASA Astrophysics Data System (ADS)
Bijl, Piet; de Vries, Sjoerd C.
2010-10-01
Video of visual acuity (VA) and contrast sensitivity (CS) test charts in a complex background was recorded using a CCD color camera mounted on a computer-controlled tripod and was fed into real-time MPEG-2 compression/decompression equipment. The test charts were based on the triangle orientation discrimination (TOD) test method and contained triangle test patterns of different sizes and contrasts in four possible orientations. In a perception experiment, observers judged the orientation of the triangles in order to determine VA and CS thresholds at the 75% correct level. Three camera velocities (0, 1.0, and 2.0 deg/s, or 0, 4.1, and 8.1 pixels/frame) and four compression rates (no compression, 4 Mb/s, 2 Mb/s, and 1 Mb/s) were used. VA is shown to be rather robust to any combination of motion and compression. CS, however, dramatically decreases when motion is combined with high compression ratios. The measured thresholds were fed into the TOD target acquisition model to predict the effect of motion and compression on acquisition ranges for tactical military vehicles. The effect of compression on static-scene performance is limited, but it becomes strong with motion video. The data suggest that with the MPEG-2 algorithm, the emphasis is on the preservation of image detail at the cost of contrast loss.
Transmission of digital images within the NTSC analog format
Nickel, George H.
2004-06-15
HDTV and NTSC compatible image communication is done in a single NTSC channel bandwidth. Luminance and chrominance image data of a scene to be transmitted is obtained. The image data is quantized and digitally encoded to form digital image data in HDTV transmission format having low-resolution terms and high-resolution terms. The low-resolution digital image data terms are transformed to a voltage signal corresponding to NTSC color subcarrier modulation with retrace blanking and color bursts to form a NTSC video signal. The NTSC video signal and the high-resolution digital image data terms are then transmitted in a composite NTSC video transmission. In a NTSC receiver, the NTSC video signal is processed directly to display the scene. In a HDTV receiver, the NTSC video signal is processed to invert the color subcarrier modulation to recover the low-resolution terms, where the recovered low-resolution terms are combined with the high-resolution terms to reconstruct the scene in a high definition format.
Projection displays and MEMS: timely convergence for a bright future
NASA Astrophysics Data System (ADS)
Hornbeck, Larry J.
1995-09-01
Projection displays and microelectromechanical systems (MEMS) have evolved independently, occasionally crossing paths as early as the 1950s. But the commercially viable use of MEMS for projection displays had been elusive until the recent invention of Texas Instruments' Digital Light Processing(TM) (DLP) technology. DLP technology is based on the Digital Micromirror Device(TM) (DMD) microchip, a MEMS technology that is a semiconductor digital light switch that precisely controls a light source for projection display and hardcopy applications. DLP technology provides a unique business opportunity because of the timely convergence of market needs and technology advances. The world is rapidly moving to an all-digital communications and entertainment infrastructure. In the near future, most of the technologies necessary for this infrastructure will be available at the right performance and price levels. This will make an all-digital chain (capture, compression, transmission, reception, decompression, hearing, and viewing) commercially viable. Unfortunately, the digital images received today must be translated into analog signals for viewing on today's televisions. Digital video is the final link in the all-digital infrastructure, and DLP technology provides that link. DLP technology is an enabler for digital, high-resolution, color projection displays that have high contrast, are bright and seamless, and have the accuracy of color and grayscale that can be achieved only by digital control. This paper contains an introduction to DMD and DLP technology, including the historical context from which to view their development. The architecture, projection operation, and fabrication are presented. Finally, the paper includes an update on current DMD business opportunities in projection displays and hardcopy.
Dufour, J C; Cuggia, M; Soula, G; Spector, M; Kohler, F
2007-01-01
The aim of the French-speaking Virtual Medical University project (UMVF) is to share common resources and specific tools in order to improve medical training. Digital video on IP is an attractive tool for higher education but there are a number of obstacles to widespread implementation. This paper describes the UMVF approach to integrating digital video technologies and services in educational projects.
Innovative Solution to Video Enhancement
NASA Technical Reports Server (NTRS)
2001-01-01
Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.
Energy conservation using face detection
NASA Astrophysics Data System (ADS)
Deotale, Nilesh T.; Kalbande, Dhananjay R.; Mishra, Akassh A.
2011-10-01
Computerized face detection is concerned with the difficult task of locating human faces in a video signal. It has several applications, such as face recognition, simultaneous multiple-face processing, biometrics, security, video surveillance, human-computer interfaces, image database management, autofocus in digital cameras, and selecting regions of interest in pan-and-scale photo slideshows. The present paper deals with energy conservation using face detection. Automating the process on a computer requires the use of various image processing techniques. There are various methods that can be used for face detection, such as contour tracking, template matching, controlled background, model-based, motion-based, and color-based methods. Basically, the video of the subject is converted into images, which are further selected manually for processing. However, several factors such as poor illumination, movement of the face, viewpoint-dependent physical appearance, acquisition geometry, imaging conditions, and compression artifacts make face detection difficult. This paper reports an algorithm for energy conservation using face detection for various devices. Energy can be conserved by detecting the face, reducing the brightness of the complete image, and then adjusting the brightness of the particular area of the image where the face is located using histogram equalization.
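A minimal sketch of this idea, assuming OpenCV (cv2) and its bundled Haar cascade; the dimming factor, detector parameters, and the webcam source are illustrative choices rather than anything specified in the paper.

    import cv2

    # Frontal-face Haar cascade bundled with the opencv-python package.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def dim_except_faces(frame, dim_factor=0.4):
        """Dim the whole frame, then restore and histogram-equalize detected faces."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        out = (frame * dim_factor).astype(frame.dtype)        # globally dimmed copy
        for (x, y, w, h) in faces:
            roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2YCrCb)
            luma, cr, cb = cv2.split(roi)
            roi = cv2.merge([cv2.equalizeHist(luma), cr, cb])  # equalize luma only
            out[y:y+h, x:x+w] = cv2.cvtColor(roi, cv2.COLOR_YCrCb2BGR)
        return out

    cap = cv2.VideoCapture(0)                 # webcam; any video file also works
    ok, frame = cap.read()
    if ok:
        cv2.imwrite("dimmed.png", dim_except_faces(frame))
    cap.release()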
Hybrid vision activities at NASA Johnson Space Center
NASA Technical Reports Server (NTRS)
Juday, Richard D.
1990-01-01
NASA's Johnson Space Center in Houston, Texas, is active in several aspects of hybrid image processing. (The term hybrid image processing refers to a system that combines digital and photonic processing). The major thrusts are autonomous space operations such as planetary landing, servicing, and rendezvous and docking. By processing images in non-Cartesian geometries to achieve shift invariance to canonical distortions, researchers use certain aspects of the human visual system for machine vision. That technology flow is bidirectional; researchers are investigating the possible utility of video-rate coordinate transformations for human low-vision patients. Man-in-the-loop teleoperations are also supported by the use of video-rate image-coordinate transformations, as researchers plan to use bandwidth compression tailored to the varying spatial acuity of the human operator. Technological elements being developed in the program include upgraded spatial light modulators, real-time coordinate transformations in video imagery, synthetic filters that robustly allow estimation of object pose parameters, convolutionally blurred filters that have continuously selectable invariance to such image changes as magnification and rotation, and optimization of optical correlation done with spatial light modulators that have limited range and couple both phase and amplitude in their response.
A Brazilian educational experiment: teleradiology on web TV.
Silva, Angélica Baptista; de Amorim, Annibal Coelho
2009-01-01
Since 2004, educational videoconferences have been held in Brazil for paediatric radiologists in training. The RUTE network has been used, a high-speed national research and education network. Twelve videoconferences were recorded by the Health Channel and transformed into TV programmes, both for conventional broadcast and for access via the Internet. Between October 2007 and December 2009 the Health Channel website registered 2378 hits. Our experience suggests that for successful recording of multipoint videoconferences, four areas are important: (1) a pre-planned script is required, for both physicians and film-makers; (2) particular care is necessary when editing the audiovisual material; (3) the audio and video equipment requires careful adjustment to preserve clinical discussions and the quality of radiology images; (4) to produce a product suitable for both TV sets and computer devices, the master tape needs to be encoded in low resolution digital video formats for Internet media (wmv and rm format for streaming, and compressed zip files for downloading) and MPEG format for DVDs.
NASA Astrophysics Data System (ADS)
Michel, Robert G.; Cavallari, Jennifer M.; Znamenskaia, Elena; Yang, Karl X.; Sun, Tao; Bent, Gary
1999-12-01
This article is an electronic publication in Spectrochimica Acta Electronica (SAE), a section of Spectrochimica Acta Part B (SAB). The hardcopy text is accompanied by an electronic archive, stored on the CD-ROM accompanying this issue. The archive contains video clips. The main article discusses the scientific aspects of the subject and explains the purpose of the video files. Short, 15-30 s, digital video clips are easily controllable at the computer keyboard, which gives a speaker the ability to show fine details through the use of slow motion. Also, they are easily accessed from the computer hard drive for rapid extemporaneous presentation. In addition, they are easily transferred to the Internet for dissemination. From a pedagogical point of view, the act of making a video clip by a student allows for development of powers of observation, while the availability of the technology to make digital video clips gives a teacher the flexibility to demonstrate scientific concepts that would otherwise have to be done as 'live' demonstrations, with all the likely attendant misadventures. Our experience with digital video clips has been through their use in computer-based presentations by undergraduate and graduate students in analytical chemistry classes, and by high school and middle school teachers and their students in a variety of science and non-science classes. In physics teaching laboratories, we have used the hardware to capture digital video clips of dynamic processes, such as projectiles and pendulums, for later mathematical analysis.
The wavelet/scalar quantization compression standard for digital fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, J.N.; Brislawn, C.M.
1994-04-01
A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
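A toy round trip through the two core steps named above, decomposition by a discrete wavelet transform and uniform scalar quantization of the coefficients, assuming the PyWavelets package; it is not the FBI WSQ specification, which fixes a particular filter bank, subband partition, quantizer design, and Huffman coding.

    import numpy as np
    import pywt

    def dwt_scalar_quantize(image, wavelet='bior4.4', level=4, step=8.0):
        """Toy wavelet/scalar-quantization round trip: decompose, uniformly
        quantize every subband with one step size, dequantize, reconstruct."""
        coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        q = np.round(arr / step)                 # indices an entropy coder would pack
        arr_hat = q * step                       # dequantization
        coeffs_hat = pywt.array_to_coeffs(arr_hat, slices, output_format='wavedec2')
        return pywt.waverec2(coeffs_hat, wavelet), q

    image = np.random.randint(0, 256, (512, 512)).astype(np.uint8)
    recon, q = dwt_scalar_quantize(image)
    nonzero = np.count_nonzero(q) / q.size       # rough proxy for compressibility
    print(f"nonzero quantized coefficients: {nonzero:.1%}")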
Scollato, A; Perrini, P; Benedetto, N; Di Lorenzo, N
2007-06-01
We propose an easy-to-construct digital video editing system that is ideal for producing video documentation and still images. A digital video editing system applicable to many video sources in the operating room is described in detail. The proposed system has proved easy to use and permits videography to be obtained quickly and easily. By mixing different streams of video input from all the devices in use in the operating room and applying filters and effects, a professional final product is produced. Recording on a DVD provides an inexpensive, portable, and easy-to-use medium on which to store the material, or to re-edit or tape it at a later time. From the stored videography it is easy to extract high-quality still images useful for teaching, presentations, and publications. In conclusion, digital videography and still photography can easily be recorded by the proposed system, producing high-quality video recordings. The use of FireWire ports provides good compatibility with next-generation hardware and software. The high standard of quality makes the proposed system one of the lowest-priced products available today.
Video and LAN solutions for a digital OR: the Varese experience
NASA Astrophysics Data System (ADS)
Nocco, Umberto; Cocozza, Eugenio; Sivo, Monica; Peta, Giancarlo
2007-03-01
Purpose: to build 20 ORs equipped with independent video acquisition and broadcasting systems and powerful LAN connectivity. Methods: a digital, PC-controlled video matrix has been installed in each OR. The LAN connectivity has been developed to bring data into the OR and to provide high-speed connections to a server and to broadcasting devices. Video signals are broadcast within the OR. Fixed inputs and five additional video inputs have been placed in the OR. Images can be stored locally on a high-capacity HDD and a DVD recorder, and can also be stored in a central archive for future retrieval and reference. Ethernet plugs have been placed within the OR to acquire images and data from the hospital LAN; the OR is connected to the server/archive by a dedicated optical fiber. Results: 20 independent digital ORs have been built. Each OR is self-contained, and images can be digitally managed and broadcast. Security requirements concerning both image visualization and electrical safety have been fulfilled, and each OR is fully integrated into the hospital LAN. Conclusions: the digital ORs were fully implemented; they fulfill surgeons' needs in terms of video acquisition and distribution and provide high-quality video for every kind of surgery in a major hospital.
Storing Data and Video on One Tape
NASA Technical Reports Server (NTRS)
Nixon, J. H.; Cater, J. P.
1985-01-01
Microprocessor-based system originally developed for anthropometric research merges digital data with video images for storage on video cassette recorder. Combined signals later retrieved and displayed simultaneously on television monitor. System also extracts digital portion of stored information and transfers it to solid-state memory.
This Rock 'n' Roll Video Teaches Math
ERIC Educational Resources Information Center
Niess, Margaret L.; Walker, Janet M.
2009-01-01
Mathematics is a discipline that has significantly advanced through the use of digital technologies with improved computational, graphical, and symbolic capabilities. Digital videos can be used to present challenging mathematical questions for students. Video clips offer instructional possibilities for moving students from a passive mode of…
ERIC Educational Resources Information Center
Loftus, Maria; Tiernan, Peter; Cherian, Sebastian
2014-01-01
Evidence has shown that students have greatly increased their consumption of digital video, principally through video sharing sites. In parallel, students' participation in video sharing and creation has also risen. As educators, we need to question how this can be effectively translated into a positive learning experience for students, whilst…
ERIC Educational Resources Information Center
Davidson, Hall
2004-01-01
In practice, videomaking in the classroom takes dedication, inspiration, and plenty of extra time. Despite these difficulties, teachers have always made great videos. Perhaps this is due to a combination of the tremendous appeal of video, the deep satisfaction of seeing stellar projects on the television set, and the knowledge that work can be…
Low cost voice compression for mobile digital radios
NASA Technical Reports Server (NTRS)
Omura, J. K.
1985-01-01
A new technique for low cost, robust voice compression at 4800 bits per second was studied. The approach was based on a cascade of digital biquad adaptive filters with simplified multipulse excitation, followed by simple bit sequence compression.
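As a rough illustration of the synthesis side of such a scheme, the sketch below drives a cascade of biquad sections with a sparse multipulse excitation using SciPy; the filter coefficients, pulse positions, and frame parameters are illustrative placeholders, and the real system adapts the filters rather than keeping them fixed.

    import numpy as np
    from scipy.signal import sosfilt

    # Each row is one biquad section: [b0, b1, b2, a0, a1, a2] with a0 = 1.
    # These coefficients are illustrative placeholders, not the codec's values.
    sos = np.array([
        [1.0, 0.0, 0.0, 1.0, -1.60, 0.81],   # resonance near one formant
        [1.0, 0.0, 0.0, 1.0, -1.20, 0.64],   # resonance near another formant
    ])

    def synthesize(pulse_positions, pulse_amplitudes, n_samples):
        """Multipulse excitation: a few impulses per frame drive the biquad cascade."""
        excitation = np.zeros(n_samples)
        excitation[np.asarray(pulse_positions)] = pulse_amplitudes
        return sosfilt(sos, excitation)

    # One 20 ms frame at 8 kHz (160 samples) reconstructed from 6 pulses; only
    # the pulse positions/amplitudes plus filter updates would need to be sent.
    frame = synthesize([5, 30, 62, 90, 121, 150],
                       [1.0, -0.6, 0.8, -0.4, 0.5, -0.3], 160)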
A recursive technique for adaptive vector quantization
NASA Technical Reports Server (NTRS)
Lindsay, Robert A.
1989-01-01
Vector Quantization (VQ) is fast becoming an accepted, if not preferred, method for image compression. The VQ performs well when compressing all types of imagery including Video, Electro-Optical (EO), Infrared (IR), Synthetic Aperture Radar (SAR), Multi-Spectral (MS), and digital map data. The only requirement is to change the codebook to switch the compressor from one image sensor to another. There are several approaches for designing codebooks for a vector quantizer. Adaptive Vector Quantization is a procedure that simultaneously designs codebooks as the data is being encoded or quantized. This is done by computing the centroid as a recursive moving average, where the centroids move after every vector is encoded. When computing the centroid of a fixed set of vectors, the resultant centroid is identical to the conventional centroid calculation. This method of centroid calculation can be easily combined with VQ encoding techniques. The defined quantizer changes after every encoded vector by recursively updating the centroid of minimum distance, which is selected by the encoder. Since the quantizer is changing definition or state after every encoded vector, the decoder must now receive updates to the codebook. This is done as side information by multiplexing bits into the compressed source data.
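The recursive moving-average centroid update described above can be sketched in a few lines of Python; the codebook initialization, the squared-Euclidean distance, and the data shapes are illustrative choices. In the scheme described in the abstract, the decoder is kept consistent by codebook updates multiplexed into the bitstream as side information.

    import numpy as np

    def adaptive_vq_encode(vectors, codebook):
        """Encode each vector with the nearest codeword, then move that codeword
        toward the vector with a recursive moving average (running mean)."""
        codebook = codebook.astype(float).copy()
        counts = np.ones(len(codebook))          # one "virtual" sample per codeword
        indices = []
        for x in vectors:
            i = np.argmin(np.sum((codebook - x) ** 2, axis=1))   # nearest codeword
            counts[i] += 1
            codebook[i] += (x - codebook[i]) / counts[i]         # recursive mean
            indices.append(i)
        return indices, codebook

    rng = np.random.default_rng(0)
    data = rng.random((1000, 16))                # e.g. 4x4 image blocks, flattened
    init = rng.random((64, 16))                  # 64-entry initial codebook
    idx, trained = adaptive_vq_encode(data, init)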
Educational Video Recording and Editing for The Hand Surgeon
Rehim, Shady A.; Chung, Kevin C.
2016-01-01
Digital video recordings are increasingly used across various medical and surgical disciplines including hand surgery for documentation of patient care, resident education, scientific presentations and publications. In recent years, the introduction of sophisticated computer hardware and software technology has simplified the process of digital video production and improved means of disseminating large digital data files. However, the creation of high quality surgical video footage requires basic understanding of key technical considerations, together with creativity and sound aesthetic judgment of the videographer. In this article we outline the practical steps involved with equipment preparation, video recording, editing and archiving as well as guidance for the choice of suitable hardware and software equipment. PMID:25911212
Lossless Video Sequence Compression Using Adaptive Prediction
NASA Technical Reports Server (NTRS)
Li, Ying; Sayood, Khalid
2007-01-01
We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.
Videos and images from 25 years of teaching compressible flow
NASA Astrophysics Data System (ADS)
Settles, Gary
2008-11-01
Compressible flow is a very visual topic due to refractive optical flow visualization and the public fascination with high-speed flight. Films, video clips, and many images are available to convey this in the classroom. An overview of this material is given and selected examples are shown, drawn from educational films, the movies, television, etc., and accumulated over 25 years of teaching basic and advanced compressible-flow courses. The impact of copyright protection and the doctrine of fair use is also discussed.
Adaptive temporal compressive sensing for video with motion estimation
NASA Astrophysics Data System (ADS)
Wang, Yeru; Tang, Chaoying; Chen, Yueting; Feng, Huajun; Xu, Zhihai; Li, Qi
2018-04-01
In this paper, we present an adaptive reconstruction method for temporal compressive imaging with pixel-wise exposure. The motion of objects is first estimated from interpolated images obtained with a designed coding mask. With the help of motion estimation, image blocks are classified according to their degree of motion and reconstructed with the corresponding dictionary, which was trained beforehand. Both simulation and experimental results show that the proposed method can obtain accurate motion information before reconstruction and efficiently reconstruct compressive video.
Subjective evaluation of mobile 3D video content: depth range versus compression artifacts
NASA Astrophysics Data System (ADS)
Jumisko-Pyykkö, Satu; Haustola, Tomi; Boev, Atanas; Gotchev, Atanas
2011-02-01
Mobile 3D television is a new form of media experience, which combines the freedom of mobility with the greater realism of presenting visual scenes in 3D. Achieving this combination is a challenging task, as a greater viewing experience has to be delivered with the limited resources of the mobile delivery channel, such as limited bandwidth and a power-constrained handheld player. This challenge creates the need for tight optimization of the overall mobile 3DTV system. The presence of depth and of compression artifacts in the played 3D video are two major factors that influence the viewer's subjective quality of experience and satisfaction. The primary goal of this study has been to examine the influence of varying depth and compression artifacts on the subjective quality of experience for mobile 3D video content. In addition, the influence of the studied variables on simulator sickness symptoms has been studied, and a vocabulary-based descriptive quality-of-experience evaluation has been conducted for a subset of variables in order to understand the perceptual characteristics in detail. In the experiment, 30 participants evaluated the overall quality of different 3D video contents with varying depth ranges, compressed with varying quantization parameters. The test video content was presented on a portable autostereoscopic LCD display with a horizontal double-density pixel arrangement. The results of the psychometric study indicate that compression artifacts are a dominant factor determining the quality of experience compared to varying depth range. More specifically, contents with strong compression were rejected by the viewers and deemed unacceptable. The results of the descriptive study confirm the dominance of visible spatial artifacts, along with the added value of depth for artifact-free content. The level of visual discomfort was not found to be disturbing.
Thermal-Polarimetric and Visible Data Collection for Face Recognition
2016-09-01
Camera specifications (excerpt): spectral range 7.5–13 μm; analog image output: NTSC analog video; digital image output: FireWire radiometric, 14-bit digital video to PC. The analog video was not used for this study; the radiometric, 14-bit digital data provided temperature measurement information for comparison.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, C.S.; af Ekenstam, G.; Sallstrom, M.
1995-07-01
The Swedish Nuclear Power Inspectorate (SKI) and the US Department of Energy (DOE) sponsored work on a Remote Monitoring System (RMS) that was installed in August 1994 at the Barseback Works north of Malmo, Sweden. The RMS was designed to test the front-end detection concept that would be used for unattended remote monitoring activities. Front-end detection reduces the number of video images recorded and provides additional sensor verification of facility operations. The function of any safeguards Containment and Surveillance (C/S) system is to collect information, primarily images, that verifies the operations at a nuclear facility. Barseback is ideal for testing the concept of front-end detection since most activity of safeguards interest is the movement of spent fuel, which occurs once a year. The RMS at Barseback uses a network of nodes to collect data from microwave motion detectors placed to detect the entrance and exit of spent fuel casks through a hatch. A video system using digital compression collects digital images and stores them on a hard drive and a digital optical disk. Data and images from the storage area are remotely monitored via telephone from Stockholm, Sweden and Albuquerque, NM, USA. These remote monitoring stations, operated by SKI and SNL respectively, can retrieve data and images from the RMS computer at the Barseback Facility. The data and images are encrypted before transmission. This paper presents details of the RMS and test results of this approach to front-end detection of safeguards activities.
47 CFR 76.1909 - Redistribution control of unencrypted digital terrestrial broadcast content.
Code of Federal Regulations, 2011 CFR
2011-10-01
... content. Where a multichannel video programming distributor retransmits unencrypted digital terrestrial... 47 Telecommunication 4 2011-10-01 2011-10-01 false Redistribution control of unencrypted digital... (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Encoding Rules § 76.1909...
Tiny videos: a large data set for nonparametric video retrieval and frame classification.
Karpenko, Alexandre; Aarabi, Parham
2011-03-01
In this paper, we present a large database of over 50,000 user-labeled videos collected from YouTube. We develop a compact representation called "tiny videos" that achieves high video compression rates while retaining the overall visual appearance of the video as it varies over time. We show that frame sampling using affinity propagation (an exemplar-based clustering algorithm) achieves the best trade-off between compression and video recall. We use this large collection of user-labeled videos in conjunction with simple data mining techniques to perform related video retrieval, as well as classification of images and video frames. The classification results achieved by tiny videos are compared with the tiny images framework [24] for a variety of recognition tasks. The tiny images data set consists of 80 million images collected from the Internet. These are the largest labeled research data sets of videos and images available to date. We show that tiny videos are better suited for classifying scenery and sports activities, while tiny images perform better at recognizing objects. Furthermore, we demonstrate that combining the tiny images and tiny videos data sets improves classification precision in a wider range of categories.
On the Complexity of Digital Video Cameras in/as Research: Perspectives and Agencements
ERIC Educational Resources Information Center
Bangou, Francis
2014-01-01
The goal of this article is to consider the potential for digital video cameras to produce as part of a research agencement. Our reflection will be guided by the current literature on the use of video recordings in research, as well as by the rhizoanalysis of two vignettes. The first of these vignettes is associated with a short video clip shot by…
ERIC Educational Resources Information Center
Thompson, Ella Belzberg
2014-01-01
In 1999, it was necessary to build an interface for the Shoah Foundation's Visual History Archive (the world's largest digital video archive at the time), which comprised over 120,000 hours of video from over 52,000 video testimonies of Holocaust survivors, rescuers and witnesses. In order to build this educational research interface, an…
Low-latency situational awareness for UxV platforms
NASA Astrophysics Data System (ADS)
Berends, David C.
2012-06-01
Providing high quality, low latency video from unmanned vehicles through bandwidth-limited communications channels remains a formidable challenge for modern vision system designers. SRI has developed a number of enabling technologies to address this, including the use of SWaP-optimized Systems-on-a-Chip which provide Multispectral Fusion and Contrast Enhancement as well as H.264 video compression. Further, the use of salience-based image prefiltering prior to image compression greatly reduces output video bandwidth by selectively blurring non-important scene regions. Combined with our customization of the VLC open source video viewer for low latency video decoding, SRI developed a prototype high performance, high quality vision system for UxV application in support of very demanding system latency requirements and user CONOPS.
50 CFR 216.155 - Requirements for monitoring and reporting.
Code of Federal Regulations, 2010 CFR
2010-10-01
... place 3 autonomous digital video cameras overlooking chosen haul-out sites located varying distances from the missile launch site. Each video camera will be set to record a focal subgroup within the... presence and activity will be conducted and recorded in a field logbook or recorded on digital video for...
Magnetic Braking: A Video Analysis
ERIC Educational Resources Information Center
Molina-Bolivar, J. A.; Abella-Palacios, A. J.
2012-01-01
This paper presents a laboratory exercise that introduces students to the use of video analysis software and the Lenz's law demonstration. Digital techniques have proved to be very useful for the understanding of physical concepts. In particular, the availability of affordable digital video offers students the opportunity to actively engage in…
MPEG-1 low-cost encoder solution
NASA Astrophysics Data System (ADS)
Grueger, Klaus; Schirrmeister, Frank; Filor, Lutz; von Reventlow, Christian; Schneider, Ulrich; Mueller, Gerriet; Sefzik, Nicolai; Fiedrich, Sven
1995-02-01
A solution for real-time compression of digital YCrCb video data to an MPEG-1 video data stream has been developed. As an additional option, motion JPEG and video telephone streams (H.261) can be generated. For MPEG-1, up to two bidirectionally predicted images are supported. The required computational power for motion estimation and DCT/IDCT, the memory size, and the memory bandwidth have been the main challenges. The design uses fast-page-mode memory accesses and requires only a single 80 ns EDO-DRAM with 256 X 16 organization for video encoding. This can be achieved only by using adequate access and coding strategies. The architecture consists of an input processing and filter unit, a memory interface, a motion estimation unit, a motion compensation unit, a DCT unit, a quantization control, a VLC unit and a bus interface. To share the available memory bandwidth among the processing tasks, a fixed schedule for memory accesses is applied, which can be interrupted for asynchronous events. The motion estimation unit implements a highly sophisticated hierarchical search strategy based on block matching. The DCT unit uses a separated fast-DCT flowgraph realized by a switchable hardware unit for both DCT and IDCT operation. By appropriate multiplexing, only one multiplier is required for DCT, quantization, inverse quantization, and IDCT. The VLC unit generates the video stream up to the video sequence layer and is directly coupled with an intelligent bus interface. Thus, the assembly of video, audio and system data can easily be performed by the host computer. Having relatively low complexity and only small requirements for DRAM circuits, the developed solution can be applied to low-cost encoding products for consumer electronics.
Three dimensional range geometry and texture data compression with space-filling curves.
Chen, Xia; Zhang, Song
2017-10-16
This paper presents a novel method to effectively store three-dimensional (3D) data and 2D texture data into a regular 24-bit image. The proposed method uses the Hilbert space-filling curve to map the normalized unwrapped phase map to two 8-bit color channels, and saves the third color channel for 2D texture storage. By further leveraging existing 2D image and video compression techniques, the proposed method can achieve high compression ratios while effectively preserving data quality. Since the encoding and decoding processes can be applied to most of the current 2D media platforms, this proposed compression method can make 3D data storage and transmission available for many electrical devices without requiring special hardware changes. Experiments demonstrate that if a lossless 2D image/video format is used, both original 3D geometry and 2D color texture can be accurately recovered; if lossy image/video compression is used, only black-and-white or grayscale texture can be properly recovered, but much higher compression ratios (e.g., 1543:1 against the ASCII OBJ format) are achieved with slight loss of 3D geometry quality.
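As an illustration of the space-filling-curve idea, the sketch below maps a normalized phase value in [0, 1] to a point on a 256 x 256 Hilbert curve whose two coordinates can be stored in two 8-bit color channels. This is a hedged reconstruction of the general principle rather than the authors' encoder; the curve order, rounding, and channel assignment are assumptions.

    def hilbert_d2xy(n, d):
        """Map distance d along a Hilbert curve filling an n x n grid to (x, y)."""
        x = y = 0
        t = d
        s = 1
        while s < n:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:                      # rotate/flip the quadrant
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        return x, y

    def phase_to_channels(phase, n=256):
        """Encode a normalized phase value in [0, 1] into two 8-bit channel values."""
        d = int(round(phase * (n * n - 1)))  # index along a curve of length n*n
        return hilbert_d2xy(n, d)            # e.g., (red, green); blue kept for texture

    r, g = phase_to_channels(0.37)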
47 CFR 76.1630 - MVPD digital television transition notices.
Code of Federal Regulations, 2012 CFR
2012-10-01
... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Notices § 76.1630 MVPD digital television transition notices. (a) Multichannel video programming distributors (MVPDs) shall provide subscribers with...
47 CFR 76.1630 - MVPD digital television transition notices.
Code of Federal Regulations, 2013 CFR
2013-10-01
... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Notices § 76.1630 MVPD digital television transition notices. (a) Multichannel video programming distributors (MVPDs) shall provide subscribers with...
47 CFR 76.1630 - MVPD digital television transition notices.
Code of Federal Regulations, 2014 CFR
2014-10-01
... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Notices § 76.1630 MVPD digital television transition notices. (a) Multichannel video programming distributors (MVPDs) shall provide subscribers with...
47 CFR 76.1630 - MVPD digital television transition notices.
Code of Federal Regulations, 2011 CFR
2011-10-01
... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Notices § 76.1630 MVPD digital television transition notices. (a) Multichannel video programming distributors (MVPDs) shall provide subscribers with...
47 CFR 76.1630 - MVPD digital television transition notices.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Notices § 76.1630 MVPD digital television transition notices. (a) Multichannel video programming distributors (MVPDs) shall provide subscribers with...
Introductory tail-flick of the Jacky dragon visual display: signal efficacy depends upon duration.
Peters, Richard A; Evans, Christopher S
2003-12-01
Many animal signals have introductory components that alert receivers. Examples from the acoustic and visual domains show that this effect is often achieved with high intensity, a simple structure and a short duration. Quantitative analyses of the Jacky dragon Amphibolurus muricatus visual display reveal a different design: the introductory tail-flick has a lower velocity than subsequent components of the signal, but a longer duration. Here, using a series of video playback experiments with a digitally animated tail, we identify the properties responsible for signal efficacy. We began by validating the use of the computer-generated tail, comparing the responses to digital video footage of a lizard tail-flick with those to a precisely matched 3-D animation (Experiment 1). We then examined the effects of variation in stimulus speed, acceleration, duration and period by expanding and compressing the time scale of the sequence (Experiment 2). The results identified several variables that might mediate recognition. Two follow-up studies assessed the importance of tail-flick amplitude (Experiment 3), movement speed and signal duration (Experiment 4). Lizard responses to this array of stimuli reveal that duration is the most important characteristic of the tail-flick, and that intermittent signalling has the same effect as continuous movement. We suggest that signal design may reflect a trade-off between efficacy and cost.
CUQI: cardiac ultrasound video quality index
Razaak, Manzoor; Martini, Maria G.
2016-01-01
Medical images and videos are now increasingly part of modern telecommunication applications, including telemedicine, favored by advancements in video compression and communication technologies. Medical video quality evaluation is essential for modern applications, since compression and transmission processes often compromise the video quality. Several state-of-the-art video quality metrics used for quality evaluation assess the perceptual quality of the video. For a medical video, assessing quality in terms of “diagnostic” value rather than “perceptual” quality is more important. We present a diagnostic-quality-oriented video quality metric for quality evaluation of cardiac ultrasound videos. Cardiac ultrasound videos are characterized by rapid repetitive cardiac motions and distinct structural information characteristics that are explored by the proposed metric. The cardiac ultrasound video quality index, the proposed metric, is a full-reference metric and uses the motion and edge information of the cardiac ultrasound video to evaluate the video quality. The metric was evaluated for its performance in approximating the quality of cardiac ultrasound videos by testing its correlation with the subjective scores of medical experts. The results of our tests showed that the metric has high correlation with medical expert opinions and in several cases outperforms the state-of-the-art video quality metrics considered in our tests. PMID:27014715
Embodied Memory and Curatorship in Children's Digital Video Production
ERIC Educational Resources Information Center
Potter, John
2010-01-01
Digital video production in schools is often theorised, researched and written about in two ways: either as a part of media studies practice or as a technological innovation, bringing new, "creative", digital tools into the curriculum. Using frameworks for analysis derived from multimodality theory, new literacy studies and theories of…
Ethnography 2.0: Writing with Digital Video
ERIC Educational Resources Information Center
White, M. L.
2009-01-01
This article investigates how digital video technology can be used in ethnographic research and considers the implications of digital production, presentation and dissemination of ethnographic educational research knowledge. In this article, I introduce the term Ethnography 2.0 and address some of the issues that emerged from my decision to use…
The FBI compression standard for digitized fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.
1996-10-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
FBI compression standard for digitized fingerprint images
NASA Astrophysics Data System (ADS)
Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas
1996-11-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
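The adaptive uniform scalar quantization of wavelet subbands mentioned in both records above can be sketched roughly as follows. This is a simplified dead-zone uniform quantizer for one subband, offered only as an illustration; the dead-zone width, step-size selection, and reconstruction rule in the actual WSQ specification differ and are defined by the standard's tables.

    import numpy as np

    def quantize_subband(coeffs, step, dead_zone=1.2):
        """Dead-zone uniform scalar quantizer for one wavelet subband (illustrative).

        Coefficients inside +/- (dead_zone * step / 2) map to zero; the rest are
        quantized uniformly with the given step size. Step sizes would normally
        be chosen per subband from the target bit rate.
        """
        coeffs = np.asarray(coeffs, dtype=float)
        half = dead_zone * step / 2.0
        q = np.zeros(coeffs.shape, dtype=np.int32)
        pos = coeffs > half
        neg = coeffs < -half
        q[pos] = np.floor((coeffs[pos] - half) / step).astype(np.int32) + 1
        q[neg] = -(np.floor((-coeffs[neg] - half) / step).astype(np.int32) + 1)
        return q

    def dequantize_subband(q, step, dead_zone=1.2):
        """Mid-point reconstruction of the quantized coefficients."""
        half = dead_zone * step / 2.0
        out = np.zeros(q.shape, dtype=float)
        out[q > 0] = half + (q[q > 0] - 0.5) * step
        out[q < 0] = -(half + (-q[q < 0] - 0.5) * step)
        return out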
Chest compression rate measurement from smartphone video.
Engan, Kjersti; Hinna, Thomas; Ryen, Tom; Birkenes, Tonje S; Myklebust, Helge
2016-08-11
Out-of-hospital cardiac arrest is a life-threatening situation where the first person performing cardiopulmonary resuscitation (CPR) most often is a bystander without medical training. Some existing smartphone apps can call the emergency number and provide, for example, the global positioning system (GPS) location, such as the Hjelp 113-GPS app by the Norwegian air ambulance. We propose to extend the functionality of such apps by using the built-in camera in a smartphone to capture video of the CPR performed, primarily to estimate the duration and rate of the chest compressions executed, if any. All calculations are done in real time, and both the caller and the dispatcher receive the compression rate feedback when it is detected. The proposed algorithm is based on finding a dynamic region of interest in the video frames, and thereafter evaluating the power spectral density by computing the fast Fourier transform over sliding windows. The power of the dominating frequencies is compared to the power of the frequency area of interest. The system was tested on different persons, male and female, in different scenarios addressing target compression rates, background disturbances, compression with mouth-to-mouth ventilation, various background illuminations and phone placements. All tests were done on a recording Laerdal manikin, providing true compression rates for comparison. Overall, the algorithm is promising, and it handles a number of disturbances and light situations. For target rates at 110 cpm, as recommended during CPR, the mean error in compression rate (standard deviation over tests in parentheses) is 3.6 (0.8) for short-haired bystanders, and 8.7 (6.0) including medium- and long-haired bystanders. The presented method shows that it is feasible to detect the rate of chest compressions performed by a bystander by placing the smartphone close to the patient and using the built-in camera combined with a video processing algorithm running in real time on the device.
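The frequency-analysis step described above can be sketched in a few lines, assuming the per-frame mean intensity of the region of interest has already been extracted. The window handling, frequency band, and function names below are assumptions for illustration, not the authors' code.

    import numpy as np

    def compression_rate_cpm(roi_means, fps, band=(1.0, 3.0)):
        """Estimate chest-compression rate (compressions per minute) from the
        mean ROI intensity of consecutive video frames using an FFT.

        roi_means : 1-D array, one mean-intensity value per frame (a sliding
                    window of a few seconds would be passed in practice).
        fps       : video frame rate in frames per second.
        band      : frequency band of interest in Hz (about 60-180 cpm).
        """
        x = np.asarray(roi_means, dtype=float)
        x = x - x.mean()                              # remove the DC component
        spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        if not in_band.any() or spectrum[in_band].max() <= 0:
            return None                               # no dominant motion detected
        dominant = freqs[in_band][np.argmax(spectrum[in_band])]
        return dominant * 60.0                        # Hz -> compressions per minute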
47 CFR 76.602 - Incorporation by reference.
Code of Federal Regulations, 2012 CFR
2012-10-01
... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Technical Standards § 76.602 Incorporation by reference. (a... System,” 2003, IBR approved for § 76.640. (4) ANSI/SCTE 54 2003 (formerly DVS 241), “Digital Video... Protection System,” 2003, IBR approved for § 76.640. (5) ANSI/SCTE 54 2003 (formerly DVS 241), “Digital Video...
ESL and Digital Video Integration: Case Studies
ERIC Educational Resources Information Center
Li, J., Ed.; Gromik, N., Ed.; Edwards, N., Ed.
2013-01-01
It should come as no surprise that digital video technology is of particular interest to English language learners; students are drawn to its visual appeal and vibrant creative potential. The seven original case studies in this book demonstrate how video can be an effective and powerful tool to create fluid, fun, interactive, and collaborative…
Content Area Vocabulary Videos in Multiple Contexts: A Pedagogical Tool
ERIC Educational Resources Information Center
Webb, C. Lorraine; Kapavik, Robin Robinson
2015-01-01
The authors challenged pre-service teachers to digitally define a social studies or mathematical vocabulary term in multiple contexts using a digital video camera. The researchers sought to answer the following questions: 1. How will creating a video for instruction affect pre-service teachers' attitudes about teaching with technology, if at all?…
Digital television system design study
NASA Technical Reports Server (NTRS)
Huth, G. K.
1976-01-01
The use of digital techniques for transmission of pictorial data is discussed for multi-frame images (television). Video signals are processed in a manner which includes quantization and coding such that they are separable from the noise introduced into the channel. The performance of digital television systems is determined by the nature of the processing techniques (i.e., whether the video signal itself or, instead, something related to the video signal is quantized and coded) and to the quantization and coding schemes employed.
A new DWT/MC/DPCM video compression framework based on EBCOT
NASA Astrophysics Data System (ADS)
Mei, L. M.; Wu, H. R.; Tan, D. M.
2005-07-01
A novel Discrete Wavelet Transform (DWT)/Motion Compensation (MC)/Differential Pulse Code Modulation (DPCM) video compression framework is proposed in this paper. Although the Discrete Cosine Transform (DCT)/MC/DPCM is the mainstream framework for video coders in industry and international standards, the idea of DWT/MC/DPCM has existed for more than a decade in the literature and its investigation is still ongoing. The contribution of this work is twofold. Firstly, the Embedded Block Coding with Optimal Truncation (EBCOT) is used here as the compression engine for both intra- and inter-frame coding, which provides a good compression ratio and an embedded rate-distortion (R-D) optimization mechanism. This is an extension of the EBCOT application from still images to videos. Secondly, this framework offers a good interface for the Perceptual Distortion Measure (PDM) based on the Human Visual System (HVS), where the Mean Squared Error (MSE) can be easily replaced with the PDM in the R-D optimization. Some preliminary results are reported here. They are also compared with benchmarks such as MPEG-2 and MPEG-4 version 2. The results demonstrate that under the specified conditions the proposed coder outperforms the benchmarks in terms of rate vs. distortion.
Predefined Redundant Dictionary for Effective Depth Maps Representation
NASA Astrophysics Data System (ADS)
Sebai, Dorsaf; Chaieb, Faten; Ghorbel, Faouzi
2016-01-01
The multi-view video plus depth (MVD) video format consists of two components: texture and depth map, where a combination of these components enables a receiver to generate arbitrary virtual views. However, MVD is a very voluminous video format that requires a compression process for storage and especially for transmission. Conventional codecs are efficient for texture image compression but are not suited to the intrinsic properties of depth maps. Depth images are indeed characterized by areas of smoothly varying grey levels separated by sharp discontinuities at the position of object boundaries. Preserving these characteristics is important to enable high-quality view synthesis at the receiver side. In this paper, sparse representation of depth maps is discussed. It is shown that a significant gain in sparsity is achieved when particular mixed dictionaries are used for approximating these types of images with greedy selection strategies. Experiments are conducted to confirm the effectiveness at producing sparse representations, and the competitiveness with respect to candidate state-of-the-art dictionaries. Finally, the resulting method is shown to be effective for depth map compression and represents an advantage over the ongoing 3D High Efficiency Video Coding compression standard, particularly at medium and high bitrates.
NASA Astrophysics Data System (ADS)
Ciaramello, Frank M.; Hemami, Sheila S.
2009-02-01
Communication of American Sign Language (ASL) over mobile phones would be very beneficial to the Deaf community. ASL video encoded to achieve the rates provided by current cellular networks must be heavily compressed, and appropriate assessment techniques are required to analyze the intelligibility of the compressed video. As an extension to a purely spatial measure of intelligibility, this paper quantifies the effect of temporal compression artifacts on sign language intelligibility. These artifacts can be the result of motion-compensation errors that distract the observer or of frame rate reductions. They reduce the perception of smooth motion and disrupt the temporal coherence of the video. Motion-compensation errors that affect temporal coherence are identified by measuring the block-level correlation between co-located macroblocks in adjacent frames. The impact of frame rate reductions was quantified through experimental testing. A subjective study was performed in which fluent ASL participants rated the intelligibility of sequences encoded at 5 different frame rates and with 3 different levels of distortion. The subjective data are used to parameterize an objective intelligibility measure which is highly correlated with subjective ratings at multiple frame rates.
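The block-level temporal-correlation idea can be sketched as below: each co-located 16 x 16 block in two adjacent frames is compared with a normalized correlation, and low values flag blocks whose temporal coherence is disrupted. The block size and normalization are assumptions; the paper's exact measure may differ.

    import numpy as np

    def block_temporal_correlation(prev_frame, cur_frame, block=16):
        """Normalized correlation between co-located macroblocks in adjacent frames.

        Returns a 2-D array with one correlation value per block of two
        equal-size grayscale frames; low values suggest disrupted coherence.
        """
        h, w = cur_frame.shape
        rows, cols = h // block, w // block
        corr = np.zeros((rows, cols))
        for r in range(rows):
            for c in range(cols):
                a = prev_frame[r*block:(r+1)*block, c*block:(c+1)*block].astype(float).ravel()
                b = cur_frame[r*block:(r+1)*block, c*block:(c+1)*block].astype(float).ravel()
                a -= a.mean()
                b -= b.mean()
                denom = np.sqrt((a * a).sum() * (b * b).sum())
                corr[r, c] = (a * b).sum() / denom if denom > 0 else 1.0
        return corr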
Innovative hyperchaotic encryption algorithm for compressed video
NASA Astrophysics Data System (ADS)
Yuan, Chun; Zhong, Yuzhuo; Yang, Shiqiang
2002-12-01
It is accepted that a stream cryptosystem, which implements encryption by selecting only a few parts of the block data and header information of the compressed video stream, can achieve good real-time performance and flexibility. A chaotic random number generator, for example the logistic map, is a comparatively promising substitute, but it is easily attacked by nonlinear dynamic forecasting and geometric information extraction. In this paper, we present a hyperchaotic cryptography scheme to encrypt compressed video, which integrates the logistic map with a linear congruential algorithm over the field Z(2^32 - 1) to strengthen the security of mono-chaotic cryptography; meanwhile, the real-time performance and flexibility of the chaotic sequence cryptography are maintained. It also integrates dissymmetrical public-key cryptography and implements encryption and identity authentication of control parameters at the initialization phase. In accordance with the importance of the data in the compressed video stream, encryption is performed in a layered scheme. In the proposed hyperchaotic cryptography, the value and the updating frequency of the control parameters can be changed online to satisfy the requirements of network quality, processor capability and security. The proposed hyperchaotic cryptography proves robust security by cryptanalysis, and shows good real-time performance and flexible implementation capability through arithmetic evaluation and testing.
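A keystream that mixes a logistic map with a linear congruential generator, in the spirit of the scheme described, can be sketched as follows. The mixing rule, constants, and byte extraction are illustrative assumptions and not the paper's construction; the LCG constants shown are the widely used Numerical Recipes values.

    def chaotic_keystream(x0=0.654321, seed=123456789, r=3.99, length=16):
        """Illustrative keystream mixing a logistic map with an LCG.

        x0   : initial condition of the logistic map (0 < x0 < 1).
        seed : LCG state; constants below are the Numerical Recipes values.
        """
        x, s = x0, seed
        out = []
        for _ in range(length):
            x = r * x * (1.0 - x)                     # logistic map iteration
            s = (1664525 * s + 1013904223) % 2**32    # linear congruential step
            chaotic_byte = int(x * 256) & 0xFF
            out.append(chaotic_byte ^ (s & 0xFF))     # combine the two sources
        return bytes(out)

    def encrypt_selected(data: bytes, key_stream: bytes) -> bytes:
        """XOR only the selected (e.g., header) bytes with the keystream."""
        return bytes(d ^ k for d, k in zip(data, key_stream))

    ciphertext = encrypt_selected(b"selected header bytes", chaotic_keystream(length=21))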
The AAPM/RSNA physics tutorial for residents: digital fluoroscopy.
Pooley, R A; McKinney, J M; Miller, D A
2001-01-01
A digital fluoroscopy system is most commonly configured as a conventional fluoroscopy system (tube, table, image intensifier, video system) in which the analog video signal is converted to and stored as digital data. Other methods of acquiring the digital data (eg, digital or charge-coupled device video and flat-panel detectors) will become more prevalent in the future. Fundamental concepts related to digital imaging in general include binary numbers, pixels, and gray levels. Digital image data allow the convenient use of several image processing techniques including last image hold, gray-scale processing, temporal frame averaging, and edge enhancement. Real-time subtraction of digital fluoroscopic images after injection of contrast material has led to widespread use of digital subtraction angiography (DSA). Additional image processing techniques used with DSA include road mapping, image fade, mask pixel shift, frame summation, and vessel size measurement. Peripheral angiography performed with an automatic moving table allows imaging of the peripheral vasculature with a single contrast material injection.
High bit depth infrared image compression via low bit depth codecs
NASA Astrophysics Data System (ADS)
Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren
2017-08-01
Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles, etc., will require 16-bit-depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low-power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher-bit-depth image compression directly. We propose to compress 16-bit-depth images via 8-bit-depth codecs in the following way. First, an input 16-bit-depth image is mapped into 8-bit-depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with an 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16-bit-depth format. A preliminary result shows that two 8-bit H.264/AVC codecs can achieve results similar to a 16-bit HEVC codec.
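The MSB/LSB mapping at the heart of the method is straightforward to sketch with NumPy; the function names below are hypothetical, and the lossy-coding parameter choices analyzed in the paper are not reproduced here.

    import numpy as np

    def split_16bit(image16):
        """Split a 16-bit image into MSB and LSB 8-bit images for two 8-bit codecs."""
        image16 = image16.astype(np.uint16)
        msb = (image16 >> 8).astype(np.uint8)     # most significant bytes
        lsb = (image16 & 0xFF).astype(np.uint8)   # least significant bytes
        return msb, lsb

    def merge_16bit(msb, lsb):
        """Recombine the two decoded 8-bit images into one 16-bit image."""
        return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)

Note that any coding error in the MSB image is amplified by 256 after merging, which is why the compression parameters for the two sub-images generally need to be chosen differently.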
A model for a PC-based, universal-format, multimedia digitization system: moving beyond the scanner.
McEachen, James C; Cusack, Thomas J; McEachen, John C
2003-08-01
Digitizing images for use in case presentations based on hardcopy films, slides, photographs, negatives, books, and videos can present a challenging task. Scanners and digital cameras have become standard tools of the trade. Unfortunately, use of these devices to digitize multiple images in many different media formats can be a time-consuming and in some cases unachievable process. The authors' goal was to create a PC-based solution for digitizing multiple media formats in a timely fashion while maintaining adequate image presentation quality. The authors' PC-based solution makes use of off-the-shelf hardware, including a digital document camera (DDC), a VHS video player, and a video-editing kit. With the assistance of five staff radiologists, the authors examined the quality of multiple image types digitized with this equipment. The authors also quantified the speed of digitization of various types of media using the DDC and the video-editing kit. With regard to image quality, the five staff radiologists rated the digitized angiography, CT, and MR images as adequate to excellent for use in teaching files and case presentations. With regard to digitized plain films, the average rating was adequate. As for performance, the authors achieved a 68% improvement in the time required to digitize hardcopy films using the DDC instead of a professional-quality scanner. The PC-based solution provides a means for digitizing multiple images from many different types of media in a timely fashion while maintaining adequate image presentation quality.
Overview of the H.264/AVC video coding standard
NASA Astrophysics Data System (ADS)
Luthra, Ajay; Topiwala, Pankaj N.
2003-11-01
H.264/MPEG-4 AVC is the latest coding standard jointly developed by the Video Coding Experts Group (VCEG) of ITU-T and Moving Picture Experts Group (MPEG) of ISO/IEC. It uses state of the art coding tools and provides enhanced coding efficiency for a wide range of applications including video telephony, video conferencing, TV, storage (DVD and/or hard disk based), streaming video, digital video creation, digital cinema and others. In this paper an overview of this standard is provided. Some comparisons with the existing standards, MPEG-2 and MPEG-4 Part 2, are also provided.
An ultra-low-power image compressor for capsule endoscope.
Lin, Meng-Chun; Dung, Lan-Rong; Weng, Ping-Kuo
2006-02-25
Gastrointestinal (GI) endoscopy has been widely applied for the diagnosis of diseases of the alimentary canal, including Crohn's disease, celiac disease and other malabsorption disorders, benign and malignant tumors of the small intestine, vascular disorders, and medication-related small bowel injury. The wireless capsule endoscope has been successfully utilized to diagnose diseases of the small intestine and alleviate the discomfort and pain of patients. However, the resolution of the demosaicked image is still low, and some interesting spots may be unintentionally omitted. In particular, the images become severely distorted when physicians zoom in for detailed diagnosis. Increasing the resolution may cause significant power consumption in the RF transmitter; hence, image compression is necessary to reduce the power dissipation of the RF transmitter. To overcome this drawback, we have been developing a new capsule endoscope, called GICam. We developed an ultra-low-power image compression processor for capsule endoscopes or swallowable imaging capsules. In applications of capsule endoscopy, it is imperative to consider battery life/performance trade-offs. Applying state-of-the-art video compression techniques may significantly reduce the image bit rate through their high compression ratios, but they all require intensive computation and consume much battery power. There are many fast compression algorithms for reducing the computation load; however, they may distort the original image, which is unacceptable for medical care. Thus, this paper first simplifies traditional video compression algorithms and proposes a scalable compression architecture. As a result, the developed video compressor costs only 31 K gates at 2 frames per second, consumes 14.92 mW, and reduces the video size by at least 75%.
Magnetic Braking: A Video Analysis
NASA Astrophysics Data System (ADS)
Molina-Bolívar, J. A.; Abella-Palacios, A. J.
2012-10-01
This paper presents a laboratory exercise that introduces students to the use of video analysis software and the Lenz's law demonstration. Digital techniques have proved to be very useful for the understanding of physical concepts. In particular, the availability of affordable digital video offers students the opportunity to actively engage in kinematics in introductory-level physics [1,2]. By using a digital video's frame-advance feature and "marking" the position of a moving object in each frame, students are able to more precisely determine the position of an object at much smaller time increments than would be possible with common timing devices. Once the student collects data consisting of positions and times, these values may be manipulated to determine velocity and acceleration. There are a variety of commercial and free applications that can be used for video analysis. Because the relevant technology has become inexpensive, video analysis has become a prevalent tool in introductory physics courses.
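Once positions and times have been marked in the video-analysis software, velocity and acceleration follow from finite differences. The sketch below uses NumPy's gradient for central differences; the trajectory in the usage example is made up purely for illustration.

    import numpy as np

    def kinematics_from_tracking(times, positions):
        """Central-difference velocity and acceleration from frame-by-frame positions.

        times     : 1-D array of time stamps (s), one per marked frame.
        positions : 1-D array of positions (m) marked with video-analysis software.
        """
        t = np.asarray(times, dtype=float)
        x = np.asarray(positions, dtype=float)
        v = np.gradient(x, t)    # dx/dt
        a = np.gradient(v, t)    # dv/dt
        return v, a

    # e.g., a magnet falling through a conducting tube, sampled at 30 fps
    t = np.arange(0, 1.0, 1 / 30)
    x = 0.5 * 9.81 * t**2 * np.exp(-2 * t)   # made-up trajectory for illustration
    v, a = kinematics_from_tracking(t, x)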
The compressed average image intensity metric for stereoscopic video quality assessment
NASA Astrophysics Data System (ADS)
Wilczewski, Grzegorz
2016-09-01
The following article presents insights into the design, creation and testing of an original metric for 3DTV video quality evaluation. The Compressed Average Image Intensity (CAII) mechanism is based upon stereoscopic video content analysis, with its core feature and functionality being to serve as a versatile tool for effective 3DTV service quality assessment. Being an objective quality metric, it may be utilized as a reliable source of information about the actual performance of a given 3DTV system under strict provider evaluation. Concerning the testing and overall performance analysis of the CAII metric, the paper presents a comprehensive study of results gathered across several testing routines on a selected set of stereoscopic video samples. As a result, the designed method for stereoscopic video quality evaluation is investigated across a range of synthetic visual impairments injected into the original video stream.
Digital Video for Fostering Self-Reflection in an ePortfolio Environment
ERIC Educational Resources Information Center
Cheng, Gary; Chau, Juliana
2009-01-01
The ability to self-reflect is widely recognized as a desirable learner attribute that can induce deep learning. Advances in computer-mediated communication technologies have led to intense interest in higher education in exploring the potential of digital tools, particularly digital video, for fostering self-reflection. While there are reports…
Digital image processing of bone - Problems and potentials
NASA Technical Reports Server (NTRS)
Morey, E. R.; Wronski, T. J.
1980-01-01
The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.
Use of Digital Videos in New Zealand Science Classrooms: Opportunities for Teachers and Students
ERIC Educational Resources Information Center
Chen, Junjun; Cowie, Bronwen
2016-01-01
This paper reports how New Zealand teachers used digital videos from an educational website in science classrooms and how teachers and students viewed the use of videos. The study involved lesson observations in nine different classrooms, student and teacher interviews, and teacher focus group discussions. Multiple qualitative data were analysed…
Watermarking 3D Objects for Verification
1999-01-01
signal (audio/image/video) processing and steganography fields, and even newer to the computer graphics community. Inherently, digital watermarking of... quality images, and digital video. The field of digital watermarking is relatively new, and many of its terms have not been well defined. Among the different media types, watermarking of 2D still images is comparatively better studied. Inherently, digital watermarking of 3D objects remains a
NASA Astrophysics Data System (ADS)
Chidananda, H.; Reddy, T. Hanumantha
2017-06-01
This paper presents a natural representation of numerical digits using hand activity analysis, based on the number of fingers outstretched for each numerical digit in a sequence extracted from a video. The analysis is based on determining a set of six features from a hand image. The most important features used from each frame in a video are the first fingertip from the top, the palm line, the palm center, and the valley points between the fingers that exist above the palm line. Using this approach, a user can convey any number of numerical digits naturally in a video using the right hand, the left hand, or both. Each numerical digit ranges from 0 to 9. The hand(s) used to convey digits (right, left, or both) can be recognized accurately using the valley points, and from this recognition it can be determined whether the user is right- or left-handed in practice. In this work, the hands and face are first detected using the YCbCr color space, and the face is removed using an ellipse-based method. Then, the hands are analyzed to recognize the activity that represents a series of numerical digits in a video. This work uses a pixel-continuity algorithm based on a 2D coordinate geometry system and does not rely on calculus, contours, convex hulls or datasets.
An Unequal Secure Encryption Scheme for H.264/AVC Video Compression Standard
NASA Astrophysics Data System (ADS)
Fan, Yibo; Wang, Jidong; Ikenaga, Takeshi; Tsunoo, Yukiyasu; Goto, Satoshi
H.264/AVC is the newest video coding standard. It has many new features which can easily be used for video encryption. In this paper, we propose a new scheme for video encryption for the H.264/AVC video compression standard. We define Unequal Secure Encryption (USE) as an approach that applies different encryption schemes (with different security strength) to different parts of the compressed video data. The USE scheme includes two parts: video data classification and unequal secure video data encryption. Firstly, we classify the video data into two partitions: an important data partition and an unimportant data partition. The important data partition is small in size and receives strong protection, while the unimportant data partition is large in size and receives weaker protection. Secondly, we use AES as a block cipher to encrypt the important data partition and LEX as a stream cipher to encrypt the unimportant data partition. AES is the most widely used symmetric cipher and can ensure high security. LEX is a new stream cipher which is based on AES, and its computational cost is much lower than that of AES. In this way, our scheme can achieve both high security and low computational cost. Besides the USE scheme, we propose a low-cost design of a hybrid AES/LEX encryption module. Our experimental results show that the computational cost of the USE scheme is low (about 25% of naive encryption at Level 0 with VEA used). The hardware cost for the hybrid AES/LEX module is 4678 gates and the AES encryption throughput is about 50 Mbps.
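The partition-then-encrypt idea can be sketched as below. Because LEX is not available in common cryptographic libraries, AES in CTR mode stands in for the stream-cipher role in this illustration; the key handling, padding choice, and data classification are assumptions, not the authors' design.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives import padding

    def encrypt_unequal(important: bytes, unimportant: bytes, key: bytes):
        """Encrypt the important partition with AES-CBC and the unimportant
        partition with a stream cipher (AES-CTR standing in for LEX)."""
        iv = os.urandom(16)
        nonce = os.urandom(16)

        # block cipher for the small, high-value partition
        padder = padding.PKCS7(128).padder()
        padded = padder.update(important) + padder.finalize()
        enc1 = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        ct_important = enc1.update(padded) + enc1.finalize()

        # stream cipher for the large, low-value partition
        enc2 = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        ct_unimportant = enc2.update(unimportant) + enc2.finalize()

        return iv, nonce, ct_important, ct_unimportant

    # usage with a random 128-bit key (requires the 'cryptography' package, 3.1+)
    key = os.urandom(16)
    iv, nonce, c1, c2 = encrypt_unequal(b"headers and motion vectors", b"residual data" * 100, key)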
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teuton, Jeremy R.; Griswold, Richard L.; Mehdi, Beata L.
Precise analysis of both (S)TEM images and video are time- and labor-intensive processes. As an example, determining when crystal growth and shrinkage occur during the dynamic process of Li dendrite deposition and stripping involves manually scanning through each frame in the video to extract a specific set of frames/images. For large numbers of images, this process can be very time consuming, so a fast and accurate automated method is desirable. Given this need, we developed software that uses analysis of video compression statistics for detecting and characterizing events in large data sets. This software works by converting the data into a series of images which it compresses into an MPEG-2 video using the open source “avconv” utility [1]. The software does not use the video itself, but rather analyzes the video statistics from the first pass of the video encoding that avconv records in the log file. This file contains statistics for each frame of the video, including the frame quality, intra-texture and predicted-texture bits, and forward and backward motion vector resolution, among others. In all, avconv records 15 statistics for each frame. By combining different statistics, we have been able to detect events in various types of data. We have developed an interactive tool for exploring the data and the statistics that aids the analyst in selecting useful statistics for each analysis. Going forward, an algorithm for detecting and possibly describing events automatically can be written based on the statistic(s) for each data type.
Converting laserdisc video to digital video: a demonstration project using brain animations.
Jao, C S; Hier, D B; Brint, S U
1995-01-01
Interactive laserdiscs are of limited value in large group learning situations due to the expense of establishing multiple workstations. The authors implemented an alternative to laserdisc video by using indexed digital video combined with an expert system. High-quality video was captured from a laserdisc player and combined with waveform audio into an audio-video-interleave (AVI) file format in the Microsoft Video-for-Windows environment (Microsoft Corp., Seattle, WA). With the use of an expert system, a knowledge-based computer program provided random access to these indexed AVI files. The program can be played on any multimedia computer without the need for laserdiscs. This system offers a high level of interactive video without the overhead and cost of a laserdisc player.
iTRAC : intelligent video compression for automated traffic surveillance systems.
DOT National Transportation Integrated Search
2010-08-01
Non-intrusive video imaging sensors are commonly used in traffic monitoring and surveillance. For some applications it is necessary to transmit the video data over communication links. However, due to increased requirements of bitrate this mean...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Wu-chi; Crawfis, Roger; Weide, Bruce
2002-02-01
In this project, the authors propose the research, development, and distribution of a stackable component-based multimedia streaming protocol middleware service. The goals of this stackable middleware interface include: (1) The middleware service will provide application writers and scientists easy-to-use interfaces that support their visualization needs. (2) The middleware service will support a variety of image compression modes. Currently, many of the network adaptation protocols for video have been developed with DCT-based compression algorithms like H.261, MPEG-1, or MPEG-2 in mind. It is expected that, for advanced scientific computing applications, lossy compression of the image data will be unacceptable in certain instances. The middleware service will support several in-line lossless compression modes for error-sensitive scientific visualization data. (3) The middleware service will support two different types of streaming video modes: one for interactive collaboration of scientists and a stored video streaming mode for viewing prerecorded animations. The use of two different streaming types will allow the quality of the video delivered to the user to be maximized. Most importantly, this service will operate transparently to the user (with some basic controls exported to the user for domain-specific tweaking). In the spirit of layered network protocols (like ISO and TCP/IP), application writers should not have to know a large amount about lower-level network details. Currently, many example video streaming players have their congestion management techniques tightly integrated into the video player itself and are, for the most part, "one-off" applications. As more networked multimedia and video applications are written in the future, a larger percentage of these programmers and scientists will most likely know little about the underlying networking layer. By providing a simple, powerful, and semi-transparent middleware layer, the successful completion of this project will help serve as a catalyst to support future video-based applications, particularly those of advanced scientific computing applications.
NASA Astrophysics Data System (ADS)
Liu, Yu; Lin, Xiaocheng; Fan, Nianfei; Zhang, Lin
2016-01-01
Wireless video multicast has become one of the key technologies in wireless applications. But the main challenge of conventional wireless video multicast, i.e., the cliff effect, remains unsolved. To overcome the cliff effect, a hybrid digital-analog (HDA) video transmission framework based on SoftCast, which transmits the digital bitstream with the quantization residuals, is proposed. With an effective power allocation algorithm and appropriate parameter settings, the residual gains can be maximized; meanwhile, the digital bitstream can assure transmission of a basic video to the multicast receiver group. In the multiple-input multiple-output (MIMO) system, since nonuniform noise interference on different antennas can be regarded as the cliff effect problem, ParCast, which is a variation of SoftCast, is also applied to video transmission to solve it. The HDA scheme with corresponding power allocation algorithms is also applied to improve video performance. Simulations show that the proposed HDA scheme can overcome the cliff effect completely with the transmission of residuals. What is more, it outperforms the compared WSVC scheme by more than 2 dB when transmitting under the same bandwidth, and it can further improve performance by nearly 8 dB in MIMO when compared with the ParCast scheme.
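A SoftCast-style linear power allocation, which the hybrid scheme builds on, scales each chunk in inverse proportion to the fourth root of its variance under a total power budget. The sketch below is a hedged illustration of that principle under assumed inputs; it is not the paper's allocation algorithm and assumes nonzero chunk variances.

    import numpy as np

    def softcast_power_allocation(chunk_vars, total_power=1.0):
        """SoftCast-style linear scaling: gain of chunk i proportional to
        variance**(-1/4), normalized so the transmit-power budget is met.

        chunk_vars  : variances (energies) of the transform or residual chunks.
        total_power : transmit power budget shared by all chunks.
        """
        lam = np.asarray(chunk_vars, dtype=float)     # assumed strictly positive
        g = lam ** (-0.25)                            # unnormalized gains
        # transmit power of chunk i is g_i**2 * lam_i; rescale to the budget
        scale = np.sqrt(total_power / np.sum(g**2 * lam))
        return g * scale

    gains = softcast_power_allocation([4.0, 1.0, 0.25, 0.01], total_power=1.0)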
ATM Quality of Service Tests for Digitized Video Using ATM Over Satellite: Laboratory Tests
NASA Technical Reports Server (NTRS)
Ivancic, William D.; Brooks, David E.; Frantz, Brian D.
1997-01-01
A digitized video application was used to help determine minimum quality of service parameters for asynchronous transfer mode (ATM) over satellite. For these tests, binomially distributed and other errors were digitally inserted in an intermediate frequency link via a satellite modem and a commercial Gaussian noise generator. In this paper, the relationship between the ATM cell error and cell loss parameter specifications is discussed with regard to this application. In addition, the video-encoding algorithms, test configurations, and results are presented in detail.
Estimating JPEG2000 compression for image forensics using Benford's Law
NASA Astrophysics Data System (ADS)
Qadir, Ghulam; Zhao, Xi; Ho, Anthony T. S.
2010-05-01
With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content are becoming increasingly important, and a growing concern to many government and commercial sectors. Image forensics, based on a passive statistical analysis of the image data only, is an alternative approach to the active embedding of data associated with digital watermarking. Benford's Law was first introduced to analyse the probability distribution of the first digits (1-9) of natural data, and has since been applied to accounting forensics for detecting fraudulent income tax returns [9]. More recently, Benford's Law has been further applied to image processing and image forensics. For example, Fu et al. [5] proposed a Generalised Benford's Law technique for estimating the Quality Factor (QF) of JPEG compressed images. In our previous work, we proposed a framework incorporating the Generalised Benford's Law to accurately detect unknown JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000 (a relatively new image compression standard) offers higher compression rates and better image quality compared to JPEG compression. In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image forensics applications. By analysing the DWT coefficients and JPEG2000 compression on 1338 test images, the initial results indicate that the first-digit probability of DWT coefficients follows Benford's Law. The unknown JPEG2000 compression rate of an image can also be derived, and verified with the help of a divergence factor, which shows the deviation between the observed probabilities and Benford's Law. Based on 1338 test images, the mean divergence for DWT coefficients is approximately 0.0016, which is lower than that for DCT coefficients at 0.0034. However, the mean divergence for JPEG2000 images at a compression rate of 0.1 is 0.0108, which is much higher than for uncompressed DWT coefficients. This result clearly indicates the presence of compression in the image. Moreover, we compare the results of first-digit probability and divergence among JPEG2000 compression rates of 0.1, 0.3, 0.5 and 0.9. The initial results show that the expected difference among them could be used for further analysis to estimate the unknown JPEG2000 compression rates.
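The first-digit analysis can be sketched as follows: compute the empirical leading-digit distribution of the (nonzero) DWT coefficients and measure its deviation from Benford's Law. The chi-square-style divergence used below is an assumption for illustration; the paper defines its own divergence factor.

    import numpy as np

    def first_digit_distribution(coeffs):
        """Empirical probability of leading digits 1-9 of nonzero coefficients."""
        c = np.abs(np.asarray(coeffs, dtype=float))
        c = c[c > 0]                                   # assumes some nonzero coefficients
        exponents = np.floor(np.log10(c))
        first = np.floor(c / 10.0 ** exponents).astype(int)
        counts = np.bincount(first, minlength=10)[1:10]
        return counts / counts.sum()

    def benford_divergence(coeffs):
        """Deviation of the observed first-digit law from Benford's Law
        (a simple chi-square-style measure, used here as an assumption)."""
        p = first_digit_distribution(coeffs)
        digits = np.arange(1, 10)
        benford = np.log10(1.0 + 1.0 / digits)
        return np.sum((p - benford) ** 2 / benford)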
Intelligent bandwidth compression
NASA Astrophysics Data System (ADS)
Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.
1980-02-01
The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system, together with the adaptive priority controller, are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A videotape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the database.
Taking Digital Creativity to the Art Classroom: Mystery Box Swap
ERIC Educational Resources Information Center
Shin, Ryan
2010-01-01
Today's students are the first generation to grow up with computers, cell-phones, video games, music and video players, and other digital technologies. As "digital natives", a new term coined by Prensky (2001), they spend more time reading text messaging lines than lines from books, and they spend more time on Facebook than putting their energies…
Remote driving with reduced bandwidth communication
NASA Technical Reports Server (NTRS)
Depiero, Frederick W.; Noell, Timothy E.; Gee, Timothy F.
1993-01-01
Oak Ridge National Laboratory has developed a real-time video transmission system for low bandwidth remote operations. The system supports both continuous transmission of video for remote driving and progressive transmission of still images. Inherent in the system design is a spatiotemporal limitation to the effects of channel errors. The average data rate of the system is 64,000 bits/s, a compression of approximately 1000:1 for the black-and-white National Television System Committee (NTSC) video. The image quality of the transmissions is maintained at a level that supports teleoperation of a high mobility multipurpose wheeled vehicle at speeds up to 15 mph on a moguled dirt track. Video compression is achieved by using Laplacian image pyramids and a combination of classical techniques. Certain subbands of the image pyramid are transmitted by using interframe differencing with a periodic refresh to aid in bandwidth reduction. Images are also foveated to concentrate image detail in a steerable region. The system supports dynamic video quality adjustments between frame rate, image detail, and foveation rate. A typical configuration for the system used during driving has a frame rate of 4 Hz, a compression per frame of 125:1, and a resulting latency of less than 1 s.
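The subband decomposition and interframe differencing described above can be illustrated with a short sketch. This is not ORNL's codec; it assumes OpenCV for the pyramid filtering and only shows the Laplacian pyramid construction and the per-subband frame differencing with an optional refresh.

```python
import cv2
import numpy as np

def laplacian_pyramid(frame, levels=3):
    """Decompose a grayscale frame into `levels` band-pass subbands plus a low-pass band."""
    pyramid, current = [], frame.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyramid.append(current - up)   # band-pass residual at this scale
        current = down
    pyramid.append(current)            # coarsest low-pass band
    return pyramid

def subband_deltas(prev_pyramid, cur_pyramid, refresh=False):
    """Interframe differences per subband; a periodic refresh sends the bands verbatim."""
    if refresh or prev_pyramid is None:
        return cur_pyramid
    return [cur - prev for cur, prev in zip(cur_pyramid, prev_pyramid)]

# Illustrative use with synthetic frames; a real system would read camera frames.
rng = np.random.default_rng(0)
prev = laplacian_pyramid(rng.integers(0, 256, (240, 320)).astype(np.uint8))
cur = laplacian_pyramid(rng.integers(0, 256, (240, 320)).astype(np.uint8))
print([band.shape for band in subband_deltas(prev, cur)])
```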
Using Online Digital Tools and Video to Support International Problem-Based Learning
ERIC Educational Resources Information Center
Lajoie, Susanne P.; Hmelo-Silver, Cindy; Wiseman, Jeffrey; Chan, Lap Ki; Lu, Jingyan; Khurana, Chesta; Cruz-Panesso, Ilian; Poitras, Eric; Kazemitabar, Maedeh
2014-01-01
The goal of this study is to examine how to facilitate cross-cultural groups in problem-based learning (PBL) using online digital tools and videos. The PBL consisted of two video-based cases used to trigger student-learning issues about giving bad news to HIV-positive patients. Mixed groups of medical students from Canada and Hong Kong worked with…
Using Video in Urban Elementary Professional Development to Support Digital Media Arts Integration
ERIC Educational Resources Information Center
Woodard, Rebecca; Machado, Emily
2017-01-01
Using ethnographic methods, this article looks closely at how a team of first-grade teachers and digital media artists in an urban elementary school used video in innovative ways during professional development over the course of one year. Extending a body of literature that primarily documents how video can be used as a tool in professional…
Evaluating video digitizer errors
NASA Astrophysics Data System (ADS)
Peterson, C.
2016-01-01
Analog output video cameras remain popular for recording meteor data. Although these cameras uniformly employ electronic detectors with fixed pixel arrays, the digitization process requires resampling the horizontal lines as they are output in order to reconstruct the pixel data, usually resulting in a new data array of different horizontal dimensions than the native sensor. Pixel timing is not provided by the camera, and must be reconstructed based on line sync information embedded in the analog video signal. Using a technique based on hot pixels, I present evidence that jitter, sync detection, and other timing errors introduce both position and intensity errors which are not present in cameras which internally digitize their sensors and output the digital data directly.
JPEG XS call for proposals subjective evaluations
NASA Astrophysics Data System (ADS)
McNally, David; Bruylants, Tim; Willème, Alexandre; Ebrahimi, Touradj; Schelkens, Peter; Macq, Benoit
2017-09-01
In March 2016 the Joint Photographic Experts Group (JPEG), formally known as ISO/IEC SC29 WG1, issued a call for proposals soliciting compression technologies for a low-latency, lightweight and visually transparent video compression scheme. Within the JPEG family of standards, this scheme was denominated JPEG XS. The subjective evaluation of visually lossless compressed video sequences at high resolutions and bit depths poses particular challenges. This paper describes the adopted procedures, the subjective evaluation setup, the evaluation process and summarizes the obtained results which were achieved in the context of the JPEG XS standardization process.
Video Imaging System Particularly Suited for Dynamic Gear Inspection
NASA Technical Reports Server (NTRS)
Broughton, Howard (Inventor)
1999-01-01
A digital video imaging system that captures the image of a single tooth of interest of a rotating gear is disclosed. The video imaging system detects the complete rotation of the gear and divides that rotation into discrete time intervals, so that the instant when each tooth of interest reaches the desired location is precisely determined; that location is illuminated in unison with a digital video camera so as to record a single digital image for each tooth. The digital images are available for instantaneous analysis of the tooth of interest, or can be stored to provide a history that may later be used to predict gear failure, such as gear fatigue. The imaging system is completely automated by a controlling program, so that it may run for several days acquiring images without supervision from the user.
Compressed digital holography: from micro towards macro
NASA Astrophysics Data System (ADS)
Schretter, Colas; Bettens, Stijn; Blinder, David; Pesquet-Popescu, Béatrice; Cagnazzo, Marco; Dufaux, Frédéric; Schelkens, Peter
2016-09-01
signal processing methods from software-driven computer engineering and applied mathematics. The compressed sensing theory in particular established a practical framework for reconstructing the scene content using few linear combinations of complex measurements and a sparse prior for regularizing the solution. Compressed sensing found direct applications in digital holography for microscopy. Indeed, the wave propagation phenomenon in free space mixes in a natural way the spatial distribution of point sources from the 3-dimensional scene. As the 3-dimensional scene is mapped to a 2-dimensional hologram, the hologram samples form a compressed representation of the scene as well. This overview paper discusses contributions in the field of compressed digital holography at the micro scale. Then, an outreach on future extensions towards the real-size macro scale is discussed. Thanks to advances in sensor technologies, increasing computing power and the recent improvements in sparse digital signal processing, holographic modalities are on the verge of practical high-quality visualization at a macroscopic scale where much higher resolution holograms must be acquired and processed on the computer.
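A generic compressed-sensing reconstruction of the kind alluded to above can be sketched with iterative soft thresholding (ISTA). The sketch assumes a plain linear sensing model y = A x with a random matrix standing in for the free-space propagation operator, which is not modeled here.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.02, n_iter=500):
    """Iterative soft thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy example: recover a sparse "scene" from a few random linear measurements,
# standing in for the mixing performed by free-space propagation.
rng = np.random.default_rng(0)
n, m, k = 256, 96, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_hat = ista(A, A @ x_true)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```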
Wavelet/scalar quantization compression standard for fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.
1996-06-12
The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
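The adaptive uniform scalar quantization idea can be illustrated as follows. This is not the FBI WSQ codec (which fixes its own filter bank, quantization tables and Huffman coding); the sketch assumes PyWavelets (pywt), quantizes only the detail subbands with a dead-zone uniform quantizer whose bin width is tied to the subband standard deviation, and reports the round-trip error.

```python
import numpy as np
import pywt  # PyWavelets, assumed; the real WSQ spec defines its own filters and tables

def quantize(band, q):
    """Uniform scalar quantization with a dead zone around zero."""
    return np.sign(band) * np.floor(np.abs(band) / q)

def dequantize(index, q):
    return np.sign(index) * (np.abs(index) + 0.5) * q

def wsq_like_roundtrip(image, wavelet="bior4.4", level=3, k=0.05):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    rebuilt = [coeffs[0]]                      # approximation band kept as-is in this sketch
    for detail in coeffs[1:]:
        bands = []
        for band in detail:
            q = max(k * band.std(), 1e-3)      # bin width tied to subband statistics (illustrative)
            bands.append(dequantize(quantize(band, q), q))
        rebuilt.append(tuple(bands))
    return pywt.waverec2(rebuilt, wavelet)

image = np.random.default_rng(0).random((128, 128)) * 255.0
recon = wsq_like_roundtrip(image)[:128, :128]
print("RMS reconstruction error:", float(np.sqrt(np.mean((recon - image) ** 2))))
```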
An analysis of technology usage for streaming digital video in support of a preclinical curriculum.
Dev, P; Rindfleisch, T C; Kush, S J; Stringer, J R
2000-01-01
Usage of streaming digital video of lectures in preclinical courses was measured by analysis of the data in the log file maintained on the web server. We observed that students use the video when it is available. They do not use it to replace classroom attendance but rather for review before examinations or when a class has been missed. Usage of video has not increased significantly for any course within the 18 month duration of this project.
Bynum, Ann B; Cranford, Charles O; Irwin, Cathy A; Denny, George S
2002-08-01
Socioeconomic and demographic factors can affect the impact of telehealth education programs that use interactive compressed video technology. This study assessed program satisfaction among participants in the University of Arkansas for Medical Sciences' School Telehealth Education Program delivered by interactive compressed video. Variables in the one-group posttest study were age, gender, ethnicity, education, community size, and program topics for years 1997-1999. The convenience sample included 3,319 participants in junior high and high schools. The School Telehealth Education Program provided information about health risks, disease prevention, health promotion, personal growth, and health sciences. Adolescents reported medium to high levels of satisfaction regarding program interest and quality. Significantly higher satisfaction was expressed for programs on muscular dystrophy, anatomy of the heart, and tobacco addiction (p < 0.001 to p = 0.003). Females, African Americans, and junior high school students reported significantly greater satisfaction (p < 0.001 to p = 0.005). High school students reported significantly greater satisfaction than junior high school students regarding the interactive video equipment (p = 0.011). White females (p = 0.025) and African American males (p = 0.004) in smaller, rural communities reported higher satisfaction than White males. The School Telehealth Education Program, delivered by interactive compressed video, promoted program satisfaction among rural and minority populations and among junior high and high school students. Effective program methods included an emphasis on participants' learning needs, increasing access in rural areas among ethnic groups, speaker communication, and clarity of the program presentation.
Depth assisted compression of full parallax light fields
NASA Astrophysics Data System (ADS)
Graziosi, Danillo B.; Alpaslan, Zahir Y.; El-Ghoroury, Hussein S.
2015-03-01
Full parallax light field displays require high pixel density and huge amounts of data. Compression is a necessary tool used by 3D display systems to cope with the high bandwidth requirements. One of the formats adopted by MPEG for 3D video coding standards is the use of multiple views with associated depth maps. Depth maps enable the coding of a reduced number of views, and are used by compression and synthesis software to reconstruct the light field. However, most of the developed coding and synthesis tools target linearly arranged cameras with small baselines. Here we propose to use the 3D video coding format for full parallax light field coding. We introduce a view selection method inspired by plenoptic sampling followed by transform-based view coding and view synthesis prediction to code residual views. We determine the minimal requirements for view sub-sampling and present the rate-distortion performance of our proposal. We also compare our method with established video compression techniques, such as H.264/AVC, H.264/MVC, and the new 3D video coding algorithm, 3DV-ATM. Our results show that our method not only has an improved rate-distortion performance, it also preserves the structure of the perceived light fields better.
2015-01-01
Recovered fragments of this entry describe a lidar imaging approach in which, instead of a one-dimensional (1D) fan beam, a laser source modulates a digital micromirror device (DMD), in the manner of streak tube imaging lidar [15]; visible reference fragments cite IEEE Trans. Inform. Theory, vol. 52, pp. 1289-1306, 2006, and D. Dudley, W. Duncan and J. Slaughter, "Emerging Digital Micromirror Device (DMD) Applications."
Shaw, S L; Salmon, E D; Quatrano, R S
1995-12-01
In this report, we describe a relatively inexpensive method for acquiring, storing and processing light microscope images that combines the advantages of video technology with the powerful medium now termed digital photography. Digital photography refers to the recording of images as digital files that are stored, manipulated and displayed using a computer. This report details the use of a gated video-rate charge-coupled device (CCD) camera and a frame grabber board for capturing 256 gray-level digital images from the light microscope. This camera gives high-resolution bright-field, phase contrast and differential interference contrast (DIC) images but, also, with gated on-chip integration, has the capability to record low-light level fluorescent images. The basic components of the digital photography system are described, and examples are presented of fluorescence and bright-field micrographs. Digital processing of images to remove noise, to enhance contrast and to prepare figures for printing is discussed.
Digital video steganalysis exploiting collusion sensitivity
NASA Astrophysics Data System (ADS)
Budhia, Udit; Kundur, Deepa
2004-09-01
In this paper we present an effective steganalysis technique for digital video sequences based on the collusion attack. Steganalysis is the process of detecting, with high probability and low complexity, the presence of covert data in multimedia. Existing algorithms for steganalysis target detecting covert information in still images; when applied directly to video sequences these approaches are suboptimal. In this paper, we present a method that overcomes this limitation by using redundant information present in the temporal domain to detect covert messages in the form of Gaussian watermarks. Our gains are achieved by exploiting the collusion attack that has recently been studied in the field of digital video watermarking, and more sophisticated pattern recognition tools. Applications of our scheme include cybersecurity and cyberforensics.
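The collusion idea can be illustrated in a few lines. The sketch below is an assumption-laden toy, not the authors' detector: it assumes the covert data is an independent additive Gaussian mark per frame, estimates the host by temporal averaging of nearly static frames, and uses the residual energy as a crude detection feature.

```python
import numpy as np

def collusion_residuals(frames):
    """Temporal collusion: averaging nearly static frames approximates the clean host
    when the per-frame Gaussian marks are independent, so the residuals expose them."""
    host_estimate = frames.mean(axis=0)
    return frames - host_estimate

def residual_energy(frames):
    """Crude detection feature: mean residual variance (rises when covert data is present)."""
    residuals = collusion_residuals(frames)
    return float(residuals.var(axis=(1, 2)).mean())

# Toy comparison between a clean and a watermarked static sequence.
rng = np.random.default_rng(1)
scene = rng.random((64, 64))
sensor_noise = lambda: 0.01 * rng.normal(size=(64, 64))
clean = np.stack([scene + sensor_noise() for _ in range(8)])
marked = np.stack([scene + 0.05 * rng.normal(size=(64, 64)) + sensor_noise() for _ in range(8)])
print("clean feature :", residual_energy(clean))
print("marked feature:", residual_energy(marked))
```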
NASA Astrophysics Data System (ADS)
Gunay, Omer; Ozsarac, Ismail; Kamisli, Fatih
2017-05-01
Video recording is an essential property of new generation military imaging systems. Playback of the stored video on the same device is also desirable, as it provides several operational benefits to end users. Two very important constraints for many military imaging systems, especially for hand-held devices and thermal weapon sights, are power consumption and size. To meet these constraints, it is essential to perform most of the processing applied to the video signal, such as preprocessing, compression, storage, decoding, playback and other system functions, on a single programmable chip, such as an FPGA, DSP, GPU or ASIC. In this work, H.264/AVC (Advanced Video Coding) compatible video compression, storage, decoding and playback blocks are efficiently designed and implemented on FPGA platforms using FPGA fabric and the Altera NIOS II soft processor. Many subblocks that are used in video encoding are also reused during video decoding in order to save FPGA resources and power. Computationally complex blocks are designed using FPGA fabric, while blocks such as SD card write/read, H.264 syntax decoding and CAVLC decoding are implemented on the NIOS processor to benefit from software flexibility. In addition, to keep power consumption low, the system was designed to require limited external memory access. The design was tested using a 640x480, 25 fps thermal camera on a CYCLONE V FPGA, which is Altera's lowest-power FPGA family, and consumes less than 40% of the CYCLONE V 5CEFA7 FPGA resources on average.
ERIC Educational Resources Information Center
Hobbs, Renee; Donnelly, Katie; Friesem, Jonathan; Moen, Mary
2013-01-01
Many students enroll in video production courses in high school as part of a vocational, career, or technical program. While there has been an explosion of scholarly work in digital literacy in informal settings, less is known about how digital and media literacy competencies are developed through school-based video production courses. This study…
Intelligent vehicle control: Opportunities for terrestrial-space system integration
NASA Technical Reports Server (NTRS)
Shoemaker, Charles
1994-01-01
For 11 years the Department of Defense has cooperated with a diverse array of other Federal agencies, including the National Institute of Standards and Technology, the Jet Propulsion Laboratory, and the Department of Energy, to develop robotics technology for unmanned ground systems. These activities have addressed control system architectures supporting the sharing of tasks between the system operator and various automated subsystems, man-machine interfaces to intelligent vehicle systems, video compression supporting vehicle driving in low data rate digital communication environments, multiple simultaneous vehicle control by a single operator, path planning and retrace, and automated obstacle detection and avoidance subsystems. Performance metrics and test facilities for robotic vehicles were developed, permitting objective performance assessment of a variety of operator-automated vehicle control regimes. Progress in these areas will be described in the context of robotic vehicle testbeds specifically developed for automated vehicle research. These initiatives, particularly as regards the data compression, task sharing, and automated mobility topics, also have relevance in the space environment. The intersection of technology development interests between these two communities will be discussed in this paper.
Integrating TV/digital data spectrograph system
NASA Technical Reports Server (NTRS)
Duncan, B. J.; Fay, T. D.; Miller, E. R.; Wamsteker, W.; Brown, R. M.; Neely, P. L.
1975-01-01
A 25-mm vidicon camera was previously modified to allow operation in an integration mode for low-light-level astronomical work. The camera was then mated to a low-dispersion spectrograph for obtaining spectral information in the 400 to 750 nm range. A high speed digital video image system was utilized to digitize the analog video signal, place the information directly into computer-type memory, and record data on digital magnetic tape for permanent storage and subsequent analysis.
ERIC Educational Resources Information Center
Granger, Stewart; Dekkers, Makx; Weibel, Stuart L.; Kirriemuir, John; Lensch, Hendrik P. A.; Goesele, Michael; Seidel, Hans-Peter; Birmingham, William; Pardo, Bryan; Meek, Colin; Shifrin, Jonah; Goodvin, Renee; Lippy, Brooke
2002-01-01
One opinion piece and five articles in this issue discuss: digital preservation infrastructure; accomplishments and changes in the Dublin Core Metadata Initiative in 2001 and plans for 2002; video gaming and how it relates to digital libraries and learning technologies; overview of a music retrieval system; and the online version of the…
Compression of CCD raw images for digital still cameras
NASA Astrophysics Data System (ADS)
Sriram, Parthasarathy; Sudharsanan, Subramania
2005-03-01
Lossless compression of raw CCD images captured using color filter arrays has several benefits. The benefits include improved storage capacity, reduced memory bandwidth, and lower power consumption for digital still camera processors. The paper discusses the benefits in detail and proposes the use of a computationally efficient block adaptive scheme for lossless compression. Experimental results are provided that indicate that the scheme performs well for CCD raw images attaining compression factors of more than two. The block adaptive method also compares favorably with JPEG-LS. A discussion is provided indicating how the proposed lossless coding scheme can be incorporated into digital still camera processors enabling lower memory bandwidth and storage requirements.
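A block-adaptive prediction step of the kind described can be sketched as follows. This is not the paper's scheme: it simply chooses, per block, the better of a horizontal or vertical same-color (two-sample-distant) predictor on Bayer-like data and uses the residual entropy as a stand-in for the coded size; the synthetic raw data is hypothetical.

```python
import numpy as np

def residual_entropy(res):
    """Empirical entropy (bits/sample) of prediction residuals, a proxy for coded size."""
    _, counts = np.unique(res, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def block_adaptive_bits(raw, block=16):
    """Per block, keep the better of a horizontal or vertical same-color predictor
    (two samples away, matching a Bayer CFA) and report the average bits/sample."""
    h, w = raw.shape
    bits = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            b = raw[y:y + block, x:x + block].astype(np.int32)
            res_h = b[:, 2:] - b[:, :-2]      # horizontal same-color difference
            res_v = b[2:, :] - b[:-2, :]      # vertical same-color difference
            bits.append(min(residual_entropy(res_h), residual_entropy(res_v)))
    return float(np.mean(bits))

# Synthetic smooth 12-bit CFA-like data; a real pipeline would feed sensor raw here.
yy, xx = np.mgrid[0:128, 0:128]
raw = (8 * ((yy + xx) // 4) % 4096).astype(np.uint16)
print("avg bits/sample after prediction:", round(block_adaptive_bits(raw), 2))
```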
Peña, Raul; Ávila, Alfonso; Muñoz, David; Lavariega, Juan
2015-01-01
The recognition of clinical manifestations in both video images and physiological-signal waveforms is an important aid to improve safety and effectiveness in medical care. Physicians can rely on video-waveform (VW) observations to recognize difficult-to-spot signs and symptoms. The VW observations can also reduce the number of false positive incidents and expand the recognition coverage to abnormal health conditions. The synchronization between the video images and the physiological-signal waveforms is fundamental for the successful recognition of the clinical manifestations. The use of conventional equipment to synchronously acquire and display the video-waveform information involves complex tasks such as video capture/compression, the acquisition/compression of each physiological signal, and video-waveform synchronization based on timestamps. This paper introduces a data hiding technique capable of both enabling embedding channels and synchronously hiding samples of physiological signals into encoded video sequences. Our data hiding technique offers large data capacity and simplifies the complexity of the video-waveform acquisition and reproduction. The experimental results revealed successful embedding and full restoration of the signal samples. Our results also demonstrated a small distortion in objective video quality, a small increment in bit-rate, and embedded cost savings of -2.6196% for high and medium motion video sequences.
Observer performance assessment of JPEG-compressed high-resolution chest images
NASA Astrophysics Data System (ADS)
Good, Walter F.; Maitz, Glenn S.; King, Jill L.; Gennari, Rose C.; Gur, David
1999-05-01
The JPEG compression algorithm was tested on a set of 529 chest radiographs that had been digitized at a spatial resolution of 100 micrometer and contrast sensitivity of 12 bits. Images were compressed using five fixed 'psychovisual' quantization tables which produced average compression ratios in the range 15:1 to 61:1, and were then printed onto film. Six experienced radiologists read all cases from the laser printed film, in each of the five compressed modes as well as in the non-compressed mode. For comparison purposes, observers also read the same cases with reduced pixel resolutions of 200 micrometer and 400 micrometer. The specific task involved detecting masses, pneumothoraces, interstitial disease, alveolar infiltrates and rib fractures. Over the range of compression ratios tested, for images digitized at 100 micrometer, we were unable to demonstrate any statistically significant decrease (p greater than 0.05) in observer performance as measured by ROC techniques. However, the observers' subjective assessments of image quality did decrease significantly as image resolution was reduced and suggested a decreasing, but nonsignificant, trend as the compression ratio was increased. The seeming discrepancy between our failure to detect a reduction in observer performance, and other published studies, is likely due to: (1) the higher resolution at which we digitized our images; (2) the higher signal-to-noise ratio of our digitized films versus typical CR images; and (3) our particular choice of an optimized quantization scheme.
Video coding for 3D-HEVC based on saliency information
NASA Astrophysics Data System (ADS)
Yu, Fang; An, Ping; Yang, Chao; You, Zhixiang; Shen, Liquan
2016-11-01
As an extension of High Efficiency Video Coding (HEVC), 3D-HEVC has been widely researched under the impetus of the new generation coding standard in recent years. Compared with H.264/AVC, its compression efficiency is doubled while keeping the same video quality. However, its higher encoding complexity and longer encoding time are not negligible. To reduce the computational complexity and guarantee the subjective quality of virtual views, this paper presents a novel video coding method for 3D-HEVC based on saliency information, which is an important part of the Human Visual System (HVS). First of all, the relationship between the current coding unit and its adjacent units is used to adjust the maximum depth of each largest coding unit (LCU) and determine the SKIP mode reasonably. Then, according to the saliency information of each frame, the texture and its corresponding depth map are divided into three regions: salient area, middle area and non-salient area. Afterwards, different quantization parameters are assigned to the different regions to perform low-complexity coding. Finally, the compressed video is rendered into new viewpoint videos through the renderer tool. As shown in our experiments, the proposed method saves more bit rate than other approaches and achieves up to a 38% reduction in encoding time without subjective quality loss in compression or rendering.
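The region-dependent quantization step can be sketched as below. The saliency map is taken as given (any saliency model could supply it), and the thresholds and QP offsets are illustrative choices, not the values used in the paper.

```python
import numpy as np

def qp_map_from_saliency(saliency, base_qp=32, offsets=(-4, 0, 6), ctu=64):
    """Split each CTU into salient / middle / non-salient by its mean saliency and
    assign base_qp + offset accordingly (illustrative thresholds at the 1/3 quantiles)."""
    h, w = saliency.shape
    lo, hi = np.quantile(saliency, [1 / 3, 2 / 3])
    qp = np.zeros((h // ctu, w // ctu), dtype=int)
    for by in range(h // ctu):
        for bx in range(w // ctu):
            s = saliency[by * ctu:(by + 1) * ctu, bx * ctu:(bx + 1) * ctu].mean()
            region = 0 if s >= hi else (1 if s >= lo else 2)   # salient / middle / non-salient
            qp[by, bx] = base_qp + offsets[region]
    return qp

saliency = np.random.default_rng(0).random((256, 256))   # stand-in for a real saliency map
print(qp_map_from_saliency(saliency))
```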
Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu
2016-01-01
Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
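The block-matching step mentioned above is standard and easy to sketch. The code below is a plain full-search block matcher with a sum-of-absolute-differences criterion; it is not the paper's CS recovery pipeline, and the search range, block size and toy frames are arbitrary.

```python
import numpy as np

def block_match(ref, cur, block=8, search=4):
    """Full-search block matching: for each block of the current frame, find the
    displacement (within +/- search pixels) into the reference that minimizes SAD."""
    h, w = cur.shape
    motion = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            blk = cur[y:y + block, x:x + block].astype(np.int32)
            best_sad, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    sad = int(np.abs(blk - ref[yy:yy + block, xx:xx + block].astype(np.int32)).sum())
                    if best_sad is None or sad < best_sad:
                        best_sad, best_mv = sad, (dy, dx)
            motion[by, bx] = best_mv
    return motion

# Toy frames: the second frame is the first shifted down by 2 and right by 3 pixels,
# so interior blocks map back into the reference with a vector of (-2, -3).
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64)).astype(np.uint8)
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(block_match(ref, cur)[3, 3])
```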
A Macintosh-Based Scientific Images Video Analysis System
NASA Technical Reports Server (NTRS)
Groleau, Nicolas; Friedland, Peter (Technical Monitor)
1994-01-01
A set of experiments was designed at MIT's Man-Vehicle Laboratory in order to evaluate the effects of zero gravity on the human orientation system. During many of these experiments, the movements of the eyes are recorded on high quality video cassettes. The images must be analyzed off-line to calculate the position of the eyes at every moment in time. To this aim, I have implemented a simple inexpensive computerized system which measures the angle of rotation of the eye from digitized video images. The system is implemented on a desktop Macintosh computer, processes one play-back frame per second and exhibits adequate levels of accuracy and precision. The system uses LabVIEW, a digital output board, and a video input board to control a VCR, digitize video images, analyze them, and provide a user friendly interface for the various phases of the process. The system uses the Concept Vi LabVIEW library (Graftek's Image, Meudon la Foret, France) for image grabbing and displaying as well as translation to and from LabVIEW arrays. Graftek's software layer drives an Image Grabber board from Neotech (Eastleigh, United Kingdom). A Colour Adapter box from Neotech provides adequate video signal synchronization. The system also requires a LabVIEW driven digital output board (MacADIOS II from GW Instruments, Cambridge, MA) controlling a slightly modified VCR remote control used mainly to advance the video tape frame by frame.
Evaluation of smart video for transit event detection : final report.
DOT National Transportation Integrated Search
2009-06-01
Transit agencies are increasingly using video cameras to fight crime and terrorism. As the volume of video data increases, the existing digital video surveillance systems provide the infrastructure only to capture, store and distribute video, while l...
DOT National Transportation Integrated Search
2012-10-01
In this report we present a transportation video coding and wireless transmission system specifically tailored to automated vehicle tracking applications. By taking into account the video characteristics and the lossy nature of the wireless channe...
Temporal flicker reduction and denoising in video using sparse directional transforms
NASA Astrophysics Data System (ADS)
Kanumuri, Sandeep; Guleryuz, Onur G.; Civanlar, M. Reha; Fujibayashi, Akira; Boon, Choong S.
2008-08-01
The bulk of the video content available today over the Internet and over mobile networks suffers from many imperfections caused during acquisition and transmission. In the case of user-generated content, which is typically produced with inexpensive equipment, these imperfections manifest in various ways through noise, temporal flicker and blurring, just to name a few. Imperfections caused by compression noise and temporal flicker are present in both studio-produced and user-generated video content transmitted at low bit-rates. In this paper, we introduce an algorithm designed to reduce temporal flicker and noise in video sequences. The algorithm takes advantage of the sparse nature of video signals in an appropriate transform domain that is chosen adaptively based on local signal statistics. When the signal corresponds to a sparse representation in this transform domain, flicker and noise, which are spread over the entire domain, can be reduced easily by enforcing sparsity. Our results show that the proposed algorithm reduces flicker and noise significantly and enables better presentation of compressed videos.
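The core idea, enforcing sparsity in a transform domain so that noise and flicker spread across all coefficients are removed, can be sketched with a fixed block DCT instead of the adaptively chosen directional transforms used in the paper. The sketch assumes SciPy's dctn/idctn.

```python
import numpy as np
from scipy.fft import dctn, idctn  # assumed available (SciPy >= 1.4)

def denoise_patchwise(frame, patch=8, thresh=30.0):
    """Hard-threshold small transform coefficients patch by patch: content that is sparse
    in the transform survives, while noise spread over all coefficients is removed."""
    out = np.zeros_like(frame, dtype=float)
    h, w = frame.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            c = dctn(frame[y:y + patch, x:x + patch].astype(float), norm="ortho")
            c[np.abs(c) < thresh] = 0.0
            out[y:y + patch, x:x + patch] = idctn(c, norm="ortho")
    return out

# Toy use: a smooth gradient image corrupted by additive noise.
yy, xx = np.mgrid[0:64, 0:64]
clean = 2.0 * yy + 1.5 * xx
noisy = clean + np.random.default_rng(0).normal(0, 10, clean.shape)
den = denoise_patchwise(noisy)
print("noisy RMSE:   ", np.sqrt(np.mean((noisy - clean) ** 2)))
print("denoised RMSE:", np.sqrt(np.mean((den - clean) ** 2)))
```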
Ho, B T; Tsai, M J; Wei, J; Ma, M; Saipetch, P
1996-01-01
A new method of video compression for angiographic images has been developed to achieve a high compression ratio (~20:1) while eliminating the block artifacts which lead to loss of diagnostic accuracy. This method adopts the Motion Picture Experts Group's (MPEG's) motion-compensated prediction to take advantage of frame-to-frame correlation. However, in contrast to MPEG, the error images arising from mismatches in the motion estimation are encoded by the discrete wavelet transform (DWT) rather than the block discrete cosine transform (DCT). Furthermore, the authors developed a classification scheme which labels each block in an image as intra, error, or background type and encodes it accordingly. This hybrid coding can significantly improve the compression efficiency in certain cases. The method can be generalized to any dynamic image sequence application sensitive to block artifacts.
Tracheal intubation using Macintosh and 2 video laryngoscopes with and without chest compressions.
Kim, Young-Min; Kim, Ji-Hoon; Kang, Hyung-Goo; Chung, Hyun Soo; Yim, Hyeon-Woo; Jeong, Seung-Hee
2011-07-01
The aim of the study was to compare the time taken for intubation (TTI) using the Macintosh and 2 video laryngoscopes (VLs) (GlideScope [GVL]; Saturn Biomedical System, Burnaby, British Columbia, Canada, and Airway Scope [AWS]; Pentax, Tokyo, Japan) with and without chest compressions by experienced intubators in a mannequin model. This was a randomized crossover study. Twenty-two experienced physicians who have limited experience in the VLs participated in the study. The TTI using 3 laryngoscopes with and without compressions were compared. Median TTI difference between 2 conditions was only significant in the AWS (1.64 seconds; P = .01). There were no significant differences in the TTI between the Macintosh and the GVL or the AWS during compressions. In a mannequin model, the Macintosh or the GVL was not affected by chest compressions. The TTI using the AWS was delayed by compressions but not clinically significant. Considering the lack of experience, 2 VLs may be useful adjuncts for intubation by experienced intubators during chest compressions. Copyright © 2011 Elsevier Inc. All rights reserved.
An Emerging Learning Design for Student-Generated "iVideos"
ERIC Educational Resources Information Center
Kearney, Matthew; Jones, Glynis; Roberts, Lynn
2012-01-01
This paper describes an emerging learning design for a popular genre of learner-generated video projects: "Ideas Videos" or "iVideos." These advocacy-style videos are short, two-minute, digital videos designed "to evoke powerful experiences about educative ideas" (Wong, Mishra, Koehler & Siebenthal, 2007, p1). We…
Fiber-channel audio video standard for military and commercial aircraft product lines
NASA Astrophysics Data System (ADS)
Keller, Jack E.
2002-08-01
Fibre Channel is an emerging high-speed digital network technology that is making inroads into the avionics arena. The suitability of Fibre Channel for such applications is largely due to its flexibility in these key areas: network topologies can be configured in point-to-point, arbitrated loop or switched fabric connections; the physical layer supports either copper or fiber optic implementations with a bit error rate of less than 10^-12; multiple Classes of Service are available; multiple Upper Level Protocols are supported; and multiple high-speed data rates offer open-ended growth paths, providing speed negotiation within a single network. Current speeds supported by commercially available hardware are 1 and 2 Gbps, providing effective data rates of 100 and 200 MBps respectively. Such networks lend themselves well to the transport of digital video and audio data. This paper summarizes an ANSI standard currently in the final approval cycle of the InterNational Committee for Information Technology Standardization (INCITS). This standard defines a flexible mechanism whereby digital video, audio and ancillary data are systematically packaged for transport over a Fibre Channel network. The basic mechanism, called a container, houses audio and video content functionally grouped as elements of the container called objects. Featured in this paper is a specific container mapping called Simple Parametric Digital Video (SPDV), developed particularly to address digital video in avionics systems. SPDV provides pixel-based video with associated ancillary data, typically sourced by various sensors, to be processed and/or distributed in the cockpit for presentation via high-resolution displays. Also highlighted in this paper is a streamlined Upper Level Protocol (ULP) called Frame Header Control Procedure (FHCP), targeted at avionics systems where the functionality of a more complex ULP is not required.
ERIC Educational Resources Information Center
Bull, Glen; Bell, Lynn
2009-01-01
The shift from analog to digital video transformed the system from a unidirectional analog broadcast to a two-way conversation, resulting in the birth of participatory media. Digital video offers new opportunities for teaching science, social studies, mathematics, and English language arts. The professional education associations for each content…
Speech Recognition for A Digital Video Library.
ERIC Educational Resources Information Center
Witbrock, Michael J.; Hauptmann, Alexander G.
1998-01-01
Production of the meta-data supporting the Informedia Digital Video Library interface is automated using techniques derived from artificial intelligence research. Speech recognition and natural-language processing, information retrieval, and image analysis are applied to produce an interface that helps users locate information and navigate more…
Method and apparatus for signal compression
Carangelo, R.M.
1994-02-08
The method and apparatus of the invention effects compression of an analog electrical signal (e.g., representing an interferogram) by introducing into it a component that is a cubic function thereof, normally as a nonlinear negative signal in a feedback loop of an Op Amp. The compressed signal will most desirably be digitized and then digitally decompressed so as to produce a signal that emulates the original. 8 figures.
Method and apparatus for signal compression
Carangelo, Robert M.
1994-02-08
The method and apparatus of the invention effects compression of an analog electrical signal (e.g., representing an interferogram) by introducing into it a component that is a cubic function thereof, normally as a nonlinear negative signal in a feedback loop of an Op Amp. The compressed signal will most desirably be digitized and then digitally decompressed so as to produce a signal that emulates the original.
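The exact transfer characteristic of the patented analog circuit is not given in the abstract, so the sketch below only models the general idea: the compressed signal y is taken to satisfy x = y + a*y^3 (a cubic negative-feedback characteristic), so large swings are attenuated, and digital decompression is simply re-evaluating the cubic. The coefficient and test signal are hypothetical.

```python
import numpy as np

def compress_cubic(x, a=0.5, iters=30):
    """Analog-style compression modeled as the inverse of x = y + a*y**3,
    solved per sample with Newton's method (the map is monotone, so it converges)."""
    y = np.copy(x).astype(float)
    for _ in range(iters):
        f = y + a * y ** 3 - x
        y -= f / (1.0 + 3.0 * a * y ** 2)
    return y

def decompress_cubic(y, a=0.5):
    """Digital decompression: re-apply the cubic characteristic to emulate the original."""
    return y + a * y ** 3

# A large-swing "interferogram-like" signal: compression reduces the dynamic range,
# decompression after (ideal) digitization recovers the original.
t = np.linspace(0, 1, 1000)
x = 5.0 * np.sinc(10 * (t - 0.5))
y = compress_cubic(x)
print("peak before/after compression:", x.max(), y.max())
print("max reconstruction error:", np.abs(decompress_cubic(y) - x).max())
```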
Umeda, Akira; Iwata, Yasushi; Okada, Yasumasa; Shimada, Megumi; Baba, Akiyasu; Minatogawa, Yasuyuki; Yamada, Takayasu; Chino, Masao; Watanabe, Takafumi; Akaishi, Makoto
2004-12-01
The high cost of digital echocardiographs and the large size of data files hinder the adoption of remote diagnosis of digitized echocardiography data. We have developed a low-cost digital filing system for echocardiography data. In this system, data from a conventional analog echocardiograph are captured using a personal computer (PC) equipped with an analog-to-digital converter board. Motion picture data are promptly compressed using a moving pictures expert group (MPEG) 4 codec. The digitized data with preliminary reports obtained in a rural hospital are then sent to cardiologists at distant urban general hospitals via the internet. The cardiologists can evaluate the data using widely available movie-viewing software (Windows Media Player). The diagnostic accuracy of this double-check system was confirmed by comparison with ordinary super-VHS videotapes. We have demonstrated that digitization of echocardiography data from a conventional analog echocardiograph and MPEG 4 compression can be performed using an ordinary PC-based system, and that this system enables highly efficient digital storage and remote diagnosis at low cost.
Video on phone lines: technology and applications
NASA Astrophysics Data System (ADS)
Hsing, T. Russell
1996-03-01
Recent advances in communications signal processing and VLSI technology are fostering tremendous interest in transmitting high-speed digital data over ordinary telephone lines at bit rates substantially above the ISDN Basic Access rate (144 Kbit/s). Two new technologies, high-bit-rate digital subscriber lines and asymmetric digital subscriber lines promise transmission over most of the embedded loop plant at 1.544 Mbit/s and beyond. Stimulated by these research promises and rapid advances on video coding techniques and the standards activity, information networks around the globe are now exploring possible business opportunities of offering quality video services (such as distant learning, telemedicine, and telecommuting etc.) through this high-speed digital transport capability in the copper loop plant. Visual communications for residential customers have become more feasible than ever both technically and economically.
ERIC Educational Resources Information Center
Ranker, Jason
2017-01-01
This article presents an analysis of a digital video created by a student (age 13) in a classroom setting. Since sign functioning is a key focus in theories of meaning making as it occurs through language and through other modes, my analysis focuses on the relations between signifiers as they are inscribed in her video. This analysis explores new…
Tsapatsoulis, Nicolas; Loizou, Christos; Pattichis, Constantinos
2007-01-01
Efficient medical video transmission over 3G wireless is of great importance for fast diagnosis and on site medical staff training purposes. In this paper we present a region of interest based ultrasound video compression study which shows that significant reduction of the required, for transmission, bit rate can be achieved without altering the design of existing video codecs. Simple preprocessing of the original videos to define visually and clinically important areas is the only requirement.
A Comparative Study of Compression Video Technology.
ERIC Educational Resources Information Center
Keller, Chris A.; And Others
The purpose of this study was to provide an overview of compression devices used to increase the cost effectiveness of teleconferences by reducing satellite bandwidth requirements for the transmission of television pictures and accompanying audio signals. The main body of the report describes the comparison study of compression rates and their…
Encrypting Digital Camera with Automatic Encryption Key Deletion
NASA Technical Reports Server (NTRS)
Oakley, Ernest C. (Inventor)
2007-01-01
A digital video camera includes an image sensor capable of producing a frame of video data representing an image viewed by the sensor, an image memory for storing video data such as previously recorded frame data in a video frame location of the image memory, a read circuit for fetching the previously recorded frame data, an encryption circuit having an encryption key input connected to receive the previously recorded frame data from the read circuit as an encryption key, an un-encrypted data input connected to receive the frame of video data from the image sensor and an encrypted data output port, and a write circuit for writing a frame of encrypted video data received from the encrypted data output port of the encryption circuit to the memory and overwriting the video frame location storing the previously recorded frame data.
Real-time data compression of broadcast video signals
NASA Technical Reports Server (NTRS)
Shalkauser, Mary Jo W. (Inventor); Whyte, Wayne A., Jr. (Inventor); Barnes, Scott P. (Inventor)
1991-01-01
A non-adaptive predictor, a nonuniform quantizer, and a multi-level Huffman coder are incorporated into a differential pulse code modulation system for coding and decoding broadcast video signals in real time.
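A software sketch of such a DPCM chain is given below. It is not the patented hardware design: the predictor is a simple previous-pixel predictor, the nonuniform quantizer uses illustrative reconstruction levels (finer near zero), and the multilevel Huffman stage is omitted.

```python
import numpy as np

# Nonuniform reconstruction levels (finer near zero, coarser for large differences);
# these particular levels are illustrative, not the patented tables.
LEVELS = np.array([-96, -56, -28, -12, -4, 0, 4, 12, 28, 56, 96], dtype=float)

def quantize(diff):
    return int(np.argmin(np.abs(LEVELS - diff)))    # index of the nearest level

def dpcm_encode(line):
    """Intra-line DPCM with a non-adaptive previous-pixel predictor."""
    pred, codes = float(line[0]), []
    for px in line[1:]:
        idx = quantize(float(px) - pred)
        codes.append(idx)
        pred = float(np.clip(pred + LEVELS[idx], 0, 255))  # track the decoder's reconstruction
    return float(line[0]), codes

def dpcm_decode(first, codes):
    out, pred = [first], first
    for idx in codes:
        pred = float(np.clip(pred + LEVELS[idx], 0, 255))
        out.append(pred)
    return np.array(out)

line = np.clip(128 + 40 * np.sin(np.arange(256) / 10.0), 0, 255)
first, codes = dpcm_encode(line)
print("max reconstruction error:", np.abs(dpcm_decode(first, codes) - line).max())
```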
Desktop Video Productions. ICEM Guidelines Publications No. 6.
ERIC Educational Resources Information Center
Taufour, P. A.
Desktop video consists of integrating the processing of the video signal in a microcomputer. This definition implies that desktop video can take multiple forms such as virtual editing or digital video. Desktop video, which does not imply any particular technology, has been approached in different ways in different technical fields. It remains a…
Method and apparatus for reading meters from a video image
Lewis, Trevor J.; Ferguson, Jeffrey J.
1997-01-01
A method and system to enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower.
Video Guidance Sensor System With Integrated Rangefinding
NASA Technical Reports Server (NTRS)
Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor); Howard, Richard T. (Inventor); Roe, Fred Davis, Jr. (Inventor); Bell, Joseph L. (Inventor)
2006-01-01
A video guidance sensor system for use, e.g., in automated docking of a chase vehicle with a target vehicle. The system includes an integrated rangefinder sub-system that uses time-of-flight measurements to measure range. The rangefinder sub-system includes a pair of matched photodetectors for respectively detecting an output laser beam and a return laser beam, a buffer memory for storing the photodetector outputs, and a digitizer connected to the buffer memory and including dual amplifiers and analog-to-digital converters. A digital signal processor processes the digitized output to produce a range measurement.
47 CFR 25.281 - Transmitter identification requirements for video uplink transmissions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... video uplink transmissions. 25.281 Section 25.281 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... identification requirements for video uplink transmissions. (a) Earth-to-space transmissions carrying video..., transmissions of fixed-frequency, digitally modulated video signals with a symbol rate of 128,000/s or more from...
2012 ARPA-E Energy Innovation Summit: Profiling Sheetak: Low Cost - Solid State Cooling
Pokharna, Himanshu; Ghoshal, Uttam
2018-05-30
The third annual ARPA-E Energy Innovation Summit was held in Washington D.C. in February, 2012. The event brought together key players from across the energy ecosystem - researchers, entrepreneurs, investors, corporate executives, and government officials - to share ideas for developing and deploying the next generation of energy technologies. A few videos were selected for showing to attendees during the Summit. These "performer videos" highlight innovative ongoing research related to the main topics of the Summit's sessions. Featured in this video are David Marcus, Founder of General Compression; Eric Ingersoll, CEO of General Compression; Himanshu Pokharna, Vice President of Sheetak; and Uttam Ghoshal, President and CEO of Sheetak.
Capturing Creativity Using Digital Video
ERIC Educational Resources Information Center
Toyn, Mike
2008-01-01
This paper evaluates the use of a creative learning activity in which postgraduate student teachers were required to collaboratively make short digital videos. The purpose was for student teachers to experience and evaluate a meaningful learning activity and to consider how they might reconstruct such an activity within their own teaching practice…
Digital Video Cameras for Brainstorming and Outlining: The Process and Potential
ERIC Educational Resources Information Center
Unger, John A.; Scullion, Vicki A.
2013-01-01
This "Voices from the Field" paper presents methods and participant-exemplar data for integrating digital video cameras into the writing process across postsecondary literacy contexts. The methods and participant data are part of an ongoing action-based research project systematically designed to bring research and theory into practice…
GEONETCast Americas - Architecture
The service uses the commercial Intelsat-9 (IS-9) satellite to broadcast environmental and other observation data. Users throughout the region can pick up the broadcast using inexpensive satellite receiver stations based on Digital Video Broadcast (DVB) standards (Digital Video Broadcast - Satellite, or DVB-S).
ERIC Educational Resources Information Center
Van Horn, Royal
2001-01-01
Several years after the first audiovisual Macintosh computer appeared, most educators are still oblivious of this technology. Almost every other economic sector (including the porn industry) makes abundant use of digital and streaming video. Desktop movie production is so easy that primary grade students can do it. Tips are provided. (MLH)
Video Making, Production Pedagogies, and Educational Policy
ERIC Educational Resources Information Center
Smythe, Suzanne; Toohey, Kelleen; Dagenais, Diane
2016-01-01
The promise of "21st century learning" is that digital technologies will transform traditional learning and mobilize skills deemed necessary in an emerging digital culture. In two case studies of video making, one in a Grade 4 classroom, and one in an adult literacy setting, the authors develop the concept of "production…
Prototype system of secure VOD
NASA Astrophysics Data System (ADS)
Minemura, Harumi; Yamaguchi, Tomohisa
1997-12-01
Secure digital content delivery systems realize copyright protection and charging mechanisms, and aim at secure delivery of digital content. Encrypted content delivery and history (log) management are the means to accomplish this purpose. Our final target is to realize a video-on-demand (VOD) system that can prevent illegal usage of video data and manage user history data, achieving a secure video delivery system on the Internet or an intranet. So far, mainly targeting client-server systems connected to an enterprise LAN, we have implemented and evaluated a prototype system based on an investigation into the delivery of encrypted video contents.
Video Data Compression Study for Remote Sensors
1976-02-01
Only fragments of the scanned report are legible: the report (Ohio University; NTIS accession AD-A023 845, distributed by the U.S. Department of Commerce, National Technical Information Service) was reviewed by the information office and released to the National Technical Information Service (NTIS); a cited reference is T. S. Huang and J. W. Woods, "Picture Bandwidth Compression by Linear Transformation and Block..."
Content-based video retrieval by example video clip
NASA Astrophysics Data System (ADS)
Dimitrova, Nevenka; Abdel-Mottaleb, Mohamed
1997-01-01
This paper presents a novel approach for video retrieval from a large archive of MPEG or Motion JPEG compressed video clips. We introduce a retrieval algorithm that takes a video clip as a query and searches the database for clips with similar contents. Video clips are characterized by a sequence of representative frame signatures, which are constructed from DC coefficients and motion information (`DC+M' signatures). The similarity between two video clips is determined by using their respective signatures. This method facilitates retrieval of clips for the purpose of video editing, broadcast news retrieval, or copyright violation detection.
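The signature idea can be approximated on decoded frames as follows. This sketch does not parse DC coefficients or motion vectors from the compressed bitstream; it stands in for the DC part with per-block means, omits the motion component of the 'DC+M' signature, and scores clips by the best-aligned mean absolute signature difference.

```python
import numpy as np

def frame_signature(frame, grid=8):
    """Stand-in for per-block DC coefficients: mean of each block on a grid x grid tiling."""
    h, w = frame.shape
    bh, bw = h // grid, w // grid
    return frame[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw).mean(axis=(1, 3))

def clip_signature(frames, grid=8):
    return np.stack([frame_signature(f, grid) for f in frames])

def clip_distance(query_sig, clip_sig):
    """Best alignment of the query inside the clip, by mean absolute signature difference."""
    q, c = len(query_sig), len(clip_sig)
    dists = [np.abs(clip_sig[i:i + q] - query_sig).mean() for i in range(c - q + 1)]
    return min(dists)

# Toy archive: the query is a noisy copy of a sub-clip, so its distance should be smallest
# for the clip it was taken from (index 1 here).
rng = np.random.default_rng(0)
archive = [rng.random((20, 64, 64)) for _ in range(3)]
query = archive[1][5:12] + 0.01 * rng.normal(size=(7, 64, 64))
query_sig = clip_signature(query)
print([round(clip_distance(query_sig, clip_signature(clip)), 4) for clip in archive])
```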
Whiplash syndrome: kinematic factors influencing pain patterns.
Cusick, J F; Pintar, F A; Yoganandan, N
2001-06-01
The overall, local, and segmental kinematic responses of intact human cadaver head-neck complexes undergoing an inertia-type rear-end impact were quantified. High-speed, high-resolution digital video data of individual facet joint motions during the event were statistically evaluated. To deduce the potential for various vertebral column components to be exposed to adverse strains that could result in their participation as pain generators, and to evaluate the abnormal motions that occur during this traumatic event. The vertebral column is known to incur a nonphysiologic curvature during the application of an inertial-type rear-end impact. No previous studies, however, have quantified the local component motions (facet joint compression and sliding) that occur as a result of rear-impact loading. Intact human cadaver head-neck complexes underwent inertia-type rear-end impact with predominant moments in the sagittal plane. High-resolution digital video was used to track the motions of individual facet joints during the event. Localized angular motion changes at each vertebral segment were analyzed to quantify the abnormal curvature changes. Facet joint motions were analyzed statistically to obtain differences between anterior and posterior strains. The spine initially assumed an S-curve, with the upper spinal levels in flexion and the lower spinal levels in extension. The upper C-spine flexion occurred early in the event (approximately 60 ms) during the time the head maintained its static inertia. The lower cervical spine facet joints demonstrated statistically greater compressive motions in the dorsal aspect than in the ventral aspect, whereas the sliding anteroposterior motions were the same. The nonphysiologic kinematic responses during a whiplash impact may induce stresses in certain upper cervical neural structures or lower facet joints, resulting in possible compromise sufficient to elicit either neuropathic or nociceptive pain. These dynamic alterations of the upper level (occiput to C2) could impart potentially adverse forces to related neural structures, with subsequent development of a neuropathic pain process. The pinching of the lower facet joints may lead to potential for local tissue injury and nociceptive pain.
Comparison of compression efficiency between HEVC/H.265 and VP9 based on subjective assessments
NASA Astrophysics Data System (ADS)
Řeřábek, Martin; Ebrahimi, Touradj
2014-09-01
The increasing effort of broadcast providers to transmit UHD (Ultra High Definition) content is likely to increase demand for ultra high definition televisions (UHDTVs). To compress UHDTV content, several alternative encoding mechanisms exist. In addition to internationally recognized standards, open-access proprietary options, such as the VP9 video encoding scheme, have recently appeared and are gaining popularity. One of the main goals of these encoders is to efficiently compress video sequences beyond HDTV resolution for various scenarios, such as broadcasting or internet streaming. In this paper, a broadcast-scenario rate-distortion performance analysis and mutual comparison of one of the latest video coding standards, H.265/HEVC, with the recently released proprietary video coding scheme VP9 is presented. In addition, one of the most popular and widely deployed encoders, H.264/AVC, has been included in the evaluation to serve as a comparison baseline. The comparison is performed by means of subjective evaluations showing the actual differences between encoding algorithms in terms of perceived quality. The results indicate a general dominance of the HEVC-based encoding algorithm in comparison to the other alternatives, while VP9 and AVC show similar performance.
ERIC Educational Resources Information Center
Phung, Dan; Valetto, Giuseppe; Kaiser, Gail E.; Liu, Tiecheng; Kender, John R.
2007-01-01
The increasing popularity of online courses has highlighted the need for collaborative learning tools for student groups. In this article, we present an e-Learning architecture and adaptation model called AI2TV (Adaptive Interactive Internet Team Video), which allows groups of students to collaboratively view instructional videos in synchrony.…
Learner-Generated Digital Video: Using Ideas Videos in Teacher Education
ERIC Educational Resources Information Center
Kearney, Matthew
2013-01-01
This qualitative study investigates the efficacy of "Ideas Videos" (or "iVideos") in pre-service teacher education. It explores the experiences of student teachers and their lecturer engaging with this succinct, advocacy-style video genre designed to evoke emotions about powerful ideas in Education (Wong, Mishra, Koehler, &…
Standardized access, display, and retrieval of medical video
NASA Astrophysics Data System (ADS)
Bellaire, Gunter; Steines, Daniel; Graschew, Georgi; Thiel, Andreas; Bernarding, Johannes; Tolxdorff, Thomas; Schlag, Peter M.
1999-05-01
The system presented here enhances documentation and data-secured, second-opinion facilities by integrating video sequences into DICOM 3.0. We present an implementation for a medical video server extended by a DICOM interface. Security mechanisms conforming with DICOM are integrated to enable secure internet access. Digital video documents of diagnostic and therapeutic procedures should be examined regarding the clip length and size necessary for second opinion and manageable with today's hardware. Image sources relevant for this paper include the 3D laparoscope, 3D surgical microscope, 3D open surgery camera, synthetic video, and monoscopic endoscopes, etc. The global DICOM video concept and three special workplaces for distinct applications are described. Additionally, an approach is presented to analyze the motion of the endoscopic camera for future automatic video cutting. Digital stereoscopic video sequences (DSVS) are especially in demand for surgery; therefore DSVS are also integrated into the DICOM video concept. Results are presented describing the suitability of stereoscopic display techniques for the operating room.
Acceptable bit-rates for human face identification from CCTV imagery
NASA Astrophysics Data System (ADS)
Tsifouti, Anastasia; Triantaphillidou, Sophie; Bilissi, Efthimia; Larabi, Mohamed-Chaker
2013-01-01
The objective of this investigation is to produce recommendations for acceptable bit-rates of CCTV footage of people onboard London buses. The majority of CCTV recorders on buses use a proprietary format based on the H.264/AVC video coding standard, exploiting both spatial and temporal redundancy. Low bit-rates are favored in the CCTV industry but they compromise the image usefulness of the recorded imagery. In this context usefulness is defined by the presence of enough facial information remaining in the compressed image to allow a specialist to identify a person. The investigation includes four steps: 1) Collection of representative video footage. 2) The grouping of video scenes based on content attributes. 3) Psychophysical investigations to identify key scenes, which are most affected by compression. 4) Testing of recording systems using the key scenes and further psychophysical investigations. The results are highly dependent upon scene content. For example, very dark and very bright scenes were the most challenging to compress, requiring higher bit-rates to maintain useful information. The acceptable bit-rates are also found to be dependent upon the specific CCTV system used to compress the footage, presenting challenges in drawing conclusions about universal 'average' bit-rates.
Prediction-guided quantization for video tone mapping
NASA Astrophysics Data System (ADS)
Le Dauphin, Agnès.; Boitard, Ronan; Thoreau, Dominique; Olivier, Yannick; Francois, Edouard; LeLéannec, Fabrice
2014-09-01
Tone Mapping Operators (TMOs) compress High Dynamic Range (HDR) content to address Low Dynamic Range (LDR) displays. However, before reaching the end-user, this tone-mapped content is usually compressed for broadcasting or storage purposes. Any TMO includes a quantization step to convert floating-point values to integer ones. In this work, we propose to adapt this quantization, in the loop of an encoder, to reduce the entropy of the tone-mapped video content. Our technique provides an appropriate quantization for each mode of both the intra- and inter-prediction performed in the loop of a block-based encoder. The mode that minimizes a rate-distortion criterion uses its associated quantization to provide integer values for the rest of the encoding process. The method has been implemented in HEVC and was tested in two different scenarios: the compression of tone-mapped LDR video content (using HM 10.0) and the compression of perceptually encoded HDR content (HM 14.0). Results show, at the same PSNR, an average bit-rate reduction over all the sequences and TMOs considered of 20.3% and 27.3% for tone-mapped content and of 2.4% and 2.7% for HDR content.
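At its core, the mode decision described here is a Lagrangian rate-distortion comparison across candidate prediction modes, each carrying its own adapted quantizer. The sketch below shows only that generic selection step; the candidate objects, their hooks (predict, quantize, dequantize, rate) and the lambda value are hypothetical placeholders, not the HM implementation used in the paper.

```python
import numpy as np

LAMBDA = 0.05  # illustrative Lagrange multiplier; HEVC derives it from the QP

def rd_cost(distortion, rate, lam=LAMBDA):
    # Classic Lagrangian cost J = D + lambda * R
    return distortion + lam * rate

def choose_mode(block, candidates):
    """Pick the candidate (prediction mode plus its adapted quantizer) with the
    lowest rate-distortion cost.

    Each candidate is assumed to expose predict(block), quantize(residual),
    dequantize(levels) and rate(levels) -- hypothetical hooks standing in for
    the per-mode prediction and adapted quantization described above.
    """
    best = None
    for cand in candidates:
        residual = block - cand.predict(block)
        levels = cand.quantize(residual)
        distortion = float(np.sum((residual - cand.dequantize(levels)) ** 2))
        cost = rd_cost(distortion, cand.rate(levels))
        if best is None or cost < best[0]:
            best = (cost, cand, levels)
    return best  # (cost, winning candidate, its quantized levels)
```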
Real-time unmanned aircraft systems surveillance video mosaicking using GPU
NASA Astrophysics Data System (ADS)
Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.
2010-04-01
Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, and monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect the features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames, resulting in a seamless video mosaic. All this processing takes a great deal of resources from the CPU, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft, one from an Infrared (IR) camera and one from an Electro-Optical (EO) camera. Our results show that we can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture rate of 30 frames per second is feasible.
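The feature-matching and warping steps of this pipeline can be prototyped on the CPU with OpenCV. The sketch below, a minimal registration-and-paste loop, assumes the opencv-python package and grayscale inputs, and leaves out the blending and GPU acceleration that the paper is actually about.

```python
import cv2
import numpy as np

def register_pair(prev_gray, cur_gray):
    """Estimate the homography mapping the current frame onto the previous one.

    A CPU sketch of the SIFT -> match -> homography step described above.
    """
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(cur_gray, None)

    # Ratio-test matching of SIFT descriptors
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des2, des1, k=2)
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def warp_into_mosaic(mosaic, frame, H):
    # Warp the new frame into the mosaic's coordinate system and paste it
    # (a real mosaicker would blend the seams instead of overwriting pixels).
    warped = cv2.warpPerspective(frame, H, (mosaic.shape[1], mosaic.shape[0]))
    mask = warped > 0
    mosaic[mask] = warped[mask]
    return mosaic
```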
Digital holographic image fusion for a larger size object using compressive sensing
NASA Astrophysics Data System (ADS)
Tian, Qiuhong; Yan, Liping; Chen, Benyong; Yao, Jiabao; Zhang, Shihua
2017-05-01
Digital holographic image fusion for a larger-size object using compressive sensing is proposed. In this method, the high-frequency component of the digital hologram under the discrete wavelet transform is represented sparsely by using compressive sensing, so that the data redundancy of digital holographic recording can be reduced effectively; the low-frequency component is retained in full to ensure image quality; and multiple reconstructed images, with different clear parts corresponding to the laser spot size, are fused to obtain a high-quality reconstructed image of a larger-size object. In addition, a filter combining high-pass and low-pass filters is designed to remove the zero-order term from a digital hologram effectively. A digital holographic experimental setup based on off-axis Fresnel digital holography was constructed, and feasibility and comparative experiments were carried out. The fused image was evaluated using the Tamura texture features. The experimental results demonstrated that the proposed method can improve the processing efficiency and visual characteristics of the fused image and enlarge the size of the measured object effectively.
NASA Astrophysics Data System (ADS)
Ciaramello, Francis M.; Hemami, Sheila S.
2007-02-01
For members of the Deaf Community in the United States, current communication tools include TTY/TDD services, video relay services, and text-based communication. With the growth of cellular technology, mobile sign language conversations are becoming a possibility. Proper coding techniques must be employed to compress American Sign Language (ASL) video for low-rate transmission while maintaining the quality of the conversation. In order to evaluate these techniques, an appropriate quality metric is needed. This paper demonstrates that traditional video quality metrics, such as PSNR, fail to predict subjective intelligibility scores. By considering the unique structure of ASL video, an appropriate objective metric is developed. Face and hand segmentation is performed using skin-color detection techniques. The distortions in the face and hand regions are optimally weighted and pooled across all frames to create an objective intelligibility score for a distorted sequence. The objective intelligibility metric performs significantly better than PSNR in terms of correlation with subjective responses.
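The two ingredients named above, skin-color segmentation and region-weighted pooling of distortion, can be illustrated with a toy per-frame score. The YCrCb thresholds, the weighting factor and the plain squared-error distortion in the sketch below are assumptions for illustration; they are not the optimally weighted metric developed in the paper.

```python
import cv2
import numpy as np

# Illustrative skin range in YCrCb; these thresholds are an assumption.
SKIN_LO = np.array([0, 133, 77], dtype=np.uint8)
SKIN_HI = np.array([255, 173, 127], dtype=np.uint8)

def skin_mask(frame_bgr):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, SKIN_LO, SKIN_HI) > 0

def weighted_frame_score(ref_bgr, dist_bgr, skin_weight=4.0):
    """Mean squared error with skin (face/hand) pixels weighted more heavily.

    A toy stand-in for the intelligibility metric: distortion inside the
    segmented skin regions counts skin_weight times as much as elsewhere.
    """
    mask = skin_mask(ref_bgr)
    err = (ref_bgr.astype(np.float64) - dist_bgr.astype(np.float64)) ** 2
    err = err.mean(axis=2)
    weights = np.where(mask, skin_weight, 1.0)
    return float((weights * err).sum() / weights.sum())
```

Per-frame scores of this kind would then be pooled across the sequence before being correlated with subjective intelligibility ratings.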
Photorealistic scene presentation: virtual video camera
NASA Astrophysics Data System (ADS)
Johnson, Michael J.; Rogers, Joel Clark W.
1994-07-01
This paper presents a low cost alternative for presenting photo-realistic imagery during the final approach, which often is a peak workload phase of flight. The method capitalizes on 'a priori' information. It accesses out-the-window 'snapshots' from a mass storage device, selecting the snapshots that deliver the best match for a given aircraft position and runway scene. It then warps the snapshots to align them more closely with the current viewpoint. The individual snapshots, stored as highly compressed images, are decompressed and interpolated to produce a 'clear-day' video stream. The paper shows how this warping, when combined with other compression methods, saves considerable amounts of storage; compression factors from 1000 to 3000 were achieved. Thus, a CD-ROM today can store reference snapshots for thousands of different runways. Dynamic scene elements not present in the snapshot database can be inserted as separate symbolic or pictorial images. When underpinned by an appropriate suite of sensor technologies, the methods discussed indicate an all-weather virtual video camera is possible.
A constrained joint source/channel coder design and vector quantization of nonstationary sources
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Chen, Y. C.; Nori, S.; Araj, A.
1993-01-01
The emergence of broadband ISDN as the network for the future brings with it the promise of integration of all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period a study was conducted on the bridging of network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM networks. Two aspects of networks that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm. These are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years. Numerous compression techniques, which incorporate VQ, have been proposed. While the LBG VQ provides excellent compression, there are also several drawbacks to the use of the LBG quantizers including search complexity and memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.
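As context for the adaptive scheme described above, the baseline it extends, codebook-based vector quantization, can be sketched compactly: blocks of pixels are mapped to the nearest entry of a trained codebook and only the indices are transmitted. The code below uses k-means in place of the LBG design procedure and illustrates plain VQ, not the RISQ-based adaptive technique proposed in the report.

```python
import numpy as np
from sklearn.cluster import KMeans

def _blocks(image, block):
    # Cut the image into non-overlapping block x block vectors.
    h = image.shape[0] // block * block
    w = image.shape[1] // block * block
    return (image[:h, :w]
            .reshape(h // block, block, w // block, block)
            .transpose(0, 2, 1, 3)
            .reshape(-1, block * block))

def train_codebook(image, block=4, codebook_size=256):
    """Design an LBG-style codebook; k-means plays the role of LBG here."""
    return KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(_blocks(image, block))

def quantize(image, codebook, block=4):
    # Encoding: the index of the nearest codeword for every block is the
    # compressed representation (log2(codebook_size) bits per block).
    return codebook.predict(_blocks(image, block))
```

The rate is fixed by the codebook size, which is exactly the rate/input mismatch the report's adaptive technique is designed to avoid.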
Multimedia systems for art and culture: a case study of Brihadisvara Temple
NASA Astrophysics Data System (ADS)
Jain, Anil K.; Goel, Sanjay; Agarwal, Sachin; Mittal, Vipin; Sharma, Hariom; Mahindru, Ranjeev
1997-01-01
In India a temple is not only a structure of religious significance and celebration, but it also plays an important role in the social, administrative and cultural life of the locality. Temples have served as centers for learning Indian scriptures. Music and dance were fostered and performed in the precincts of the temples. Built at the end of the 10th century, the Brihadisvara temple signified new design methodologies. We have access to a large number of images, audio and video recordings, architectural drawings and scholarly publications of this temple. A multimedia system for this temple is being designed which is intended to be used for the following purposes: (1) to inform and enrich the general public, and (2) to assist the scholars in their research. Such a system will also preserve and archive old historical documents and images. The large database consists primarily of images which can be retrieved using keywords, but the emphasis here is largely on techniques which will allow access using image content. Besides classifying images as either long shots or close-ups, deformable template matching is used for shape-based query by image content, and digital video retrieval. Further, to exploit the non-linear accessibility of video sequences, key frames are determined to aid the domain experts in getting a quick preview of the video. Our database also has images of several old, and rare manuscripts many of which are noisy and difficult to read. We have enhanced them to make them more legible. We are also investigating the optimal trade-off between image quality and compression ratios.
Image processing for improved eye-tracking accuracy
NASA Technical Reports Server (NTRS)
Mulligan, J. B.; Watson, A. B. (Principal Investigator)
1997-01-01
Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
NASA Astrophysics Data System (ADS)
Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek
2009-02-01
Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, these works have been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images; they optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques on 3D images. Herein we report our work on a new low bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in the high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore a quad-tree concept such as Said et al.'s SPIHT [1], along with correlation of insignificant wavelet coefficients, is used to further exploit redundancy in the high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchical inter- and intra-spectral relationship among the coefficients of the 3D wavelet-decomposed image. The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also introduces a conditional "sibling" relationship that relates only the insignificant wavelet coefficients of subbands at the same level of decomposition. The insignificant quadtrees in different subbands of the high-frequency subband class are coded by a combined function to reduce redundancy. A number of experiments conducted on microscopic multispectral images have shown promising results for the proposed method over current state-of-the-art image-compression techniques.
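The sparsity claim that motivates the zerotree approach is easy to check on a single image. The sketch below, which assumes the PyWavelets package and uses an illustrative wavelet and threshold, reports the fraction of near-zero coefficients in each group of detail subbands; it only measures sparsity and does not implement the proposed 3D SPIHT coder.

```python
import numpy as np
import pywt

def subband_sparsity(image, wavelet="bior4.4", levels=3, rel_thresh=1e-2):
    """Fraction of near-zero coefficients in each group of detail subbands.

    For microscopy data the detail (high-frequency) subbands are expected to be
    overwhelmingly insignificant, which is what zerotree coders such as SPIHT
    exploit. The wavelet and threshold here are illustrative choices.
    """
    coeffs = pywt.wavedec2(image.astype(np.float64), wavelet, level=levels)
    report = {}
    # coeffs[1] holds the coarsest detail subbands, coeffs[-1] the finest.
    for i, (ch, cv, cd) in enumerate(coeffs[1:], start=1):
        band = np.concatenate([ch.ravel(), cv.ravel(), cd.ravel()])
        cutoff = rel_thresh * np.abs(band).max()
        report[f"detail level {i}"] = float(np.mean(np.abs(band) < cutoff))
    return report
```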
COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation
NASA Technical Reports Server (NTRS)
Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos
2015-01-01
The COxSwAIN project focuses on building an image and video compression scheme that can be implemented in a small or low-power satellite. To do this, we used Compressive Sensing, where the compression is performed by matrix multiplications on the satellite and reconstructed on the ground. Our paper explains our methodology and demonstrates the results of the scheme, being able to achieve high quality image compression that is robust to noise and corruption.
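The measure-on-board, reconstruct-on-the-ground split described above can be illustrated with a toy one-dimensional example: compression is a single random matrix multiplication and recovery is a sparse solver. The sketch below assumes the signal is sparse in its native basis (real imagery would need a sparsifying transform) and uses a hand-rolled orthogonal matching pursuit; it is not the COxSwAIN implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse test signal: n samples, only k of them non-zero
n, m, k = 256, 96, 8
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

# On-board step: "compression" is a single random matrix multiplication
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x                       # m measurements instead of n samples

# Ground step: greedy sparse recovery (orthogonal matching pursuit)
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    sol, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ sol

x_hat = np.zeros(n)
x_hat[support] = sol
print("max reconstruction error:", float(np.max(np.abs(x - x_hat))))
```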
Code of Federal Regulations, 2011 CFR
2011-10-01
... broadcast stations, digital broadcast stations, analog cable systems, digital cable systems, wireline video systems, wireless cable systems, Direct Broadcast Satellite (DBS) services, Satellite Digital Audio Radio...
Recovery of Images from the AMOS ELSI Data for STS-33
1990-04-19
...were recorded on tape in both video and digital formats. The ELSI was used on three passes, orbits 21, 37, and 67, on 24, 25, and 27 November. These data... November, in video format, were hand-carried to the Geophysics Laboratory (GL) at the beginning of December 1989; the classified data, in digital format, were... are also sampled and reconverted to analog form, in a standard video format, for display on a video monitor and recording on videotape. 3. TAPE FORMAT
Explosive Transient Camera (ETC) Program
1991-10-01
[Block-diagram residue: CCD clocking unit (voltages, video out, analog-to-digital processor, command/data/status lines to the instrument)] ...and transmits digital video and status information to the "downstairs" system. The clocking unit and regulator/driver board are the only CCD-dependent...
2016-06-25
The equipment used in this procedure includes: Ann Arbor distortion tester with 50-line grating reticule, IQeye 720 digital video camera with 12...and import them into MATLAB. In order to digitally capture images of the distortion in an optical sample, an IQeye 720 video camera with a 12... video camera and Ann Arbor distortion tester. Figure 8. Computer interface for capturing images seen by IQeye 720 camera. Once an image was
(M-CAT) Minor Caliber Weapons Trainer MK-19, 40mm Machine Gun
1989-07-24
microprocessor chip with an Intel 387 math coprocessor. The Nova 620 is a digital time base corrector. It is used to time-base correct the video data...the circuit. After filtering, the horizontal and vertical position signals are converted to digital values by the Data Translation (DTX-311) analog...from the computer. Each frame of the video disk is individually digitized as to target size, location, and range. The gun's azimuth and elevation are
Video Allows Young Scientists New Ways to Be Seen
ERIC Educational Resources Information Center
Park, John C.
2009-01-01
Science is frequently a visual endeavor, dependent on direct or indirect observations. Teachers have long employed motion pictures in the science classroom to allow students to make indirect observations, but the capabilities of digital video offer opportunities to engage students in active science learning. Not only can watching a digital video…
Bringing Digital Storytelling to the Elementary Classroom: Video Production for Preservice Teachers
ERIC Educational Resources Information Center
Shelton, Catharyn C.; Archambault, Leanna M.; Hale, Annie E.
2017-01-01
This study presents and evaluates a 7-week learning experience embedded in a required content-area course in a teacher preparation program, in which 31 preservice elementary teachers produced digital storytelling videos and considered how this approach may apply to their future classrooms. Qualitative and quantitative data from preservice…
ERIC Educational Resources Information Center
Swan, Kathy; Hofer, Mark
2013-01-01
Over the last several decades, social studies educators' interest and emphasis on integrating technology into teaching has increased significantly. One promising area of inquiry focuses on the benefits of student-produced digital video. A number of researchers assert that student-produced digital videos provide a variety of benefits, including…
Digital Video: The Impact on Children's Learning Experiences in Primary Physical Education
ERIC Educational Resources Information Center
O'Loughlin, Joe; Chroinin, Deirdre Ni; O'Grady, David
2013-01-01
Technology can support teaching, learning and assessment in physical education. The purpose of this study was to examine children's perspectives and experiences of using digital video in primary physical education. The impact on motivation, feedback, self-assessment and learning was examined. Twenty-three children aged 9-10 years participated in a…
Digital Video: Watch Me Do What I Say!
ERIC Educational Resources Information Center
Capraro, Robert M.; Capraro, Mary Margaret; Lamb, Charles E.
This paper establishes a use for digital video in developing preservice teacher metacognition about the teaching process using a lesson plan-rating sheet as a guide. A lesson plan was developed to meet the specific needs of the methods instructors in a professional development program at a large public institution. The categories listed on the…
Composing with New Technology: Teacher Reflections on Learning Digital Video
ERIC Educational Resources Information Center
Bruce, David L.; Chiu, Ming Ming
2015-01-01
This study explores teachers' reflections on their learning to compose with new technologies in the context of teacher education and/or teacher professional development. English language arts (ELA) teachers (n = 240) in 15 courses learned to use digital video (DV), completed at least one DV group project, and responded to open-ended survey…
Using a Digital Video Camera to Study Motion
ERIC Educational Resources Information Center
Abisdris, Gil; Phaneuf, Alain
2007-01-01
To illustrate how a digital video camera can be used to analyze various types of motion, this simple activity analyzes the motion and measures the acceleration due to gravity of a basketball in free fall. Although many excellent commercially available data loggers and software can accomplish this task, this activity requires almost no financial…
Re-Articulating the Mission and Work of the Writing Program with Digital Video
ERIC Educational Resources Information Center
Kopp, Drew; Stevens, Sharon McKenzie
2010-01-01
In this webtext, we discuss one powerful way that writing program administrators (WPAs) can start to reshape their basic rhetorical situation, potentially shifting the underlying premises that external audiences bring to discussions about writing instruction. We argue that digital video, when used strategically, is a particularly valuable medium…
Digitizing and Preserving Law School Recordings: A Duke Law Case Study
ERIC Educational Resources Information Center
White, Hollie; Bordo, Miguel; Chen, Sean
2015-01-01
Written as a case study, this article outlines Duke Law School Information Services' video digitization, preservation, and access initiative. This article begins with a discussion of the case study environment and the cross-departmental evaluation of in-house video production and processing workflows. The in-house preservation reformatting process…
Enhancing Proficiency Level Using Digital Video
ERIC Educational Resources Information Center
Fujioka-Ito, Noriko
2009-01-01
This article reports a case study where the data was collected at one university in the United States. It shows the benefits of using digital videos in intermediate-level Japanese language course curriculum so that learners can develop a higher level of proficiency. Since advanced-level speakers, according to the American Council on the Teaching…
NASA Technical Reports Server (NTRS)
Gilliland, M. G.; Rougelot, R. S.; Schumaker, R. A.
1966-01-01
The video signal processor uses special-purpose integrated circuits with nonsaturating current-mode switching to accept texture and color information from a digital computer in a visual spaceflight simulator and to combine these with analog information concerning fading, for display on a color CRT.
ERIC Educational Resources Information Center
Chen, Hsin-Liang; Choi, Gilok
2005-01-01
This study investigates socio-technical aspects of digital video libraries based on college students' learning experiences and perspectives. Forty-one students in biology classes were studied through a survey and individual interviews. Findings are presented by the students' knowledge of computer technology, experiences with AV materials, and…
A Software Defined Integrated T1 Digital Network for Voice, Data and Video.
ERIC Educational Resources Information Center
Hill, James R.
The Dallas County Community College District developed and implemented a strategic plan for communications that utilizes a county-wide integrated network to carry voice, data, and video information to nine locations within the district. The network, which was installed and operational by March 1987, utilizes microwave, fiber optics, digital cross…
The Pedagogy of the Observed: How Does Surveillance Technology Influence Dance Studio Education?
ERIC Educational Resources Information Center
Berg, Tanya
2015-01-01
A local trend in commercial dance studio education is the implementation of real-time digital video surveillance. This case study explores how digital video cameras in the dance studio environment affect asymmetrical power relationships already present in the commercial studio setting, as well as how surveillance impacts feminist pedagogical…
ERIC Educational Resources Information Center
Burwell, Catherine
2013-01-01
Appropriation, transformation and remix are increasingly recognized as significant aspects of digital literacy. This article considers how one form of digital remix--the video remix--might be used in classrooms to introduce critical conversations about representation, appropriation, creativity and copyright. The first half of the article explores…
Method and apparatus for reading meters from a video image
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, T.J.; Ferguson, J.J.
1995-12-31
A method and system enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower.
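As a concrete illustration of the calibrate-then-analyze step, the sketch below estimates a reading from a needle-type dial inside a calibrated region of a digitized frame using OpenCV edge and line detection. The pivot position, angle-to-value calibration and thresholds are all hypothetical; the patent record above does not specify how the indicator value is actually extracted.

```python
import cv2
import numpy as np

def read_needle_gauge(gray_roi, center, angle_min, angle_max, val_min, val_max):
    """Estimate a dial reading from a calibrated region of a video frame.

    center is the needle pivot; (angle_min, val_min) and (angle_max, val_max)
    come from a one-off calibration of the dial (assumed known).
    """
    edges = cv2.Canny(gray_roi, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                            minLineLength=20, maxLineGap=5)
    if lines is None:
        return None

    # Take the detected segment whose endpoint lies farthest from the pivot.
    cx, cy = center
    def tip_dist(l):
        x1, y1, x2, y2 = l[0]
        return max(np.hypot(x1 - cx, y1 - cy), np.hypot(x2 - cx, y2 - cy))
    x1, y1, x2, y2 = max(lines, key=tip_dist)[0]
    tip = (x1, y1) if np.hypot(x1 - cx, y1 - cy) > np.hypot(x2 - cx, y2 - cy) else (x2, y2)

    # Map the needle angle linearly onto the calibrated value range.
    angle = np.degrees(np.arctan2(tip[1] - cy, tip[0] - cx))
    frac = (angle - angle_min) / (angle_max - angle_min)
    return val_min + frac * (val_max - val_min)
```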
Method and apparatus for reading meters from a video image
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, T.J.; Ferguson, J.J.
1997-09-30
A method and system to enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower. 1 fig.
Transfer Error and Correction Approach in Mobile Network
NASA Astrophysics Data System (ADS)
Xiao-kai, Wu; Yong-jin, Shi; Da-jin, Chen; Bing-he, Ma; Qi-li, Zhou
With the development of information technology and social progress, the demand for information has become increasingly diverse: people want to be able to communicate easily, quickly and flexibly, wherever and whenever, via voice, data, images and video. Because visual information is direct and vivid, image and video transmission has received widespread attention. Although third-generation mobile communication systems and IP networks have emerged and developed rapidly, making video communication one of the main services of wireless communications, real wireless and IP channels introduce errors, such as errors caused by multipath fading on wireless channels and packet loss on IP networks. Because of channel bandwidth limitations, video data are compression coded before transmission, and the compressed data are very sensitive to channel errors, which can cause a serious decline in image quality.
Resolution enhancement of low-quality videos using a high-resolution frame
NASA Astrophysics Data System (ADS)
Pham, Tuan Q.; van Vliet, Lucas J.; Schutte, Klamer
2006-01-01
This paper proposes an example-based Super-Resolution (SR) algorithm for compressed videos in the Discrete Cosine Transform (DCT) domain. Input to the system is a Low-Resolution (LR) compressed video together with a High-Resolution (HR) still image of similar content. Using a training set of corresponding LR-HR pairs of image patches from the HR still image, high-frequency details are transferred from the HR source to the LR video. The DCT-domain algorithm is much faster than example-based SR in the spatial domain [6] because of a reduction in search dimensionality, which is a direct result of the compact and uncorrelated DCT representation. Fast searching techniques like tree-structured vector quantization [16] and coherence search [1] are also key to the improved efficiency. Preliminary results on an MJPEG sequence show promising results for the DCT-domain SR synthesis approach.
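The basic dictionary-and-lookup idea behind example-based SR can be sketched in a few lines: build LR-HR patch pairs from the HR still, describe each LR patch by a compact DCT feature, and copy detail from the best-matching training patch. The patch size, feature size, decimation model and brute-force search below are illustrative simplifications, not the fast search structures used in the paper.

```python
import numpy as np
from scipy.fft import dctn

PATCH = 8  # LR patch size; the matching feature is the patch's low-frequency DCT block

def dct_feature(patch, k=4):
    # Compact, decorrelated descriptor: the k x k lowest-frequency DCT coefficients
    return dctn(patch.astype(np.float64), norm="ortho")[:k, :k].ravel()

def build_dictionary(hr_image, scale=2):
    """Build LR-HR training patch pairs from the HR still image.

    Plain decimation stands in for whatever degradation the LR video suffered;
    the real system would derive the LR training patches from the codec itself.
    """
    lr = hr_image[::scale, ::scale]
    feats, hr_patches = [], []
    for y in range(0, lr.shape[0] - PATCH + 1, PATCH):
        for x in range(0, lr.shape[1] - PATCH + 1, PATCH):
            feats.append(dct_feature(lr[y:y + PATCH, x:x + PATCH]))
            hr_patches.append(hr_image[scale * y:scale * (y + PATCH),
                                       scale * x:scale * (x + PATCH)])
    return np.array(feats), hr_patches

def best_match(lr_patch, feats, hr_patches):
    # Brute-force nearest neighbour in DCT-feature space; the paper accelerates
    # this step with tree-structured VQ and coherence search.
    d = np.linalg.norm(feats - dct_feature(lr_patch), axis=1)
    return hr_patches[int(np.argmin(d))]
```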
van Leer, Eva; Connor, Nadine P.
2012-01-01
Summary Objectives/Hypotheses: There are many documented barriers to successful adherence to voice therapy. However, methods for facilitating adherence are not well understood. The purpose of this study was to determine whether patient adherence could be improved by providing patients with practice support between sessions using mobile treatment videos. Methods: Thirteen voice therapy participants were provided with portable media players containing videos of voice exercises exemplified by their therapists and themselves. A randomized crossover design with two conditions was used: (1) standard-of-care voice therapy, in which participants were provided with written homework descriptions; and (2) video-enhanced voice therapy, in which participants received a portable digital media player with clinician and self-videos. The duration of each condition was 1 week. Results: Practice of voice exercises was significantly greater in the video-enhanced voice therapy condition than in the standard-of-care "written" condition (P < 0.05). Three aspects of participant motivation for practice (overall commitment to practice, importance of practice, and confidence in the ability to practice) were also significantly greater after the video-enhanced condition than after the standard-of-care condition. Conclusion: These results support the use of video examples and portable digital media players in voice therapy for individuals who are comfortable using such technology. PMID:21840169
Professional Acceptance Of Electronic Images In Radiologic Practice
NASA Astrophysics Data System (ADS)
Gitlin, Joseph N.; Curtis, David J.; Kerlin, Barbara D.; Olmsted, William W.
1983-05-01
During the past four years, a large number of radiographic images have been interpreted in both film and video modes in an effort to determine the utility of digital/analogue systems in general practice. With the cooperation of the Department of Defense, the MITRE Corporation, and several university-based radiology departments, the Public Health Service has participated in laboratory experiments and a teleradiology field trial to meet this objective. During the field trial, 30 radiologists participated in the interpretation of more than 4,000 diagnostic x-ray examinations that were performed at distant clinics, digitized, and transmitted to a medical center for interpretation on video monitors. As part of the evaluation, all of the participating radiologists and the attending physicians at the clinics were queried regarding the teleradiology system, particularly with respect to the diagnostic quality of the electronic images. The original films for each of the 4,000 examinations were read independently, and the findings and impressions from each mode were compared to identify discrepancies. In addition, a sample of 530 cases was reviewed and interpreted by a consensus panel to measure the accuracy of findings and impressions of both film and video readings. The sample has been retained in an automated archive for future study at the National Center for Devices and Radiological Health facilities in Rockville, Maryland. The studies include a comparison of diagnostic findings and impressions from 1024 x 1024 matrices with those obtained from the 512 x 512 format used in the field trial. The archive also provides a database for determining the effect of data compression techniques on diagnostic interpretations and establishing the utility of image processing algorithms. The paper will include an analysis of the final results of the field trial and preliminary findings from the ongoing studies using the archive of cases at the National Center for Devices and Radiological Health. This paper was not available at the time of printing of the Proceedings.
NASA Technical Reports Server (NTRS)
1972-01-01
The assembly drawings of the receiver unit are presented for the data compression/error correction digital test system. Equipment specifications are given for the various receiver parts, including the TV input buffer register, delta demodulator, TV sync generator, memory devices, and data storage devices.
A database for assessment of effect of lossy compression on digital mammograms
NASA Astrophysics Data System (ADS)
Wang, Jiheng; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria
2018-03-01
With widespread use of screening digital mammography, efficient storage of the vast amounts of data has become a challenge. While lossless image compression causes no risk to the interpretation of the data, it does not allow for high compression rates. Lossy compression and the associated higher compression ratios are therefore more desirable. The U.S. Food and Drug Administration (FDA) currently interprets the Mammography Quality Standards Act as prohibiting lossy compression of digital mammograms for primary image interpretation, image retention, or transfer to the patient or her designated recipient. Previous work has used reader studies to determine proper usage criteria for evaluating lossy image compression in mammography, and utilized different measures and metrics to characterize medical image quality. The drawback of such studies is that they rely on a threshold on compression ratio as the fundamental criterion for preserving the quality of images. However, compression ratio is not a useful indicator of image quality. On the other hand, many objective image quality metrics (IQMs) have shown excellent performance for natural image content for consumer electronic applications. In this paper, we create a new synthetic mammogram database with several unique features. We compare and characterize the impact of image compression on several clinically relevant image attributes such as perceived contrast and mass appearance for different kinds of masses. We plan to use this database to develop a new objective IQM for measuring the quality of compressed mammographic images to help determine the allowed maximum compression for different kinds of breasts and masses in terms of visual and diagnostic quality.
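Since the abstract argues that compression ratio is a poor stand-in for quality, an objective image quality metric is what would be computed instead. The sketch below reports two standard full-reference IQMs with scikit-image for a grayscale reference/compressed pair; it is a generic illustration, not the new mammography-specific IQM the authors plan to develop.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def objective_iqms(reference, compressed):
    """Two standard full-reference IQMs for a grayscale reference/compressed pair.

    Illustrates why compression ratio alone is a poor quality criterion: two
    images compressed at the same ratio can score very differently here.
    """
    ref = reference.astype(np.float64)
    test = compressed.astype(np.float64)
    rng_ = ref.max() - ref.min()
    return {
        "psnr_db": peak_signal_noise_ratio(ref, test, data_range=rng_),
        "ssim": structural_similarity(ref, test, data_range=rng_),
    }
```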
Hamrick, Jennifer L; Hamrick, Justin T; Lee, Jennifer K; Lee, Benjamin H; Koehler, Raymond C; Shaffner, Donald H
2014-04-14
End-tidal carbon dioxide (ETCO2) correlates with systemic blood flow and resuscitation rate during cardiopulmonary resuscitation (CPR) and may potentially direct chest compression performance. We compared ETCO2-directed chest compressions with chest compressions optimized to pediatric basic life support guidelines in an infant swine model to determine the effect on rate of return of spontaneous circulation (ROSC). Forty 2-kg piglets underwent general anesthesia, tracheostomy, placement of vascular catheters, ventricular fibrillation, and 90 seconds of no-flow before receiving 10 or 12 minutes of pediatric basic life support. In the optimized group, chest compressions were optimized by marker, video, and verbal feedback to obtain American Heart Association-recommended depth and rate. In the ETCO2-directed group, compression depth, rate, and hand position were modified to obtain a maximal ETCO2 without video or verbal feedback. After the interval of pediatric basic life support, external defibrillation and intravenous epinephrine were administered for another 10 minutes of CPR or until ROSC. Mean ETCO2 at 10 minutes of CPR was 22.7±7.8 mm Hg in the optimized group (n=20) and 28.5±7.0 mm Hg in the ETCO2-directed group (n=20; P=0.02). Despite higher ETCO2 and mean arterial pressure in the latter group, ROSC rates were similar: 13 of 20 (65%; optimized) and 14 of 20 (70%; ETCO2 directed). The best predictor of ROSC was systemic perfusion pressure. Defibrillation attempts, epinephrine doses required, and CPR-related injuries were similar between groups. The use of ETCO2-directed chest compressions is a novel guided approach to resuscitation that can be as effective as standard CPR optimized with marker, video, and verbal feedback.
JPEG and wavelet compression of ophthalmic images
NASA Astrophysics Data System (ADS)
Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.
1999-05-01
This study was designed to determine the degree and method of digital image compression that produces ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed to five different levels by JPEG and Wavelet methods. Image quality was assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, Wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after Wavelet compression than after JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent of their original size for JPEG and 1.7 percent for Wavelet compression before fine detail was lost, or before image quality was too poor to make a reliable diagnosis.
López, Carlos; Lejeune, Marylène; Escrivà, Patricia; Bosch, Ramón; Salvadó, Maria Teresa; Pons, Lluis E.; Baucells, Jordi; Cugat, Xavier; Álvaro, Tomás; Jaén, Joaquín
2008-01-01
This study investigates the effects of digital image compression on automatic quantification of immunohistochemical nuclear markers. We examined 188 images with a previously validated computer-assisted analysis system. A first group was composed of 47 images captured in TIFF format, and the other three groups contained the same images converted from TIFF to JPEG format with 3×, 23× and 46× compression. Counts from the TIFF-format images were compared with those from the other three groups. Overall, differences in the counts increased with the degree of compression. Low-complexity images (≤100 cells/field, without clusters or with small-area clusters) showed small differences (<5 cells/field in 95–100% of cases) and high-complexity images showed substantial differences (<35–50 cells/field in 95–100% of cases). Compression does not compromise the accuracy of immunohistochemical nuclear marker counts obtained by computer-assisted analysis systems for digital images with low complexity and could be an efficient method for storing these images. PMID:18755997
Design of a digital compression technique for shuttle television
NASA Technical Reports Server (NTRS)
Habibi, A.; Fultz, G.
1976-01-01
The performance and hardware complexity of data compression algorithms applicable to color television signals were studied to assess the feasibility of digital compression techniques for shuttle communications applications. For return-link communications, it is shown that a nonadaptive two-dimensional DPCM technique compresses the bandwidth of field-sequential color TV to about 13 Mbps and requires less than 60 watts of secondary power. For forward-link communications, a facsimile coding technique is recommended which provides high-resolution slow-scan television on a 144 kbps channel. The onboard decoder requires about 19 watts of secondary power.
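DPCM itself is compact enough to sketch: each sample is predicted from already-reconstructed neighbors and only the quantized prediction error is transmitted. The toy version below uses a previous-pixel predictor and a uniform quantizer on a single image row, whereas the study above used a two-dimensional predictor; the step size and start value are illustrative.

```python
import numpy as np

def dpcm_encode(row, step=8):
    """Encode one image row: predict each pixel from the reconstructed previous
    pixel, quantize the prediction error, and emit the quantizer indices."""
    indices = np.empty(len(row), dtype=np.int32)
    prev = 128                          # fixed predictor start value (assumption)
    for i, x in enumerate(row.astype(np.int32)):
        q = int(np.round((x - prev) / step))
        indices[i] = q
        prev = int(np.clip(prev + q * step, 0, 255))   # decoder-matched reconstruction
    return indices

def dpcm_decode(indices, step=8):
    out = np.empty(len(indices), dtype=np.uint8)
    prev = 128
    for i, q in enumerate(indices):
        prev = int(np.clip(prev + int(q) * step, 0, 255))
        out[i] = prev
    return out

row = np.random.default_rng(2).integers(0, 256, size=64)
assert np.max(np.abs(dpcm_decode(dpcm_encode(row)).astype(int) - row)) <= 4  # half the step size
```

Entropy coding of the indices (the multilevel Huffman stage of such systems) would follow the quantizer and is omitted here.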
Astrometric and Photometric Analysis of the September 2008 ATV-1 Re-Entry Event
NASA Technical Reports Server (NTRS)
Mulrooney, Mark K.; Barker, Edwin S.; Maley, Paul D.; Beaulieu, Kevin R.; Stokely, Christopher L.
2008-01-01
NASA utilized image-intensified video cameras for ATV data acquisition from a jet flying at 12.8 km. Afterwards the video was digitized and then analyzed with a modified commercial software package, Image Systems TrackEye. Astrometric results were limited by saturation, plate scale, and the imposed linear plate solution based on field reference stars. Time-dependent fragment angular trajectories, velocities, accelerations, and luminosities were derived in each video segment. It was evident that individual fragments behave differently. Photometric accuracy was insufficient to confidently assess correlations between luminosity and fragment spatial behavior (velocity, deceleration). Use of high-resolution digital video cameras in the future should remedy this shortcoming.
An inexpensive digital tape recorder suitable for neurophysiological signals.
Lamb, T D
1985-10-01
Modifications are described which convert an inexpensive 'Digital Audio Processor' (Sony PCM-701ES), together with a video cassette recorder, into a high performance digital tape recorder, with two analog channels of 16 bit resolution and DC-20 kHz bandwidth. A further modification is described which optionally provides four additional 1-bit digital channels by sacrificing the least significant four bits of one analog channel. If required two additional high quality analog channels may be obtained by use of one of the new video cassette recorders (such as the Sony SL-HF100) which incorporate a pair of FM tracks.
Pre-processing SAR image stream to facilitate compression for transport on bandwidth-limited-link
Rush, Bobby G.; Riley, Robert
2015-09-29
Pre-processing is applied to a raw VideoSAR (or similar near-video rate) product to transform the image frame sequence into a product that resembles more closely the type of product for which conventional video codecs are designed, while sufficiently maintaining utility and visual quality of the product delivered by the codec.
Study of efficient video compression algorithms for space shuttle applications
NASA Technical Reports Server (NTRS)
Poo, Z.
1975-01-01
Results are presented of a study on video data compression techniques applicable to space flight communication. The study is directed towards monochrome (black and white) picture communication, with special emphasis on the feasibility of hardware implementation. The primary factors for such a communication system in space flight applications are picture quality, system reliability, power consumption, and hardware weight. In terms of hardware implementation, these are directly related to hardware complexity, effectiveness of the hardware algorithm, immunity of the source code to channel noise, and data transmission rate (or transmission bandwidth). A system is recommended, and its hardware requirements are summarized. Simulations for the study were performed on the improved LIM video controller, which is computer-controlled by the META-4 CPU.
2012 ARPA-E Energy Innovation Summit: Profiling Sheetak: Low Cost - Solid State Cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pokharna, Himanshu; Ghoshal, Uttam
The third annual ARPA-E Energy Innovation Summit was held in Washington D.C. in February, 2012. The event brought together key players from across the energy ecosystem - researchers, entrepreneurs, investors, corporate executives, and government officials - to share ideas for developing and deploying the next generation of energy technologies. A few videos were selected for showing to attendees during the Summit. These "performer videos" highlight ongoing innovative research related to the main topics of the Summit's sessions. Featured in this video are David Marcus, Founder of General Compression; Eric Ingersoll, CEO of General Compression; Himanshu Pokharna, Vice President of Sheetak; and Uttam Ghoshal, President and CEO of Sheetak.
Fast depth decision for HEVC inter prediction based on spatial and temporal correlation
NASA Astrophysics Data System (ADS)
Chen, Gaoxing; Liu, Zhenyu; Ikenaga, Takeshi
2016-07-01
High Efficiency Video Coding (HEVC) is a video compression standard that outperforms its predecessor H.264/AVC by roughly doubling the compression efficiency. To enhance compression accuracy, partition sizes in HEVC range from 4x4 to 64x64. However, the manifold partition sizes dramatically increase the encoding complexity. This paper proposes a fast depth decision based on spatial and temporal correlation. The spatial correlation uses the splitting information of neighboring coding tree units (CTUs), and the temporal correlation uses the CTU indicated by the motion vector predictor in inter prediction, to determine the maximum depth of each CTU. Experimental results show that the proposed method saves about 29.1% of the original processing time with a 0.9% BD-bitrate increase on average.
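The idea can be sketched as a simple cap on the depth search derived from already-coded neighbors. The depth values, the +1 margin and the fallback below are illustrative placeholders, not the exact decision rule of the paper.

```python
def predicted_max_depth(left_depth, above_depth, colocated_depth, default=3):
    """Cap the CU depth search using neighbouring CTU depths.

    A schematic version of the spatial/temporal correlation idea: if the left,
    above and temporally co-located CTUs were all coded shallowly, the deepest
    partitions are unlikely to win and can be skipped.
    """
    neighbours = [d for d in (left_depth, above_depth, colocated_depth) if d is not None]
    if not neighbours:
        return default          # no context available: search all depths 0..3
    return min(default, max(neighbours) + 1)

# Example: neighbours coded at depths 1, 1 and 0 -> search only depths 0..2
print(predicted_max_depth(1, 1, 0))
```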
Performance of customized DCT quantization tables on scientific data
NASA Technical Reports Server (NTRS)
Ratnakar, Viresh; Livny, Miron
1994-01-01
We show that it is desirable to use data-specific or customized quantization tables for scaling the spatial-frequency coefficients obtained using the Discrete Cosine Transform (DCT). The DCT is widely used for image and video compression (MP89, PM93), but applications typically use default quantization matrices. Using actual scientific data gathered from diverse sources such as spacecraft and electron microscopes, we show that the default compression/quality tradeoffs can be significantly improved upon by using customized tables. We also show that significant improvements are possible for the standard test images Lena and Baboon. This work is part of an effort to develop a practical scheme for optimizing quantization matrices for any given image or video stream, under any given quality or compression constraints.
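The mechanism being tuned is the element-wise scaling of 8x8 DCT coefficients by a quantization table. The sketch below shows that step in isolation with a flat, purely illustrative table; a customized table of the kind studied here would simply be substituted for it. The optimization procedure itself is not shown.

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block, qtable):
    """Forward 8x8 DCT followed by element-wise division by a quantization
    table, as in JPEG/MPEG-style coders; a customized qtable replaces the
    default one."""
    coeffs = dctn(block.astype(np.float64) - 128.0, norm="ortho")
    return np.round(coeffs / qtable).astype(np.int32)

def dequantize_block(levels, qtable):
    return idctn(levels * qtable, norm="ortho") + 128.0

# Flat, illustrative table; a data-specific table would weight each frequency
# according to the statistics of the image or video being coded.
custom_qtable = np.full((8, 8), 16.0)
block = np.random.default_rng(1).integers(0, 256, size=(8, 8))
recon = dequantize_block(quantize_block(block, custom_qtable), custom_qtable)
print("max abs reconstruction error:", float(np.abs(recon - block).max()))
```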
Jersey number detection in sports video for athlete identification
NASA Astrophysics Data System (ADS)
Ye, Qixiang; Huang, Qingming; Jiang, Shuqiang; Liu, Yang; Gao, Wen
2005-07-01
Athlete identification is important for sports video content analysis since users often care about video clips featuring their preferred athletes. In this paper, we propose a method for athlete identification that combines segmentation, tracking and recognition procedures into a coarse-to-fine scheme for jersey number (the digit characters on a sport shirt) detection. First, image segmentation is employed to separate the jersey number regions from their background, and size and pipe-like attributes of digit characters are used to filter out candidates. Then, a K-NN (K nearest neighbor) classifier is employed to classify a candidate as a digit in "0"-"9" or as negative. In the recognition procedure, we use Zernike moment features, which are invariant to rotation and scale, for digit shape recognition. Synthetic training samples with different fonts are used to represent the pattern of digit characters under non-rigid deformation. Once a character candidate is detected, an SSD (smallest square distance)-based tracking procedure is started. The recognition procedure is performed every several frames in the tracking process. After tracking tens of frames, the overall recognition results are combined by a voting procedure to determine whether a candidate is a true jersey number. Experiments on several types of sports video show encouraging results.
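The final voting step can be sketched as a simple fusion of per-frame classifier outputs over one tracked candidate. The thresholds below are illustrative assumptions, not the acceptance criteria used in the paper.

```python
from collections import Counter

def vote_jersey_number(per_frame_labels, min_votes=10, min_share=0.6):
    """Fuse per-frame classifier outputs for one tracked candidate.

    per_frame_labels holds one label per analysed frame ("0"-"9" digits or
    None for the negative class). The candidate is accepted only if a single
    digit dominates the track.
    """
    votes = Counter(lbl for lbl in per_frame_labels if lbl is not None)
    if not votes:
        return None
    digit, count = votes.most_common(1)[0]
    if count >= min_votes and count / len(per_frame_labels) >= min_share:
        return digit
    return None

# e.g. a 30-frame track where "7" is recognised in 21 frames
print(vote_jersey_number(["7"] * 21 + [None] * 6 + ["1"] * 3))
```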
O'Mara, Ben
2013-09-01
Participatory processes are effective for digital video production that promotes health and wellbeing with communities from diverse cultural and linguistic backgrounds, including migrants and refugees. Social media platforms YouTube, Vimeo, Flickr and others demonstrate potential for extending and enhancing this production approach. However, differences within and between communities in terms of their quality of participation online suggest that social media risk becoming exclusive online environments and a barrier to health and wellbeing promotion. This article examines the literature and recent research and practice in Australia to identify opportunities and challenges when using social media with communities from diverse cultural and linguistic backgrounds. It proposes a hybrid approach for digital video production that integrates 'online' and 'offline' participation and engages with the differences between migrants and refugees to support more inclusive health and wellbeing promotion using digital technology.
Realization and optimization of AES algorithm on the TMS320DM6446 based on DaVinci technology
NASA Astrophysics Data System (ADS)
Jia, Wen-bin; Xiao, Fu-hai
2013-03-01
Applying the AES algorithm in a digital cinema system prevents video data from being illegally copied or maliciously tampered with, addressing its security problems. At the same time, to meet the requirements for real-time and transparent encryption of high-speed audio and video data streams in the information security field, this paper, through an in-depth analysis of the AES algorithm and based on the TMS320DM6446 hardware platform with the DaVinci software framework, proposes a specific realization of the AES algorithm in a digital video system together with optimization solutions. The test results show that digital movies encrypted with AES-128 cannot be played normally, which ensures the security of the digital movies. A comparison of the performance of the AES-128 algorithm before and after optimization verifies the correctness and validity of the improved algorithm.
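For reference, transparent stream encryption with AES-128 can be sketched on a general-purpose machine with the Python cryptography package. The CTR mode, the 188-byte chunk size and the key handling below are generic illustrative choices, not the DaVinci/TMS320DM6446 implementation or its optimizations described above.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def make_stream_cipher(key: bytes):
    """AES-128 in CTR mode, a common choice for transparently encrypting a
    high-rate audio/video byte stream without changing its length."""
    nonce = os.urandom(16)
    return Cipher(algorithms.AES(key), modes.CTR(nonce)), nonce

key = os.urandom(16)                      # 128-bit key
cipher, nonce = make_stream_cipher(key)

encryptor = cipher.encryptor()
plaintext_chunk = b"\x00" * 188           # e.g. one transport-packet-sized payload
ciphertext = encryptor.update(plaintext_chunk) + encryptor.finalize()

# The receiver rebuilds the same cipher from the shared key and nonce.
decryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == plaintext_chunk
```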
Landes, Constantin A; Weichert, Frank; Geis, Philipp; Wernstedt, Katrin; Wilde, Anja; Fritsch, Helga; Wagner, Mathias
2005-08-01
This study analyses tissue-plastinated vs. celloidin-embedded large serial sections, their inherent artefacts and their suitability for common video, analog or digital photographic on-screen reproduction. Subsequent virtual 3D microanatomical reconstruction will increase our knowledge of normal and pathological microanatomy for cleft-lip-palate (clp) reconstructive surgery. Of 18 fetal (six clp, 12 control) specimens, six randomized specimens (two clp) were Biodur E12-plastinated, sawn, burnished 90 µm thick transversely (five) or frontally (one), stained with azure II/methylene blue, and counterstained with basic fuchsin (TP-AMF). The 12 remaining specimens (four clp) were celloidin-embedded, microtome-sectioned 75 µm thick transversely (ten) or frontally (two), and stained with haematoxylin-eosin (CE-HE). Computed planimetry gauged artefacts; structure differentiation was compared under light microscopy with video, analog and digital photography. Total artefact was 0.9% (TP-AMF) and 2.1% (CE-HE); TP-AMF showed higher colour contrast, gamut and luminance, and CE-HE more red contrast, saturation and hue (P < 0.4). All (100%) structures of interest were discerned light-microscopically, 83% on video, 76% on analog photography and 98% on digital photography. Computed image analysis assessed the greatest colour contrast, gamut, luminance and saturation on video; the most detailed, colour-balanced and sharpest images were obtained with digital photography (P < 0.02). TP-AMF retained spatial oversight, covered the entire area of interest and should be combined in different specimens with CE-HE, which enables more refined muscle fibre reproduction. Digital photography is preferred for on-screen analysis.
Jaferzadeh, Keyvan; Gholami, Samaneh; Moon, Inkyu
2016-12-20
In this paper, we evaluate lossless and lossy compression techniques for quantitative phase images of red blood cells (RBCs) obtained by off-axis digital holographic microscopy (DHM). The RBC phase images are numerically reconstructed from their digital holograms and are stored in 16-bit unsigned integer format. In the lossless case, predictive coding with JPEG lossless (JPEG-LS), JPEG2000, and JP3D are evaluated, and compression ratio (CR) and complexity (compression time) are compared against each other. It turns out that JP2k outperforms the other methods with the best CR. In the lossy case, JP2k and JP3D with different CRs are examined. Because some data is lost in lossy compression, the degradation level is measured by comparing different morphological and biochemical parameters of the RBCs before and after compression. The morphological parameters are volume, surface area, RBC diameter, and sphericity index, and the biochemical cell parameter is mean corpuscular hemoglobin (MCH). Experimental results show that JP2k outperforms JP3D not only in terms of mean square error (MSE) as CR increases, but also in compression time for lossy compression. In addition, our compression results with both algorithms demonstrate that at high CR values the three-dimensional profile of the RBC can be preserved and the morphological and biochemical parameters still remain within the range of reported values.
Compression After Impact on Honeycomb Core Sandwich Panels With Thin Facesheets. Part 1; Experiments
NASA Technical Reports Server (NTRS)
McQuigg, Thomas D.; Kapania, Rakesh K.; Scotti, Stephen J.; Walker, Sandra P.
2012-01-01
A two part research study has been completed on the topic of compression after impact (CAI) of thin facesheet honeycomb core sandwich panels. The research has focused on both experiments and analysis in an effort to establish and validate a new understanding of the damage tolerance of these materials. Part one, the subject of the current paper, is focused on the experimental testing. Of interest are sandwich panels, with aerospace applications, which consist of very thin, woven S2-fiberglass (with MTM45-1 epoxy) facesheets adhered to a Nomex honeycomb core. Two sets of specimens, which were identical with the exception of the density of the honeycomb core, were tested. Static indentation and low velocity impact using a drop tower are used to study damage formation in these materials. A series of highly instrumented CAI tests was then completed. New techniques used to observe CAI response and failure include high speed video photography, as well as digital image correlation (DIC) for full-field deformation measurement. Two CAI failure modes, indentation propagation, and crack propagation, were observed. From the results, it can be concluded that the CAI failure mode of these panels depends solely on the honeycomb core density.
High-quality lossy compression: current and future trends
NASA Astrophysics Data System (ADS)
McLaughlin, Steven W.
1995-01-01
This paper is concerned with current and future trends in the lossy compression of real sources such as imagery, video, speech and music. We put all lossy compression schemes into a common framework in which each can be characterized in terms of three well-defined advantages: cell-shape, region-shape and memory advantages. We concentrate on image compression and discuss how new entropy-constrained trellis-based compressors achieve cell-shape, region-shape and memory gains, resulting in high fidelity and high compression.
Balloon-borne video cassette recorders for digital data storage
NASA Technical Reports Server (NTRS)
Althouse, W. E.; Cook, W. R.
1985-01-01
A high speed, high capacity digital data storage system was developed for a new balloon-borne gamma-ray telescope. The system incorporates economical consumer products: the portable video cassette recorder (VCR) and a relatively new item, the digital audio processor. The in-flight recording system employs eight VCRs and will provide a continuous data storage rate of 1.4 megabits/sec throughout a 40 hour balloon flight. Data storage capacity is 25 gigabytes and power consumption is only 10 watts.
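A back-of-the-envelope check (assuming decimal gigabytes) confirms that the quoted recording rate and flight duration are consistent with the stated capacity:

```python
# Back-of-the-envelope check of the quoted figures (decimal units assumed).
rate_bits_per_s = 1.4e6          # 1.4 megabits/sec continuous recording rate
flight_seconds = 40 * 3600       # 40-hour balloon flight
total_bytes = rate_bits_per_s * flight_seconds / 8
print(f"{total_bytes / 1e9:.1f} GB")   # ~25.2 GB, matching the stated 25-gigabyte capacity
```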
A robust coding scheme for packet video
NASA Technical Reports Server (NTRS)
Chen, Y. C.; Sayood, Khalid; Nelson, D. J.
1991-01-01
We present a layered packet video coding algorithm based on a progressive transmission scheme. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.
A robust coding scheme for packet video
NASA Technical Reports Server (NTRS)
Chen, Yun-Chung; Sayood, Khalid; Nelson, Don J.
1992-01-01
A layered packet video coding algorithm based on a progressive transmission scheme is presented. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.
JPEG XS-based frame buffer compression inside HEVC for power-aware video compression
NASA Astrophysics Data System (ADS)
Willème, Alexandre; Descampe, Antonin; Rouvroy, Gaël; Pellegrin, Pascal; Macq, Benoit
2017-09-01
With the emergence of Ultra-High Definition video, reference frame buffers (FBs) inside HEVC-like encoders and decoders have to sustain huge bandwidth. The power consumed by these external memory accesses accounts for a significant share of the codec's total consumption. This paper describes a solution that significantly decreases the FB bandwidth, making the HEVC encoder more suitable for power-aware applications. The proposed prototype integrates an embedded lightweight, low-latency, and visually lossless codec at the FB interface inside HEVC in order to store each reference frame as several compressed bitstreams. As opposed to previous works, our solution compresses large picture areas (ranging from a CTU to a frame stripe) independently in order to better exploit the spatial redundancy found in the reference frame. This work investigates two data reuse schemes, namely Level-C and Level-D. Our approach is made possible by simplified motion estimation mechanisms that further reduce the FB bandwidth while inducing very low quality degradation. In this work, we integrated JPEG XS, the upcoming standard for lightweight low-latency video compression, inside HEVC. In practice, the proposed implementation is based on HM 16.8 and on XSM 1.1.2 (the JPEG XS Test Model). This paper describes the architecture of our HEVC with JPEG XS-based frame buffer compression and compares its performance to the HM encoder. Compared to previous works, our prototype provides a significant external memory bandwidth reduction. Depending on the reuse scheme, one can expect bandwidth and FB size reductions ranging from 50% to 83.3% without significant quality degradation.
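As a rough sanity check on the quoted range, the following sketch assumes the frame-buffer bandwidth reduction is approximately 1 - 1/R for an embedded codec with compression ratio R, ignoring any reuse-scheme overhead; the specific ratios are illustrative only.

```python
# Rough relation between the embedded codec's compression ratio R and the
# frame-buffer bandwidth reduction (assumption: reduction ~ 1 - 1/R,
# ignoring reuse-scheme overheads).
for ratio in (2.0, 4.0, 6.0):
    reduction = 1.0 - 1.0 / ratio
    print(f"{ratio:.0f}:1 compression -> {reduction:.1%} bandwidth reduction")
# 2:1 -> 50.0% and 6:1 -> 83.3%, bracketing the reported 50% to 83.3% range
```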
Beskind, Daniel L; Stolz, Uwe; Thiede, Rebecca; Hoyer, Riley; Robertson, Whitney; Brown, Jeffrey; Ludgate, Melissa; Tiutan, Timothy; Shane, Romy; McMorrow, Deven; Pleasants, Michael; Kern, Karl B; Panchal, Ashish R
2017-09-01
CPR training at mass gathering events is an important part of health initiatives to improve cardiac arrest survival. However, it is unclear whether training lay bystanders with an ultra-brief video at a mass gathering event improves CPR quality and responsiveness. The objective was to determine whether showing a chest-compression-only (CCO) ultra-brief video (UBV) at a mass gathering event is effective in teaching lay bystanders CCO-CPR. This was a prospective controlled trial in adults (age >18) who attended either a University of Arizona women's basketball game or a Phoenix Suns men's basketball game. Participants were evaluated using a standardized cardiac arrest scenario with Laerdal Skillreporter™ mannequins. CPR responsiveness (calling 911, time to calling 911, starting compressions within two minutes) and quality (compression rate, depth, hands-off time) were assessed at baseline and post-intervention, with different participants tested before and after exposure to the UBV. Data were analyzed on an intention-to-treat basis using logistic regression for binary outcomes and median regression for continuous outcomes, controlling for clustering by venue. A total of 96 people consented (baseline = 45; post-intervention = 51). Post-intervention, CPR responsiveness improved, with faster time to calling 911 and faster time to starting compressions. Likewise, CPR quality improved, with deeper compressions and improved hands-off time. Showing a UBV at a mass gathering sporting event is associated with improved CPR responsiveness and performance for lay bystanders. These data provide further support for the use of mass media interventions.
Tolbert, Jeremy R; Kabali, Pratik; Brar, Simeranjit; Mukhopadhyay, Saibal
2009-01-01
We present a digital system for adaptive data compression for low-power wireless transmission of electroencephalography (EEG) data. The proposed system acts as a baseband processor between the EEG analog-to-digital front end and the RF transceiver. It performs a real-time accuracy-energy trade-off for multi-channel EEG signal transmission by controlling the volume of transmitted data. We propose a multi-core digital signal processor for on-chip processing of EEG signals that detects the signal information in each channel and performs real-time adaptive compression. Our analysis shows that the proposed approach can provide significant savings in transmitter power with minimal impact on the overall signal accuracy.
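The abstract does not detail the compression algorithm, so the following Python sketch only illustrates the general accuracy-energy trade-off it describes: a per-channel selection step that transmits the most informative channels up to a chosen fraction of the raw data volume. The variance-based ranking and the energy_budget parameter are assumptions for illustration, not the paper's multi-core DSP design.

```python
import numpy as np

def adaptive_channel_compression(eeg: np.ndarray, energy_budget: float) -> dict:
    """Illustrative accuracy/energy trade-off: keep only the most informative
    channels (here ranked by sample variance) until a transmit budget,
    expressed as a fraction of the raw data volume, is reached.
    This is a generic sketch, not the paper's multi-core DSP algorithm."""
    n_channels, _ = eeg.shape
    info = eeg.var(axis=1)                 # crude per-channel "information" proxy
    order = np.argsort(info)[::-1]         # most informative channels first
    keep = max(1, int(round(energy_budget * n_channels)))
    selected = np.sort(order[:keep])
    return {int(ch): eeg[ch] for ch in selected}

# Hypothetical 8-channel EEG block, 256 samples per channel.
rng = np.random.default_rng(1)
block = rng.normal(size=(8, 256))
payload = adaptive_channel_compression(block, energy_budget=0.5)
print("channels transmitted:", sorted(payload))   # half the raw volume
```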
Voss with video camera in Service Module
2001-04-08
ISS002-E-5329 (08 April 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, sets up a video camera on a mounting bracket in the Zvezda / Service Module of the International Space Station (ISS). A 35mm camera and a digital still camera are also visible nearby. This image was recorded with a digital still camera.
Indigenous Digital Storytelling in Video: Witnessing with Alma Desjarlais
ERIC Educational Resources Information Center
Iseke, Judy M.
2011-01-01
Indigenous digital storytelling in video is a way of witnessing the stories of Indigenous communities and Elders, including what has happened and is happening in the lives and work of Indigenous peoples. Witnessing includes acts of remembrance in which we look back to reinterpret and recreate our relationship to the past in order to understand the…
ERIC Educational Resources Information Center
Niess, Margaret L.; Gillow-Wiles, Henry
2014-01-01
This qualitative cross-case study explores the influence of a designed learning trajectory on transforming teachers' technological pedagogical content knowledge (TPACK) for teaching with digital image and video technologies. The TPACK Learning Trajectory embeds tasks with specific instructional strategies within a social metacognitive…
ERIC Educational Resources Information Center
Ivashkevich, Olga; Shoppell, Samantha
2013-01-01
The authors discuss their participant observation study with the 10-year-old boy and 8-year-old girl who collaborated on making digital videos at home. Major themes that emerged from this research include appropriation of popular culture texts, parody, gender play, and managing self-representations. These themes highlight the benefits of video…
ERIC Educational Resources Information Center
de Araujo, Zandra; Otten, Samuel; Birisci, Salih
2017-01-01
The rise of digital resources has had profound effects on mathematics curricula and there has been a concurrent increase in teachers flipping their instruction--that is, assigning instructional videos or multimedia for students to watch as homework and completing problem or exercise sets in class rather than vice versa. These changes have created…
Distance Learning Using Digital Fiber Optics: Applications, Technologies, and Benefits.
ERIC Educational Resources Information Center
Currer, Joanne M.
Distance learning provides special or advanced classes in rural schools where declining population has led to decreased funding and fewer classes. With full-motion video using digital fiber, two or more sites are connected into a two-way, full-motion, video conference. The teacher can see and hear the students, and the students can see and hear…
Readers, Players, and Watchers: EFL Students' Vocabulary Acquisition through Digital Video Games
ERIC Educational Resources Information Center
Ebrahimzadeh, Mohsen
2017-01-01
The present study investigated vocabulary acquisition through a commercial digital video game compared to a traditional pencil-and-paper treatment. Chosen through cluster sampling, 241 male high school students (age 12-18) participated in the study. They were randomly assigned to one of the following groups. The first group, called Readers,…
Digital Video as Research Practice: Methodology for the Millennium
ERIC Educational Resources Information Center
Shrum, Wesley; Duque, Ricardo; Brown, Timothy
2005-01-01
This essay has its origin in a project on the globalization of science that rediscovered the wisdom of past research practices through the technology of the future. The main argument of this essay is that a convergence of digital video technologies with practices of social surveillance portends a methodological shift towards a new variety of…
An Evaluation of the Informedia Digital Video Library System at the Open University.
ERIC Educational Resources Information Center
Kukulska-Hulme, Agnes; Van der Zwan, Robert; DiPaolo, Terry; Evers, Vanessa; Clarke, Sarah
1999-01-01
Reports on an Open University evaluation study of the Informedia Digital Video Library System developed at Carnegie Mellon University (CMU). Findings indicate that there is definite potential for using the system, provided that certain modifications can be made. Results also confirm findings of the Informedia team at CMU that the content of video…
ERIC Educational Resources Information Center
Palmer, Stuart
2007-01-01
A recent television documentary on the Columbia space shuttle disaster was converted to streaming digital video format for educational use by on- and off-campus students in an engineering management study unit examining issues in professional engineering ethics. An evaluation was conducted to assess the effectiveness of this new resource. Use of…
Redefining Book Reviews for the Digital Age
ERIC Educational Resources Information Center
Butler, Deirdre; Leahy, Margaret; McCormack, Ciaran
2010-01-01
This paper describes the results of a pilot study conducted in Ireland to examine the effectiveness of an online book review project. The project focused on the production of book reviews by primary school children in the form of digital video. The videos created were uploaded to a password protected website, which was available to the schools…
VENI, video, VICI: The merging of computer and video technologies
NASA Technical Reports Server (NTRS)
Horowitz, Jay G.
1993-01-01
The topics covered include the following: High Definition Television (HDTV) milestones; visual information bandwidth; television frequency allocation and bandwidth; horizontal scanning; workstation RGB color domain; NTSC color domain; American HDTV time-table; HDTV image size; digital HDTV hierarchy; task force on digital image architecture; open architecture model; future displays; and the ULTIMATE imaging system.
Making Meaning on the Screen: Digital Video Production about the Dominican Republic
ERIC Educational Resources Information Center
Ranker, Jason
2008-01-01
As part of an inquiry and digital documentary video project, two 12-year-old students studied the Dominican Republic. Over the course of their research, the boys (one of whose parents moved from the Dominican Republic) focused their project on two aspects of the culture of the Dominican Republic: contemporary music (bachata and merengue) and…
Getting the Bigger Picture With Digital Surveillance
NASA Technical Reports Server (NTRS)
2002-01-01
Through a Space Act Agreement, Diebold, Inc., acquired the exclusive rights to Glenn Research Center's patented video observation technology, originally designed to accelerate video image analysis for various ongoing and future space applications. Diebold implemented the technology into its AccuTrack digital, color video recorder, a state-of-the-art surveillance product that uses motion detection for around-the-clock monitoring. AccuTrack captures digitally signed images and transaction data in real-time. This process replaces the onerous tasks involved in operating a VCR-based surveillance system, and subsequently eliminates the need for central viewing and tape archiving locations altogether. AccuTrack can monitor an entire bank facility, including four automated teller machines, multiple teller lines, and new account areas, all from one central location.
NASA Astrophysics Data System (ADS)
Zhao, Haiwu; Wang, Guozhong; Hou, Gang
2005-07-01
AVS is a new digital audio-video coding standard established by China. AVS will be used in digital TV broadcasting and the next generation of optical disks. AVS adopts many digital audio-video coding techniques developed by Chinese companies and universities in recent years, has very low complexity compared to H.264, and will charge a very low royalty fee through a one-step license covering all AVS tools. AVS is therefore a strong and competitive candidate for Chinese DTV and next-generation optical disks. In addition, the Chinese government has published a plan for satellite TV signals direct to home (DTH), and a telecommunication satellite named SINO 2 will be launched in 2006. AVS is also one of the most promising candidates for the audio-video coding standard for satellite signal transmission.
Sandford, M.T. II; Handel, T.G.; Bradley, J.N.
1998-07-07
A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.
1998-01-01
A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.
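The following Python sketch illustrates, in simplified form, the central idea of manipulating adjacent-valued integer indices to carry auxiliary data: selected quantized indices are nudged by at most one unit so that their parity encodes a hidden bit. The index values, the parity convention, and the helper names are illustrative assumptions and not the patented embedding procedure itself.

```python
import numpy as np

def embed_bits(indices: np.ndarray, bits: list[int]) -> np.ndarray:
    """Toy illustration of hiding auxiliary bits in quantized integer indices
    by nudging selected indices by at most one unit so that their parity
    matches the bit to embed (a simplification of the patented scheme)."""
    out = indices.copy()
    for i, bit in enumerate(bits):
        if out[i] % 2 != bit:
            out[i] += 1 if out[i] > 0 else -1   # move by one unit toward larger magnitude
    return out

def extract_bits(indices: np.ndarray, n: int) -> list[int]:
    """Recover the embedded bits from the parity of the first n indices."""
    return [int(v % 2) for v in indices[:n]]

quantized = np.array([12, -7, 3, 0, 5, -2, 9, 4])   # hypothetical codec indices
message = [1, 0, 1, 1]
stego = embed_bits(quantized, message)
assert extract_bits(stego, len(message)) == message
print(stego)
```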
Stream On: Video Servers in the Real World.
ERIC Educational Resources Information Center
Tristram, Claire
1995-01-01
Despite plans for corporate training networks, digital ad-insertion systems, hotel video-on-demand, and interactive television, only small-scale video networks presently work. Four case studies examine the design and implementation decisions for different markets: corporate; advertising; hotel; and commercial video via cable, satellite or…
Applications of ENF criterion in forensic audio, video, computer and telecommunication analysis.
Grigoras, Catalin
2007-04-11
This article reports on the electric network frequency (ENF) criterion as a means of assessing the integrity of digital audio/video evidence in forensic IT and telecommunication analysis. A brief description is given of the different ENF types and the phenomena that determine ENF variations. In most situations, visual inspection of spectrograms and comparison with an ENF database are enough to reach a non-authenticity opinion. A more detailed investigation, in the time domain, requires short-time-window measurements and analyses. The stability of the ENF over geographical distances has been established by comparing synchronized recordings made at different locations on the same network. Real cases are presented in which the ENF criterion was used to investigate audio and video files created with covert surveillance systems, a digitized audio/video recording, and a TV broadcast report. By applying the ENF criterion in forensic audio/video analysis, one can determine whether and where a digital recording has been edited, establish whether it was made at the time claimed, and identify the time and date of the recording operation.
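As a minimal sketch of how an ENF trace can be extracted for such comparisons, the following Python example (using NumPy and SciPy, with a synthetic 50 Hz hum as a stand-in for real evidence) tracks the spectral peak near the nominal mains frequency in successive short windows; real forensic workflows use much finer frequency estimation and a reference ENF database.

```python
import numpy as np
from scipy.signal import spectrogram

def enf_track(audio: np.ndarray, fs: float, nominal: float = 50.0, band: float = 1.0):
    """Track the electric-network-frequency component of a recording by locating
    the spectral peak within +-band Hz of the nominal mains frequency in each
    short analysis window (a simplified sketch of ENF extraction)."""
    f, t, Sxx = spectrogram(audio, fs=fs, nperseg=int(fs * 2), noverlap=int(fs))
    mask = (f >= nominal - band) & (f <= nominal + band)
    peaks = f[mask][np.argmax(Sxx[mask, :], axis=0)]
    return t, peaks

# Synthetic example: a 50 Hz hum buried in noise stands in for real evidence.
fs = 1000.0
t = np.arange(0, 60, 1 / fs)
hum = np.sin(2 * np.pi * 50 * t)
rec = hum + 0.5 * np.random.default_rng(2).normal(size=t.size)
times, enf = enf_track(rec, fs)
print(enf[:5])   # estimated mains frequency per window, close to 50 Hz
```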
A novel multiple description scalable coding scheme for mobile wireless video transmission
NASA Astrophysics Data System (ADS)
Zheng, Haifeng; Yu, Lun; Chen, Chang Wen
2005-03-01
We propose in this paper a novel multiple description scalable coding (MDSC) scheme based on the in-band motion-compensated temporal filtering (IBMCTF) technique in order to achieve high video coding performance and robust video transmission. The input video sequence is first split into equal-sized groups of frames (GOFs). Within a GOF, each frame is hierarchically decomposed by the discrete wavelet transform. Since there is a direct relationship between wavelet coefficients and the image content they represent after wavelet decomposition, we are able to reorganize the spatial orientation trees to generate multiple bit-streams, and we employ the SPIHT algorithm to achieve high coding efficiency. We have shown that multiple bit-stream transmission is very effective in combating error propagation in both Internet video streaming and mobile wireless video. Furthermore, we adopt the IBMCTF scheme to remove redundancy between inter-frames along the temporal direction using motion-compensated temporal filtering, so that high coding performance and flexible scalability can be provided. In order to make the compressed video resilient to channel errors and to guarantee robust transmission over mobile wireless channels, we add redundancy to each bit-stream and apply an error concealment strategy for lost motion vectors. Unlike traditional multiple description schemes, the integration of these techniques enables us to generate more than two bit-streams, which may be more appropriate for multiple-antenna transmission of compressed video. Simulation results on standard video sequences show that the proposed scheme provides a flexible trade-off between coding efficiency and error resilience.
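A minimal sketch of the multiple-description idea, assuming PyWavelets for the 2-D transform: one decomposition level of a frame is split into two complementary checkerboard subsets of coefficients, so either description alone yields a coarse reconstruction while both together are lossless. SPIHT coding, the temporal filtering stage, and the error concealment strategy from the paper are omitted.

```python
import numpy as np
import pywt

def two_descriptions(frame: np.ndarray):
    """Split one level of a 2-D wavelet decomposition into two complementary
    descriptions using a checkerboard pattern over coefficient positions.
    A simplification of reorganizing spatial orientation trees; SPIHT coding,
    temporal filtering, and error concealment are omitted."""
    cA, (cH, cV, cD) = pywt.dwt2(frame, "haar")
    descs = []
    for parity in (0, 1):
        d = []
        for band in (cA, cH, cV, cD):
            rows, cols = np.indices(band.shape)
            mask = (rows + cols) % 2 == parity
            d.append(np.where(mask, band, 0.0))   # missing coefficients zeroed
        descs.append(d)
    return descs

frame = np.random.default_rng(3).normal(size=(64, 64))
desc0, desc1 = two_descriptions(frame)
# Either description alone allows a coarse reconstruction; both together are lossless.
coeffs = [desc0[0] + desc1[0], tuple(a + b for a, b in zip(desc0[1:], desc1[1:]))]
print(np.allclose(pywt.idwt2(coeffs, "haar"), frame))
```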
Streaming Video--The Wave of the Video Future!
ERIC Educational Resources Information Center
Brown, Laura
2004-01-01
Videos and DVDs give teachers more flexibility than slide projectors, filmstrips, and 16mm films, but teachers and students are excited about a new technology called streaming. Streaming allows educators to view videos on demand via the Internet, through the transfer of digital media such as video and voice data that is received…
Bringing the Digital Camera to the Physics Lab
ERIC Educational Resources Information Center
Rossi, M.; Gratton, L. M.; Oss, S.
2013-01-01
We discuss how the compressed images created by modern digital cameras can lead to severe problems in the quantitative analysis of experiments based on such images. Difficulties result from the nonlinear treatment of lighting intensity values stored in compressed files. To overcome such troubles, one has to adopt noncompressed, native formats, as…
Syndrome source coding and its universal generalization
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1975-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome-source-coding is formulated which provides robustly effective, distortionless coding of source ensembles.
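A small worked example of the basic mechanism, assuming a sparse binary memoryless source and the (7,4) Hamming code: the 3-bit syndrome of a 7-bit source block serves as the compressed data, and blocks of weight at most one are recovered exactly from their coset leader.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is the binary
# representation of j, so a single-1 source block is recovered exactly.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def compress(block: np.ndarray) -> np.ndarray:
    """Syndrome of a 7-bit source block: 3 compressed digits per 7 source digits."""
    return H @ block % 2

def decompress(syndrome: np.ndarray) -> np.ndarray:
    """Map the syndrome back to its coset leader (the minimum-weight pattern)."""
    pos = int("".join(map(str, syndrome)), 2)   # column index; 0 means the all-zero block
    block = np.zeros(7, dtype=int)
    if pos:
        block[pos - 1] = 1
    return block

# A sparse source block (treated as an "error pattern") with a single 1.
source = np.array([0, 0, 0, 0, 1, 0, 0])
syn = compress(source)
assert np.array_equal(decompress(syn), source)
print(syn)   # 3 bits encode the 7-bit block losslessly when its weight is <= 1
```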
ERIC Educational Resources Information Center
Zahn, Carmen; Schaeffeler, Norbert; Giel, Katrin Elisabeth; Wessel, Daniel; Thiel, Ansgar; Zipfel, Stephan; Hesse, Friedrich W.
2014-01-01
Mobile phones and advanced web-based video tools have pushed forward new paradigms for using video in education: Today, students can readily create and broadcast their own digital videos for others and create entirely new patterns of video-based information structures for modern online-communities and multimedia environments. This paradigm shift…
A new display stream compression standard under development in VESA
NASA Astrophysics Data System (ADS)
Jacobson, Natan; Thirumalai, Vijayaraghavan; Joshi, Rajan; Goel, James
2017-09-01
The Advanced Display Stream Compression (ADSC) codec project is in development in response to a call for technologies from the Video Electronics Standards Association (VESA). This codec targets visually lossless compression of display streams at a high compression rate (typically 6 bits/pixel) for mobile/VR/HDR applications. The functionality of the ADSC codec is described in this paper, and subjective trial results are provided using the ISO 29170-2 testing protocol.