Overview of the H.264/AVC video coding standard
NASA Astrophysics Data System (ADS)
Luthra, Ajay; Topiwala, Pankaj N.
2003-11-01
H.264/MPEG-4 AVC is the latest coding standard jointly developed by the Video Coding Experts Group (VCEG) of ITU-T and the Moving Picture Experts Group (MPEG) of ISO/IEC. It uses state-of-the-art coding tools and provides enhanced coding efficiency for a wide range of applications including video telephony, video conferencing, TV, storage (DVD and/or hard disk based), streaming video, digital video creation, digital cinema and others. In this paper an overview of this standard is provided. Some comparisons with the existing standards, MPEG-2 and MPEG-4 Part 2, are also provided.
NASA Astrophysics Data System (ADS)
Sullivan, Gary J.; Topiwala, Pankaj N.; Luthra, Ajay
2004-11-01
H.264/MPEG-4 AVC is the latest international video coding standard. It was jointly developed by the Video Coding Experts Group (VCEG) of the ITU-T and the Moving Picture Experts Group (MPEG) of ISO/IEC. It uses state-of-the-art coding tools and provides enhanced coding efficiency for a wide range of applications, including video telephony, video conferencing, TV, storage (DVD and/or hard disk based, especially high-definition DVD), streaming video, digital video authoring, digital cinema, and many others. The work on a new set of extensions to this standard has recently been completed. These extensions, known as the Fidelity Range Extensions (FRExt), provide a number of enhanced capabilities relative to the base specification as approved in the Spring of 2003. In this paper, an overview of this standard is provided, including the highlights of the capabilities of the new FRExt features. Some comparisons with the existing MPEG-2 and MPEG-4 Part 2 standards are also provided.
Performance evaluation of MPEG internet video coding
NASA Astrophysics Data System (ADS)
Luo, Jiajia; Wang, Ronggang; Fan, Kui; Wang, Zhenyu; Li, Ge; Wang, Wenmin
2016-09-01
Internet Video Coding (IVC) has been developed in MPEG by combining well-known existing technology elements and new coding tools with royalty-free declarations. In June 2015, the IVC project was approved as ISO/IEC 14496-33 (MPEG-4 Internet Video Coding). It is believed that this standard can be highly beneficial for video services in the Internet domain. This paper evaluates the objective and subjective performance of IVC by comparing it against Web Video Coding (WVC), Video Coding for Browsers (VCB) and AVC High Profile. Experimental results show that IVC's compression performance is approximately equal to that of the AVC High Profile for typical operational settings, both for streaming and low-delay applications, and is better than that of WVC and VCB.
Development of MPEG standards for 3D and free viewpoint video
NASA Astrophysics Data System (ADS)
Smolic, Aljoscha; Kimata, Hideaki; Vetro, Anthony
2005-11-01
An overview of 3D and free viewpoint video is given in this paper with special focus on related standardization activities in MPEG. Free viewpoint video allows the user to freely navigate within real world visual scenes, as known from virtual worlds in computer graphics. Suitable 3D scene representation formats are classified and the processing chain is explained. Examples are shown for image-based and model-based free viewpoint video systems, highlighting standards-conformant realization using MPEG-4. Then the principles of 3D video are introduced, providing the user with a 3D depth impression of the observed scene. Example systems are described, again focusing on their realization based on MPEG-4. Finally, multi-view video coding is described as a key component for 3D and free viewpoint video systems. MPEG is currently working on a new standard for multi-view video coding. The conclusion is that the necessary technology, including standard media formats for 3D and free viewpoint video, is available or will be available in the near future, and that there is a clear demand from industry and users for such applications. 3DTV at home and free viewpoint video on DVD will be available soon, and will create huge new markets.
Compression of stereoscopic video using MPEG-2
NASA Astrophysics Data System (ADS)
Puri, A.; Kollarits, Richard V.; Haskell, Barry G.
1995-10-01
Many current as well as emerging applications in areas of entertainment, remote operations, manufacturing industry and medicine can benefit from the depth perception offered by stereoscopic video systems, which employ two views of a scene imaged under the constraints imposed by the human visual system. Among the many challenges to be overcome for practical realization and widespread use of 3D/stereoscopic systems are good 3D displays and efficient techniques for digital compression of enormous amounts of data while maintaining compatibility with normal video decoding and display systems. After a brief introduction to the basics of 3D/stereo, including issues of depth perception, stereoscopic 3D displays and terminology in stereoscopic imaging and display, we present an overview of tools in the MPEG-2 video standard that are relevant to our discussion on compression of stereoscopic video, which is the main topic of this paper. Next, we outline the various approaches for compression of stereoscopic video and then focus on compatible stereoscopic video coding using MPEG-2 temporal scalability concepts. Compatible coding employing two different types of prediction structures becomes possible: disparity-compensated prediction, and combined disparity- and motion-compensated prediction. To further improve coding performance and display quality, preprocessing for reducing mismatch between the two views forming stereoscopic video is considered. Results of simulations performed on stereoscopic video of normal TV resolution are then reported, comparing the performance of the two prediction structures with the simulcast solution. It is found that combined disparity- and motion-compensated prediction offers the best performance. Results indicate that compression of both views of stereoscopic video of normal TV resolution appears feasible in a total of 6 to 8 Mbit/s. We then discuss multi-viewpoint video, a generalization of stereoscopic video. Finally, we describe ongoing efforts within MPEG-2 to define a profile for stereoscopic video coding, as well as the promise of MPEG-4 in addressing coding of multi-viewpoint video.
NASA Astrophysics Data System (ADS)
Grois, Dan; Marpe, Detlev; Nguyen, Tung; Hadar, Ofer
2014-09-01
The popularity of low-delay video applications has dramatically increased in recent years due to a rising demand for real-time video content (such as video conferencing or video surveillance), and also due to the increasing availability of relatively inexpensive heterogeneous devices (such as smartphones and tablets). To this end, this work presents a comparative assessment of the two latest video coding standards, H.265/MPEG-HEVC (High-Efficiency Video Coding) and H.264/MPEG-AVC (Advanced Video Coding), and of the VP9 proprietary video coding scheme. For evaluating H.264/MPEG-AVC, the open-source x264 encoder was selected, which has a multi-pass encoding mode, similarly to VP9. According to experimental results, which were obtained by using similar low-delay configurations for all three examined representative encoders, H.265/MPEG-HEVC provides significant average bit-rate savings of 32.5% and 40.8% relative to VP9 and x264 for 1-pass encoding, and average bit-rate savings of 32.6% and 42.2% for 2-pass encoding, respectively. On the other hand, compared to the x264 encoder, typical low-delay encoding times of the VP9 encoder are about 2,000 times higher for 1-pass encoding and about 400 times higher for 2-pass encoding.
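The average bit-rate savings quoted above are of the kind usually obtained with a Bjontegaard delta-rate (BD-rate) computation over rate-distortion curves; the paper's exact procedure is not stated in the abstract, so the following sketch only illustrates how such percentages are typically derived (the four-point RD curves are hypothetical).

```python
# Illustrative sketch (not necessarily the paper's exact method): Bjontegaard
# delta-rate (BD-rate), the metric commonly behind "average bit-rate savings".
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Average bit-rate difference (%) of 'test' vs 'anchor' at equal PSNR."""
    lr_a, lr_t = np.log(rate_anchor), np.log(rate_test)
    # Fit cubic polynomials: log-rate as a function of PSNR.
    p_a = np.polyfit(psnr_anchor, lr_a, 3)
    p_t = np.polyfit(psnr_test, lr_t, 3)
    # Integrate over the overlapping PSNR range.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_diff = (int_t - int_a) / (hi - lo)          # mean log-rate difference
    return (np.exp(avg_diff) - 1.0) * 100.0         # percent rate change

# Hypothetical four-point RD curves (kbit/s, dB) for two encoders:
anchor = ([1000, 1800, 3200, 6000], [34.1, 36.0, 38.2, 40.5])
test   = ([ 700, 1300, 2300, 4300], [34.0, 36.1, 38.3, 40.6])
print("BD-rate: %.1f%%" % bd_rate(*anchor, *test))  # negative = bit-rate savings
```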
Compression performance comparison in low delay real-time video for mobile applications
NASA Astrophysics Data System (ADS)
Bivolarski, Lazar
2012-10-01
This article compares the performance of several current video coding standards under low-delay real-time conditions in a resource-constrained environment. The comparison is performed using the same content and a mix of objective and perceptual quality metrics. The metric results for the different coding schemes are analyzed from the point of view of user perception and quality of service. Multiple standards are compared: MPEG-2, MPEG-4, MPEG-4 AVC, as well as H.263. The metrics used in the comparison include SSIM, VQM and DVQ. Subjective evaluation and quality of service are discussed from the point of view of perceptual metrics and their incorporation in the coding scheme development process. The performance and the correlation of results are presented as a predictor of the performance of video compression schemes.
Seeling, Patrick; Reisslein, Martin
2014-01-01
Video encoding for multimedia services over communication networks has significantly advanced in recent years with the development of the highly efficient and flexible H.264/AVC video coding standard and its SVC extension. The emerging H.265/HEVC video coding standard as well as 3D video coding further advance video coding for multimedia communications. This paper first gives an overview of these new video coding standards and then examines their implications for multimedia communications by studying the traffic characteristics of long videos encoded with the new coding standards. We review video coding advances from MPEG-2 and MPEG-4 Part 2 to H.264/AVC and its SVC and MVC extensions as well as H.265/HEVC. For single-layer (nonscalable) video, we compare H.265/HEVC and H.264/AVC in terms of video traffic and statistical multiplexing characteristics. Our study is the first to examine the H.265/HEVC traffic variability for long videos. We also illustrate the video traffic characteristics and statistical multiplexing of scalable video encoded with the SVC extension of H.264/AVC as well as 3D video encoded with the MVC extension of H.264/AVC.
Chroma sampling and modulation techniques in high dynamic range video coding
NASA Astrophysics Data System (ADS)
Dai, Wei; Krishnan, Madhu; Topiwala, Pankaj
2015-09-01
High Dynamic Range and Wide Color Gamut (HDR/WCG) video coding is an area of intense research interest in the engineering community, for potential near-term deployment in the marketplace. HDR greatly enhances the dynamic range of video content (up to 10,000 nits), as well as broadens the chroma representation (BT.2020). The resulting content offers new challenges in its coding and transmission. The Moving Picture Experts Group (MPEG) of the International Organization for Standardization (ISO) is currently exploring coding efficiency and/or functionality enhancements of the recently developed HEVC video standard for HDR and WCG content. FastVDO has developed an advanced approach to coding HDR video, based on splitting the HDR signal into a smoothed luminance (SL) signal and an associated base signal (B). Both signals are then chroma downsampled to YFbFr 4:2:0 signals, using advanced resampling filters, and coded using the Main10 High Efficiency Video Coding (HEVC) standard, which has been developed jointly by ISO/IEC MPEG and ITU-T WP3/16 (VCEG). Our proposal offers both efficient coding and backwards compatibility with the existing HEVC Main10 profile. That is, an existing Main10 decoder can produce a viewable standard dynamic range video, suitable for existing screens. Subjective tests show visible improvement over the anchors. Objective tests show a sizable gain of over 25% in PSNR (RGB domain) on average, for a key set of test clips selected by the ISO/MPEG committee.
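As an illustration of the chroma downsampling step mentioned above (the "advanced resampling filters" themselves are not specified in the abstract), a minimal sketch using a plain 2x2 box average:

```python
# Minimal sketch of 4:4:4 -> 4:2:0 chroma downsampling. The paper uses
# "advanced resampling filters"; a simple 2x2 box average is shown here
# purely for illustration.
import numpy as np

def chroma_420(plane_444):
    """Downsample one chroma plane by 2 horizontally and vertically."""
    h, w = plane_444.shape
    h -= h % 2
    w -= w % 2                                 # crop to even dimensions
    p = plane_444[:h, :w].astype(np.float64)
    return ((p[0::2, 0::2] + p[0::2, 1::2] +
             p[1::2, 0::2] + p[1::2, 1::2]) / 4.0).round().astype(plane_444.dtype)

# Hypothetical 10-bit chroma plane:
cb = np.random.randint(0, 1024, size=(720, 1280), dtype=np.uint16)
print(cb.shape, "->", chroma_420(cb).shape)    # (720, 1280) -> (360, 640)
```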
Scalable Video Transmission Over Multi-Rate Multiple Access Channels
2007-06-01
“Rate-compatible punctured convolutional codes (RCPC codes) and their applications,” IEEE ... source encoded using the MPEG-4 video codec. The source encoded bitstream is then channel encoded with Rate Compatible Punctured Convolutional (RCPC ... Clark, and J. M. Geist, “Punctured convolutional codes of rate (n-1)/n and simplified maximum likelihood decoding,” IEEE Transactions on
ISO-IEC MPEG-2 software video codec
NASA Astrophysics Data System (ADS)
Eckart, Stefan; Fogg, Chad E.
1995-04-01
Part 5 of the International Standard ISO/IEC 13818 `Generic Coding of Moving Pictures and Associated Audio' (MPEG-2) is a Technical Report, a sample software implementation of the procedures in parts 1, 2 and 3 of the standard (systems, video, and audio). This paper focuses on the video software, which gives an example of a fully compliant implementation of the standard and of a good video quality encoder, and serves as a tool for compliance testing. The implementation and some of the development aspects of the codec are described. The encoder is based on Test Model 5 (TM5), one of the best, published, non-proprietary coding models, which was used during MPEG-2 collaborative stage to evaluate proposed algorithms and to verify the syntax. The most important part of the Test Model is controlling the quantization parameter based on the image content and bit rate constraints under both signal-to-noise and psycho-optical aspects. The decoder has been successfully tested for compliance with the MPEG-2 standard, using the ISO/IEC MPEG verification and compliance bitstream test suites as stimuli.
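The quantiser control that the paper singles out as the most important part of Test Model 5 can be sketched roughly as follows; the constants and the activity measure are simplified relative to the actual TM5 document, so treat this only as an outline of the idea.

```python
# Hedged sketch of TM5-style quantiser control: a virtual buffer sets a base
# quantiser, which is then modulated by local spatial activity (constants
# simplified from the actual Test Model 5 description).
def tm5_mquant(buffer_fullness, bit_rate, picture_rate, act, avg_act):
    r = 2.0 * bit_rate / picture_rate          # reaction parameter
    q = buffer_fullness * 31.0 / r             # base quantiser from buffer state
    n_act = (2.0 * act + avg_act) / (act + 2.0 * avg_act)  # activity masking
    return max(1, min(31, round(q * n_act)))   # clip to MPEG-2 quantiser range

# Hypothetical numbers: 4 Mbit/s, 25 fps, a fairly full virtual buffer,
# and a macroblock twice as "active" as the picture average.
print(tm5_mquant(buffer_fullness=250_000, bit_rate=4_000_000,
                 picture_rate=25, act=800, avg_act=400))   # -> 30
```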
A Secure and Robust Object-Based Video Authentication System
NASA Astrophysics Data System (ADS)
He, Dajun; Sun, Qibin; Tian, Qi
2004-12-01
An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to those selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG4 compression, object segmentation errors, and some common object-based video processing such as object translation, rotation, and scaling while securely preventing malicious object modifications. The proposed solution can be further incorporated into public key infrastructure (PKI).
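The abstract states that energy relationships between randomly selected groups of DFT coefficients are used to hide and extract the watermark, but does not give the embedding rule. A hedged sketch of one plausible rule (one bit per block, key-driven coefficient selection) is shown below; the group sizes and strength factor are assumptions.

```python
# Illustrative sketch only (the paper's exact embedding rule is not given in
# the abstract): hide one bit per block by enforcing an energy relationship
# between two randomly selected groups of DFT coefficients, chosen by a key.
import numpy as np

def _groups(shape, rng, n=4):
    idx = rng.permutation(shape[0] * shape[1])[:2 * n]
    return np.unravel_index(idx[:n], shape), np.unravel_index(idx[n:], shape)

def _scale(f, g, s):
    # Scale the selected coefficients and their mirrored counterparts so the
    # spectrum stays conjugate-symmetric (the inverse FFT remains real).
    rows, cols = g
    f[rows, cols] *= s
    f[(-rows) % f.shape[0], (-cols) % f.shape[1]] *= s

def embed_bit(block, bit, key, strength=1.5):
    f = np.fft.fft2(block.astype(np.float64))
    g1, g2 = _groups(f.shape, np.random.default_rng(key))
    e1, e2 = np.abs(f[g1]).sum(), np.abs(f[g2]).sum()
    if bit == 1 and e1 < strength * e2:
        _scale(f, g1, strength * e2 / (e1 + 1e-9))   # make group 1 dominate
    elif bit == 0 and e2 < strength * e1:
        _scale(f, g2, strength * e1 / (e2 + 1e-9))   # make group 2 dominate
    return np.real(np.fft.ifft2(f))

def extract_bit(block, key):
    f = np.fft.fft2(block.astype(np.float64))
    g1, g2 = _groups(f.shape, np.random.default_rng(key))
    return 1 if np.abs(f[g1]).sum() > np.abs(f[g2]).sum() else 0

blk = np.random.randint(0, 256, (16, 16))
print(extract_bit(embed_bit(blk, 1, key=1234), key=1234))   # recovers 1
```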
Fast image interpolation for motion estimation using graphics hardware
NASA Astrophysics Data System (ADS)
Kelly, Francis; Kokaram, Anil
2004-05-01
Motion estimation and compensation is the key to high quality video coding. Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation, due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
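A plain CPU reference of the operations the paper offloads to graphics hardware -- full-search block matching plus bilinear half-pel interpolation -- may help fix ideas; this is not the authors' GPU implementation.

```python
# CPU reference sketch: exhaustive (full-search) block matching and a
# bilinearly interpolated half-pel reference plane (the interpolation step
# is the bottleneck the paper moves to graphics hardware).
import numpy as np

def sad(a, b):
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def full_search(cur, ref, bx, by, bs=16, rng=8):
    """Best integer motion vector (dx, dy, cost) for the block at (bx, by)."""
    blk = cur[by:by+bs, bx:bx+bs]
    best = (0, 0, sad(blk, ref[by:by+bs, bx:bx+bs]))
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            x, y = bx + dx, by + dy
            if 0 <= x <= ref.shape[1]-bs and 0 <= y <= ref.shape[0]-bs:
                c = sad(blk, ref[y:y+bs, x:x+bs])
                if c < best[2]:
                    best = (dx, dy, c)
    return best

def half_pel_plane(ref):
    """Bilinear 2x upsampling of the reference frame."""
    r = ref.astype(np.float32)
    up = np.zeros((2*r.shape[0]-1, 2*r.shape[1]-1), np.float32)
    up[0::2, 0::2] = r
    up[0::2, 1::2] = (r[:, :-1] + r[:, 1:]) / 2
    up[1::2, 0::2] = (r[:-1, :] + r[1:, :]) / 2
    up[1::2, 1::2] = (r[:-1, :-1] + r[:-1, 1:] + r[1:, :-1] + r[1:, 1:]) / 4
    return up

ref = np.random.randint(0, 256, (64, 64), np.uint8)
cur = np.roll(ref, (2, -3), axis=(0, 1))           # synthetic global motion
print(full_search(cur, ref, 16, 16))               # exact match at (3, -2, 0)
print(half_pel_plane(ref).shape)                   # (127, 127)
```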
Multiformat decoder for a DSP-based IP set-top box
NASA Astrophysics Data System (ADS)
Pescador, F.; Garrido, M. J.; Sanz, C.; Juárez, E.; Samper, D.; Antoniello, R.
2007-05-01
Internet Protocol Set-Top Boxes (IP STBs) based on single-processor architectures have been recently introduced in the market. In this paper, the implementation of an MPEG-4 SP/ASP video decoder for a multi-format IP STB based on a TMS320DM641 DSP is presented. An initial decoder for PC platform was fully tested and ported to the DSP. Using this code an optimization process was started achieving a 90% speedup. This process allows real-time MPEG-4 SP/ASP decoding. The MPEG-4 decoder has been integrated in an IP STB and tested in a real environment using DVD movies and TV channels with excellent results.
NASA Astrophysics Data System (ADS)
Kim, Seong-Whan; Suthaharan, Shan; Lee, Heung-Kyu; Rao, K. R.
2001-01-01
Quality of Service (QoS) guarantees in real-time communication for multimedia applications are significantly important. An architectural framework for multimedia networks based on substreams or flows is effectively exploited for combining source and channel coding for multimedia data. But the existing frame-by-frame approach, which includes the Moving Picture Experts Group (MPEG) standards, cannot be neglected because it is a standard. In this paper, first, we designed an MPEG transcoder which converts an MPEG coded stream into variable rate packet sequences to be used for our joint source/channel coding (JSCC) scheme. Second, we designed a classification scheme to partition the packet stream into multiple substreams which have their own QoS requirements. Finally, we designed a management (reservation and scheduling) scheme for substreams to support better perceptual video quality, such as a bound on end-to-end jitter. We have shown that our JSCC scheme is better than two other popular techniques by simulation and real video experiments in a TCP/IP environment.
Pattern-based integer sample motion search strategies in the context of HEVC
NASA Astrophysics Data System (ADS)
Maier, Georg; Bross, Benjamin; Grois, Dan; Marpe, Detlev; Schwarz, Heiko; Veltkamp, Remco C.; Wiegand, Thomas
2015-09-01
The H.265/MPEG-H High Efficiency Video Coding (HEVC) standard provides a significant increase in coding efficiency compared to its predecessor, the H.264/MPEG-4 Advanced Video Coding (AVC) standard, which however comes at the cost of a high computational burden for a compliant encoder. Motion estimation (ME), which is a part of the inter-picture prediction process, typically consumes a large amount of computational resources while significantly increasing the coding efficiency. Although both the H.265/MPEG-H HEVC and H.264/MPEG-4 AVC standards allow processing motion information on a fractional sample level, motion search algorithms based on the integer sample level remain an integral part of ME. In this paper, a flexible integer sample ME framework is proposed, allowing a significant reduction of ME computation time to be traded off against a coding efficiency penalty in terms of bit rate overhead. As a result, through extensive experimentation, an integer sample ME algorithm that provides a good trade-off is derived, incorporating a combination and optimization of known predictive, pattern-based and early termination techniques. The proposed ME framework is implemented on the basis of the HEVC Test Model (HM) reference software and compared to the state-of-the-art fast search algorithm, which is a native part of HM. It is observed that for high resolution sequences, the integer sample ME process can be sped up by factors varying from 3.2 to 7.6, resulting in a bit-rate overhead of 1.5% and 0.6% for the Random Access (RA) and Low Delay P (LDP) configurations, respectively. In addition, a similar speed-up is observed for sequences with mainly Computer-Generated Imagery (CGI) content while trading off a bit rate overhead of up to 5.2%.
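The derived algorithm itself is not reproduced in the abstract; the sketch below only illustrates the class of techniques it combines (a predicted start vector, a small search pattern, and an early-termination threshold), with hypothetical parameter values.

```python
# Generic sketch of predictive, pattern-based integer motion search with
# early termination -- not the paper's derived algorithm.
import numpy as np

DIAMOND = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]

def sad(a, b):
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def pattern_search(cur, ref, bx, by, pred_mv=(0, 0), bs=16, early_exit=256):
    blk = cur[by:by+bs, bx:bx+bs]

    def cost(mv):
        x, y = bx + mv[0], by + mv[1]
        if 0 <= x <= ref.shape[1]-bs and 0 <= y <= ref.shape[0]-bs:
            return sad(blk, ref[y:y+bs, x:x+bs])
        return 1 << 30                              # outside the picture

    best_mv, best_cost = pred_mv, cost(pred_mv)     # predictive start point
    while best_cost > early_exit:                   # early termination test
        centre = best_mv
        for dx, dy in DIAMOND:                      # small diamond pattern
            mv = (centre[0] + dx, centre[1] + dy)
            c = cost(mv)
            if c < best_cost:
                best_mv, best_cost = mv, c
        if best_mv == centre:                       # local minimum reached
            break
    return best_mv, best_cost

yy, xx = np.mgrid[0:64, 0:64]
ref = (128 + 60*np.sin(xx/5.0) + 60*np.cos(yy/7.0)).astype(np.uint8)
cur = np.roll(ref, (0, -4), axis=(0, 1))            # 4-pixel horizontal motion
print(pattern_search(cur, ref, 16, 16, pred_mv=(2, 0)))   # -> ((4, 0), 0)
```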
Robust video transmission with distributed source coded auxiliary channel.
Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan
2009-12-01
We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints.
Implementation of MPEG-2 encoder to multiprocessor system using multiple MVPs (TMS320C80)
NASA Astrophysics Data System (ADS)
Kim, HyungSun; Boo, Kenny; Chung, SeokWoo; Choi, Geon Y.; Lee, YongJin; Jeon, JaeHo; Park, Hyun Wook
1997-05-01
This paper presents the efficient algorithm mapping for the real-time MPEG-2 encoding on the KAIST image computing system (KICS), which has a parallel architecture using five multimedia video processors (MVPs). The MVP is a general purpose digital signal processor (DSP) of Texas Instrument. It combines one floating-point processor and four fixed- point DSPs on a single chip. The KICS uses the MVP as a primary processing element (PE). Two PEs form a cluster, and there are two processing clusters in the KICS. Real-time MPEG-2 encoder is implemented through the spatial and the functional partitioning strategies. Encoding process of spatially partitioned half of the video input frame is assigned to ne processing cluster. Two PEs perform the functionally partitioned MPEG-2 encoding tasks in the pipelined operation mode. One PE of a cluster carries out the transform coding part and the other performs the predictive coding part of the MPEG-2 encoding algorithm. One MVP among five MVPs is used for system control and interface with host computer. This paper introduces an implementation of the MPEG-2 algorithm with a parallel processing architecture.
A real-time inverse quantised transform for multi-standard with dynamic resolution support
NASA Astrophysics Data System (ADS)
Sun, Chi-Chia; Lin, Chun-Ying; Zhang, Ce
2016-06-01
In this paper, a real-time configurable intellectual property (IP) core is presented for the image/video decoding process, compatible with both the MPEG-4 Visual standard and the H.264/AVC standard. The unified inverse quantised transform can perform the inverse quantised discrete cosine transform and the inverse quantised integer transform using only shift and add operations. Meanwhile, the number of COordinate Rotation DIgital Computer (CORDIC) iterations and compensation steps is adjustable in order to trade video compression quality against data throughput. The implementations are embedded in the publicly available XviD codec 1.2.2 for the MPEG-4 Visual standard and in the H.264/AVC reference software JM 16.1, where the experimental results show that the balance between computational complexity and video compression quality is retained. Finally, FPGA synthesis results show that the proposed IP core offers low hardware cost and provides real-time performance for Full HD and 4K-2K video decoding.
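The paper's CORDIC-based formulation is not given in the abstract. For reference, the standard H.264/AVC 4x4 inverse integer transform shown below uses only shifts and additions, the kind of arithmetic such an IP core reduces to.

```python
# Reference sketch of the H.264/AVC 4x4 inverse integer transform, which uses
# only shifts and additions (the paper's CORDIC formulation is not shown here).
def inv_transform_1d(w):
    e0 = w[0] + w[2]
    e1 = w[0] - w[2]
    e2 = (w[1] >> 1) - w[3]
    e3 = w[1] + (w[3] >> 1)
    return [e0 + e3, e1 + e2, e1 - e2, e0 - e3]

def inv_transform_4x4(coeff):
    """coeff: 4x4 list of (already inverse-quantised) transform levels."""
    rows = [inv_transform_1d(r) for r in coeff]                 # horizontal pass
    cols = [inv_transform_1d([rows[i][j] for i in range(4)])    # vertical pass
            for j in range(4)]
    # Transpose back and apply the final rounding shift (+32, >>6).
    return [[(cols[j][i] + 32) >> 6 for j in range(4)] for i in range(4)]

print(inv_transform_4x4([[64, 0, 0, 0],
                         [0,  0, 0, 0],
                         [0,  0, 0, 0],
                         [0,  0, 0, 0]]))   # DC-only block -> all samples 1
```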
A new DWT/MC/DPCM video compression framework based on EBCOT
NASA Astrophysics Data System (ADS)
Mei, L. M.; Wu, H. R.; Tan, D. M.
2005-07-01
A novel Discrete Wavelet Transform (DWT)/Motion Compensation (MC)/Differential Pulse Code Modulation (DPCM) video compression framework is proposed in this paper. Although the Discrete Cosine Transform (DCT)/MC/DPCM is the mainstream framework for video coders in industry and international standards, the idea of DWT/MC/DPCM has existed in the literature for more than a decade and is still under investigation. The contribution of this work is twofold. Firstly, Embedded Block Coding with Optimal Truncation (EBCOT) is used here as the compression engine for both intra- and inter-frame coding, which provides a good compression ratio and an embedded rate-distortion (R-D) optimization mechanism. This is an extension of the EBCOT application from still images to videos. Secondly, this framework offers a good interface for the Perceptual Distortion Measure (PDM) based on the Human Visual System (HVS), where the Mean Squared Error (MSE) can be easily replaced with the PDM in the R-D optimization. Some preliminary results are reported here. They are also compared with benchmarks such as MPEG-2 and MPEG-4 version 2. The results demonstrate that under the specified conditions the proposed coder outperforms the benchmarks in terms of rate vs. distortion.
Fast bi-directional prediction selection in H.264/MPEG-4 AVC temporal scalable video coding.
Lin, Hung-Chih; Hang, Hsueh-Ming; Peng, Wen-Hsiao
2011-12-01
In this paper, we propose a fast algorithm that efficiently selects the temporal prediction type for the dyadic hierarchical-B prediction structure in H.264/MPEG-4 temporal scalable video coding (SVC). We make use of the strong correlations in prediction type inheritance to eliminate the superfluous computations for the bi-directional (BI) prediction in the finer partitions, 16×8/8×16/8×8, by referring to the best temporal prediction type of the 16×16 partition. In addition, we carefully examine the relationship in motion bit-rate costs and distortions between the BI and the uni-directional temporal prediction types. As a result, we construct a set of adaptive thresholds to remove the unnecessary BI calculations. Moreover, for block partitions smaller than 8×8, either the forward prediction (FW) or the backward prediction (BW) is skipped based upon the information of their 8×8 partitions. Hence, the proposed schemes can efficiently reduce the extensive computational burden of calculating the BI prediction. As compared to the JSVM 9.11 software, our method reduces the encoding time by 48% to 67% for a large variety of test videos over a wide range of coding bit-rates, with only a minor coding performance loss.
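A hedged sketch of the decision structure described above (prediction-type inheritance from the 16x16 partition plus an adaptive threshold that skips the BI test); the threshold value and cost model are hypothetical, not the paper's.

```python
# Hedged sketch: finer partitions inherit the 16x16 best prediction direction,
# and a (hypothetical) threshold decides when the bi-directional test is skipped.
def choose_prediction(rd_cost, best_16x16, partition, threshold=1.5):
    """rd_cost: dict mapping 'FW'/'BW'/'BI' to a callable returning a cost."""
    c_fw, c_bw = rd_cost["FW"](), rd_cost["BW"]()
    best_uni = "FW" if c_fw <= c_bw else "BW"
    uni_cost = min(c_fw, c_bw)

    # Inheritance: if the 16x16 decision was uni-directional, finer partitions
    # do not test BI at all.
    if partition != "16x16" and best_16x16 != "BI":
        return best_uni, uni_cost
    # Heuristic threshold (hypothetical): when one direction is clearly
    # dominant, the averaging BI predictor is unlikely to win, so skip it.
    if max(c_fw, c_bw) > threshold * uni_cost:
        return best_uni, uni_cost

    c_bi = rd_cost["BI"]()                       # only computed when needed
    return ("BI", c_bi) if c_bi < uni_cost else (best_uni, uni_cost)

# Hypothetical costs for a 16x8 partition whose 16x16 parent chose FW:
costs = {"FW": lambda: 1200, "BW": lambda: 1900, "BI": lambda: 1100}
print(choose_prediction(costs, best_16x16="FW", partition="16x8"))  # ('FW', 1200)
```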
Online Studies on Variation in Orthopedic Surgery: Computed Tomography in MPEG4 Versus DICOM Format.
Mellema, Jos J; Mallee, Wouter H; Guitton, Thierry G; van Dijk, C Niek; Ring, David; Doornberg, Job N
2017-10-01
The purpose of this study was to compare the observer participation and satisfaction as well as interobserver reliability between two online platforms, Science of Variation Group (SOVG) and Traumaplatform Study Collaborative, for the evaluation of complex tibial plateau fractures using computed tomography in MPEG4 and DICOM format. A total of 143 observers started with the online evaluation of 15 complex tibial plateau fractures via either the SOVG or Traumaplatform Study Collaborative websites using MPEG4 videos or a DICOM viewer, respectively. Observers were asked to indicate the absence or presence of four tibial plateau fracture characteristics and to rate their satisfaction with the evaluation as provided by the respective online platforms. The observer participation rate was significantly higher in the SOVG (MPEG4 video) group compared to that in the Traumaplatform Study Collaborative (DICOM viewer) group (75 and 43%, respectively; P < 0.001). The median observer satisfaction with the online evaluation was seven (range, 0-10) using MPEG4 video compared to six (range, 1-9) using DICOM viewer (P = 0.11). The interobserver reliability for recognition of fracture characteristics in complex tibial plateau fractures was higher for the evaluation using MPEG4 video. In conclusion, observer participation and interobserver reliability for the characterization of tibial plateau fractures was greater with MPEG4 videos than with a standard DICOM viewer, while there was no difference in observer satisfaction. Future reliability studies should account for the method of delivering images.
MPEG4: coding for content, interactivity, and universal accessibility
NASA Astrophysics Data System (ADS)
Reader, Cliff
1996-01-01
MPEG4 is a natural extension of audiovisual coding, and yet from many perspectives breaks new ground as a standard. New coding techniques are being introduced, of course, but they will work on new data structures. The standard itself has a new architecture, and will use a new operational model when implemented on equipment that is likely to have innovative system architecture. The author introduces the background developments in technology and applications that are driving or enabling the standard, introduces the focus of MPEG4, and enumerates the new functionalities to be supported. Key applications in interactive TV and heterogeneous environments are discussed. The architecture of MPEG4 is described, followed by a discussion of the multiphase MPEG4 communication scenarios, and issues of practical implementation of MPEG4 terminals. The paper concludes with a description of the MPEG4 workplan. In summary, MPEG4 has two fundamental attributes. First, it is the coding of audiovisual objects, which may be natural or synthetic data in two or three dimensions. Second, the heart of MPEG4 is its syntax: the MPEG4 Syntactic Descriptive Language -- MSDL.
Subjective evaluation of next-generation video compression algorithms: a case study
NASA Astrophysics Data System (ADS)
De Simone, Francesca; Goldmann, Lutz; Lee, Jong-Seok; Ebrahimi, Touradj; Baroncini, Vittorio
2010-08-01
This paper describes the details and the results of the subjective quality evaluation performed at EPFL, as a contribution to the effort of the Joint Collaborative Team on Video Coding (JCT-VC) for the definition of the next-generation video coding standard. The performance of 27 coding technologies has been evaluated with respect to two H.264/MPEG-4 AVC anchors, considering high definition (HD) test material. The test campaign involved a total of 494 naive observers and took place over a period of four weeks. While similar tests have been conducted as part of the standardization process of previous video coding technologies, the test campaign described in this paper is by far the most extensive in the history of video coding standardization. The obtained subjective quality scores show high consistency and support an accurate comparison of the performance of the different coding solutions.
MPEG-4 ASP SoC receiver with novel image enhancement techniques for DAB networks
NASA Astrophysics Data System (ADS)
Barreto, D.; Quintana, A.; García, L.; Callicó, G. M.; Núñez, A.
2007-05-01
This paper presents a system for real-time video reception in low-power mobile devices using Digital Audio Broadcast (DAB) technology for transmission. A demo receiver terminal is designed on an FPGA platform using the Advanced Simple Profile (ASP) of the MPEG-4 standard for video decoding. In order to meet the demanding DAB requirements, the bandwidth of the encoded sequence must be drastically reduced. To this end, prior to the MPEG-4 coding stage, a pre-processing stage is performed. It consists, first, of a segmentation phase according to motion and texture based on Principal Component Analysis (PCA) of the input video sequence and, second, of a down-sampling phase, which depends on the segmentation results. As a result of the segmentation task, a set of texture and motion maps is obtained. These motion and texture maps are also included in the bit-stream as user-data side information and are therefore known to the receiver. For all bit-rates, the whole encoder/decoder system proposed in this paper exhibits higher image visual quality than the alternative encoding/decoding method, assuming equal image sizes. A complete analysis of both techniques has also been performed to provide the optimum motion and texture maps for the global system, which has been finally validated for a variety of video sequences. Additionally, an optimal HW/SW partition for the MPEG-4 decoder has been studied and implemented on a programmable logic device with an embedded ARM9 processor. Simulation results show that a throughput of 15 QCIF frames per second can be achieved with a low-area and low-power implementation.
Audiovisual quality evaluation of low-bitrate video
NASA Astrophysics Data System (ADS)
Winkler, Stefan; Faller, Christof
2005-03-01
Audiovisual quality assessment is a relatively unexplored topic. We designed subjective experiments for audio, video, and audiovisual quality using content and encoding parameters representative of video for mobile applications. Our focus was on the MPEG-4 AVC (a.k.a. H.264) and AAC coding standards. Our goals in this study are two-fold: we want to understand the interactions between audio and video in terms of perceived audiovisual quality, and we use the subjective data to evaluate the prediction performance of our no-reference video and audio quality metrics.
Overview of MPEG internet video coding
NASA Astrophysics Data System (ADS)
Wang, R. G.; Li, G.; Park, S.; Kim, J.; Huang, T.; Jang, E. S.; Gao, W.
2015-09-01
MPEG has produced standards that have provided the industry with the best video compression technologies. In order to address the diversified needs of the Internet, MPEG issued the Call for Proposals (CfP) for internet video coding in July 2011. It is anticipated that any patent declaration associated with the Baseline Profile of this standard will indicate that the patent owner is prepared to grant a free-of-charge license to an unrestricted number of applicants on a worldwide, non-discriminatory basis and under other reasonable terms and conditions to make, use, and sell implementations of the Baseline Profile of this standard in accordance with the ITU-T/ITU-R/ISO/IEC Common Patent Policy. Three different codecs responded to the CfP: WVC, VCB and IVC. WVC was proposed jointly by Apple, Cisco, Fraunhofer HHI, Magnum Semiconductor, Polycom and RIM, among others, and is in fact AVC Baseline. VCB was proposed by Google and is in fact VP8. IVC was proposed by several universities (Peking University, Tsinghua University, Zhejiang University, Hanyang University and Korea Aerospace University, among others) and its coding tools were developed from scratch. In this paper, we give an overview of the coding tools in IVC and evaluate its performance by comparing it with WVC, VCB and AVC High Profile.
Adaptation of facial synthesis to parameter analysis in MPEG-4 visual communication
NASA Astrophysics Data System (ADS)
Yu, Lu; Zhang, Jingyu; Liu, Yunhai
2000-12-01
In MPEG-4, Facial Definition Parameters (FDPs) and Facial Animation Parameters (FAPs) are defined to animate a facial object. Most previous facial animation reconstruction systems focused on synthesizing animation from manually or automatically generated FAPs, but not from FAPs extracted from natural video scenes. In this paper, an analysis-synthesis MPEG-4 visual communication system is established, in which facial animation is reconstructed from FAPs extracted from natural video scenes.
NASA Astrophysics Data System (ADS)
Lee, Feifei; Kotani, Koji; Chen, Qiu; Ohmi, Tadahiro
2010-02-01
In this paper, a fast search algorithm for MPEG-4 video clips from a video database is proposed. An adjacent pixel intensity difference quantization (APIDQ) histogram is utilized as the feature vector of the VOP (video object plane), which had previously been applied reliably to human face recognition. Instead of the fully decompressed video sequence, partially decoded data, namely the DC sequence of the video object, are extracted from the video sequence. Combined with active search, a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on a total of 15 hours of video containing TV programs such as drama, talk shows, news, etc., searching for 200 given MPEG-4 video clips, each 15 seconds long. Experimental results show that the proposed algorithm can detect a similar video clip in merely 80 ms, and an Equal Error Rate (EER) of 2% is achieved in the drama and news categories, which is more accurate and robust than the conventional fast video search algorithm.
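The exact APIDQ quantization table is not given in the abstract; the sketch below assumes uniform bins and horizontal neighbours only, purely to illustrate the kind of histogram feature and matching distance involved.

```python
# Hedged sketch of an adjacent pixel intensity difference quantization
# (APIDQ) style feature: uniform bins and horizontal neighbours are assumed.
import numpy as np

def apidq_histogram(frame, n_bins=32):
    """Histogram of quantized differences between horizontally adjacent pixels."""
    d = frame[:, 1:].astype(np.int16) - frame[:, :-1].astype(np.int16)  # -255..255
    q = np.clip((d + 256) * n_bins // 512, 0, n_bins - 1)               # uniform bins
    h = np.bincount(q.ravel(), minlength=n_bins).astype(np.float64)
    return h / h.sum()                                                   # normalise

def distance(h1, h2):
    return np.abs(h1 - h2).sum()                     # L1 distance between features

a = np.random.randint(0, 256, (72, 88), np.uint8)            # e.g. a decoded DC image
b = (a.astype(np.int16) + 5).clip(0, 255).astype(np.uint8)   # same content, brighter
c = np.tile(np.arange(88), (72, 1)).astype(np.uint8)         # different content
print(distance(apidq_histogram(a), apidq_histogram(b)))      # small
print(distance(apidq_histogram(a), apidq_histogram(c)))      # much larger
```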
Visual saliency in MPEG-4 AVC video stream
NASA Astrophysics Data System (ADS)
Ammar, M.; Mitrea, M.; Hasnaoui, M.; Le Callet, P.
2015-03-01
Visual saliency maps have already proved their efficiency in a large variety of image/video communication application fields, ranging from selective compression and channel coding to watermarking. Such saliency maps are generally based on different visual characteristics (like color, intensity, orientation, motion, ...) computed from the pixel representation of the visual content. This paper summarizes and extends our previous work devoted to the definition of a saliency map solely extracted from the MPEG-4 AVC stream syntax elements. The MPEG-4 AVC saliency map thus defined is a fusion of static and dynamic maps. The static saliency map is in its turn a combination of intensity, color and orientation feature maps. Despite the particular way in which all these elementary maps are computed, the fusion technique allowing their combination plays a critical role in the final result and is the object of the proposed study. A total of 48 fusion formulas (6 for combining static features and, for each of them, 8 for combining static with dynamic features) are investigated. The performances of the obtained maps are evaluated on a public database organized at IRCCyN, by computing two objective metrics: the Kullback-Leibler divergence and the area under the curve.
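The 48 fusion formulas are not listed in the abstract; the following sketch only illustrates the two-stage fusion scheme (static feature maps first, then static with dynamic) using a few common operators as stand-ins.

```python
# Minimal sketch of two-stage saliency fusion: static feature maps are
# combined first, then the static result is fused with the dynamic map.
# The operators below are common stand-ins, not the paper's 48 formulas.
import numpy as np

def normalise(m):
    m = m.astype(np.float64)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

STATIC_FUSION = {
    "mean": lambda maps: np.mean(maps, axis=0),
    "max":  lambda maps: np.max(maps, axis=0),
    "prod": lambda maps: np.prod(maps, axis=0),
}
STATIC_DYNAMIC_FUSION = {
    "weighted": lambda s, d, a=0.6: a * s + (1 - a) * d,
    "max":      lambda s, d: np.maximum(s, d),
    "modulate": lambda s, d: s * (1 + d),
}

def saliency(intensity, colour, orientation, motion,
             static_rule="mean", dyn_rule="weighted"):
    static_maps = [normalise(m) for m in (intensity, colour, orientation)]
    s = STATIC_FUSION[static_rule](static_maps)
    return normalise(STATIC_DYNAMIC_FUSION[dyn_rule](normalise(s), normalise(motion)))

maps = [np.random.rand(36, 64) for _ in range(4)]    # hypothetical feature maps
print(saliency(*maps, static_rule="max", dyn_rule="modulate").shape)  # (36, 64)
```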
AVC/H.264 patent portfolio license
NASA Astrophysics Data System (ADS)
Horn, Lawrence A.
2004-11-01
MPEG LA, LLC recently announced terms of a joint patent license for the AVC (a/k/a H.264) Standard (ISO/IEC IS 14496-10: Information technology -- Coding of audio-visual objects -- Part 10: Advanced Video Coding | ITU-T Rec. H.264: Series H: Audiovisual and Multimedia Systems: Infrastructure of audiovisual services -- Coding of moving video: Advanced video coding for generic audiovisual services). Like MPEG LA's other licenses, the AVC Patent Portfolio License is offered for the convenience of the marketplace as an alternative enabling users to access essential intellectual property owned by many patent holders under a single license rather than negotiating licenses with each of them individually. The AVC Patent Portfolio License includes essential patents owned by Columbia Innovation Enterprises; Electronics and Telecommunications Research Institute (ETRI); France Télécom, société anonyme; Fujitsu Limited; Koninklijke Philips Electronics N.V.; Matsushita Electric Industrial Co., Ltd.; Microsoft Corporation; Mitsubishi Electric Corporation; Robert Bosch GmbH; Samsung Electronics Co., Ltd.; Sharp Kabushiki Kaisha; Sony Corporation; Toshiba Corporation; and Victor Company of Japan, Limited. MPEG LA's objective is to provide worldwide access to as much AVC essential intellectual property as possible for the benefit of AVC users. Therefore, any party that believes it has essential patents is welcome to submit them for evaluation of their essentiality and inclusion in the License if found essential.
Layered Wyner-Ziv video coding.
Xu, Qian; Xiong, Zixiang
2006-12-01
Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks.
NASA Astrophysics Data System (ADS)
Liu, Leibo; Chen, Yingjie; Yin, Shouyi; Lei, Hao; He, Guanghui; Wei, Shaojun
2014-07-01
A VLSI architecture for entropy decoder, inverse quantiser and predictor is proposed in this article. This architecture is used for decoding video streams of three standards on a single chip, i.e. H.264/AVC, AVS (China National Audio Video coding Standard) and MPEG2. The proposed scheme is called MPMP (Macro-block-Parallel based Multilevel Pipeline), which is intended to improve the decoding performance to satisfy the real-time requirements while maintaining a reasonable area and power consumption. Several techniques, such as slice level pipeline, MB (Macro-Block) level pipeline, MB level parallel, etc., are adopted. Input and output buffers for the inverse quantiser and predictor are shared by the decoding engines for H.264, AVS and MPEG2, therefore effectively reducing the implementation overhead. Simulation shows that decoding process consumes 512, 435 and 438 clock cycles per MB in H.264, AVS and MPEG2, respectively. Owing to the proposed techniques, the video decoder can support H.264 HP (High Profile) 1920 × 1088@30fps (frame per second) streams, AVS JP (Jizhun Profile) 1920 × 1088@41fps streams and MPEG2 MP (Main Profile) 1920 × 1088@39fps streams when exploiting a 200 MHz working frequency.
A MPEG-4 encoder based on TMS320C6416
NASA Astrophysics Data System (ADS)
Li, Gui-ju; Liu, Wei-ning
2013-08-01
Engineering applications and products need to achieve real-time video encoding on DSPs, but the high computational complexity and huge amount of data require a system with high data throughput. In this paper, a real-time MPEG-4 video encoder is designed based on the TMS320C6416 platform. The kernel is the TMS320C6416T DSP, with an FPGA chip handling the organization and management of the video data and controlling the flow of input and output data. The encoded stream is output using the synchronous serial port. The system has a clock frequency of 1 GHz and a processing capacity of up to 8000 MIPS when running at full speed. Because an MPEG-4 video encoder ported directly to the DSP platform has low coding efficiency, the program structure, data structures and algorithms have to be improved according to the characteristics of the TMS320C6416T. First, the image storage architecture is designed by balancing computation cost, storage space and EDMA read time: several buffers are opened in memory, each caching 16 lines of the video data to be encoded, the reconstructed image, and the reference image including the search range. By using the variable alignment mode of the DSP, modifying the definition of structure variables, and replacing the larger look-up tables with directly computed arrays, memory space is saved. After this program structure optimization, the program code, all variables, the buffers and the interpolated image including the search range can be placed in memory. Then, the time-consuming processing modules and frequently called functions are rewritten in the parallel assembly language of the TMS320C6416T, which increases the running speed. In addition, the motion estimation algorithm is improved by using a cross-hexagon search algorithm, which increases the search speed noticeably. Finally, the execution time, signal-to-noise ratio and compression ratio for a real-time image acquisition sequence are given. The experimental results show that the encoder designed in this paper can accomplish real-time encoding of 768×576, 25 frames per second grayscale video at a code rate of 1.5 Mbit per second.
NASA Technical Reports Server (NTRS)
Ivancic, William D.
1998-01-01
Various issues associated with satellite/terrestrial end-to-end communication interoperability are presented in viewgraph form. Specific topics include: 1) Quality of service; 2) ATM performance characteristics; 3) MPEG-2 transport stream mapping to AAL-5; 4) Observation and discussion of compressed video tests over ATM; 5) Digital video over satellites status; 6) Satellite link configurations; 7) MPEG-2 over ATM with binomial errors; 8) MPEG-2 over ATM channel characteristics; 9) MPEG-2 over ATM over emulated satellites; 10) MPEG-2 transport stream with errors; and 11) a dual decoder test.
Telesign: a videophone system for sign language distant communication
NASA Astrophysics Data System (ADS)
Mozelle, Gerard; Preteux, Francoise J.; Viallet, Jean-Emmanuel
1998-09-01
This paper presents a low bit rate videophone system for deaf people communicating by means of sign language. Classic video conferencing systems have focused on head and shoulders sequences which are not well-suited for sign language video transmission since hearing impaired people also use their hands and arms to communicate. To address the above-mentioned functionality, we have developed a two-step content-based video coding system based on: (1) A segmentation step. Four or five video objects (VO) are extracted using a cooperative approach between color-based and morphological segmentation. (2) VO coding are achieved by using a standardized MPEG-4 video toolbox. Results of encoded sign language video sequences, presented for three target bit rates (32 kbits/s, 48 kbits/s and 64 kbits/s), demonstrate the efficiency of the approach presented in this paper.
Psychovisual masks and intelligent streaming RTP techniques for the MPEG-4 standard
NASA Astrophysics Data System (ADS)
Mecocci, Alessandro; Falconi, Francesco
2003-06-01
In today's multimedia audio-video communication systems, data compression plays a fundamental role by reducing bandwidth waste and the costs of infrastructure and equipment. Among the different compression standards, MPEG-4 is becoming more and more accepted and widespread. Even though one of the fundamental aspects of this standard is the possibility of coding video objects separately (i.e. separating moving objects from the background and adapting the coding strategy to the video content), currently implemented codecs work only at the full-frame level. In this way, many advantages of the flexible MPEG-4 syntax are missed. This lack is due both to the difficulties in properly segmenting moving objects in real scenes (featuring arbitrary motion of the objects and of the acquisition sensor), and to the current use of these codecs, which are mainly oriented towards the market of DVD backups (a full-frame approach is enough for these applications). In this paper we propose a codec for MPEG-4 real-time object streaming that codes the moving objects and the scene background separately. The proposed codec is capable of adapting its strategy during the transmission, by analysing the video currently transmitted and setting the coder parameters and modalities accordingly. For example, the background can be transmitted as a whole or by dividing it into "slightly detailed" and "highly detailed" zones that are coded in different ways to reduce the bit-rate while preserving the perceived quality. The coder can automatically switch in real time from one modality to the other during the transmission, depending on the current video content. Psychovisual masks and other video-content-based measurements have been used as inputs for a Self Learning Intelligent Controller (SLIC) that changes the parameters and the transmission modalities. The current implementation is based on the ISO 14496 standard code that allows Video Object (VO) transmission (other open-source codes such as DivX, Xvid, and Cisco's MPEG-4IP have been analyzed but, as of today, they do not support VO). The original code has been deeply modified to integrate the SLIC and to adapt it for real-time streaming. A custom RTP (Real Time Protocol) has been defined and a client-server application has been developed. The viewer can decode and demultiplex the stream in real time, while adapting to the changing modalities adopted by the server according to the current video content. The proposed codec works as follows: the image background is separated by means of a segmentation module and is transmitted by means of a wavelet compression scheme similar to that used in JPEG2000. The VOs are coded separately and multiplexed with the background stream. At the receiver the stream is demultiplexed to obtain the background and the VOs, which are subsequently pasted together. The final quality depends on many factors, in particular: the quantization parameters, the Group Of Video Objects (GOV) length, the GOV structure (i.e. the number of I-P-B VOPs), and the search area for motion compensation. These factors are strongly related to the following measurement parameters (defined during the development): the Objects Apparent Size (OAS) in the scene, the Video Object Incidence factor (VOI), and the temporal correlation (measured through the Normalized Mean SAD, NMSAD). The SLIC module analyzes the currently transmitted video and selects the most appropriate settings by choosing from a predefined set of transmission modalities.
For example, in the case of a highly temporally correlated sequence, the number of B-VOPs is increased to improve the compression ratio. The strategy for selecting the number of B-VOPs turns out to be very different from those reported in the literature for B-frames (adopted for MPEG-1 and MPEG-2), due to the different behaviour of the temporal correlation when limited only to moving objects. The SLIC module also decides how to transmit the background. In our implementation we adopted the Visual Brain theory, i.e. the study of what the "psychic eye" can get from a scene. According to this theory, a Psychomask Image Analysis (PIA) module has been developed to extract the visually homogeneous regions of the background. The PIA module produces two complementary masks, one for the visually low-variance zones and one for the highly variable zones; these zones are compressed with different strategies and encoded into two multiplexed streams. Practical experiments showed that the separate coding is advantageous only if the low-variance zones exceed 50% of the whole background area (due to the overhead of transmitting the zone masks). The SLIC module takes care of deciding the appropriate transmission modality by analyzing the results produced by the PIA module. The main features of this codec are: low bit-rate, good image quality and coding speed. The current implementation runs in real time on standard PC platforms, the major limitation being the fixed position of the acquisition sensor. This limitation is due to the difficulties in separating moving objects from the background when the acquisition sensor moves. Our current real-time segmentation module does not produce suitable results if the acquisition sensor moves (only slight oscillatory movements are tolerated). In any case, the system is particularly suitable for tele-surveillance applications at low bit-rates, where the camera is usually fixed or alternates among some predetermined positions (our segmentation module is capable of accurately separating moving objects from the static background when the acquisition sensor stops, even if different scenes are seen as a result of the sensor displacements). Moreover, the proposed architecture is general, in the sense that when real-time, robust segmentation systems (capable of separating objects in real time from the background while the sensor itself is moving) become available, they can easily be integrated while leaving the rest of the system unchanged. Experimental results for real sequences for traffic monitoring and for people tracking and safety control are reported and discussed in depth in the paper. The whole system has been implemented in standard ANSI C code and currently runs on standard PCs under the Microsoft Windows operating system (Windows 2000 Pro and Windows XP).
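A sketch of the NMSAD-style temporal-correlation measurement the SLIC controller relies on, with a hypothetical threshold rule for choosing the number of B-VOPs per GOV; the actual thresholds and mask handling are not given in the paper text.

```python
# Sketch of a normalized mean SAD (NMSAD) measure restricted to the moving
# object, plus a hypothetical rule mapping it to a B-VOP count per GOV.
import numpy as np

def nmsad(frame_a, frame_b, mask=None):
    """Mean absolute difference normalised to 0..1, optionally restricted to
    the moving-object mask (so only video-object pixels are compared)."""
    diff = np.abs(frame_a.astype(np.float64) - frame_b.astype(np.float64))
    if mask is not None:
        diff = diff[mask]
    return diff.mean() / 255.0

def b_vops_per_gov(nmsad_score):
    """Hypothetical mapping: lower NMSAD (higher correlation) -> more B-VOPs."""
    if nmsad_score < 0.02:
        return 3
    if nmsad_score < 0.06:
        return 2
    if nmsad_score < 0.12:
        return 1
    return 0

prev = np.random.randint(0, 256, (144, 176), np.uint8)
cur = prev.copy()
obj = (slice(40, 80), slice(60, 120))                        # moving-object region
cur[obj] = np.clip(cur[obj].astype(np.int16) + 4, 0, 255).astype(np.uint8)
mask = np.zeros(prev.shape, bool)
mask[obj] = True
score = nmsad(cur, prev, mask)
print(round(score, 3), "-> B-VOPs per GOV:", b_vops_per_gov(score))
```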
HD-DVD: the next consumer electronics revolution?
NASA Astrophysics Data System (ADS)
Topiwala, Pankaj N.
2003-11-01
The DVD is emerging as one of the world's favorite consumer electronics products, rapidly replacing analog videotape in the US and many other markets at prodigious rates. It is capable of offering a full feature-length, standard-definition movie in crisp rendition on TV. TV technology is itself in the midst of switching from analog to digital, with high definition being the main draw. In fact, the US government has been advocating the switch-over to digital TV, with both carrot and stick approaches, for nearly two decades, with only modest results--about 2% penetration. Under FCC herding, broadcasters are falling into the digital line--slowly, and sans profit. Meanwhile, delivery of HD content on portable media would be a great solution. Indeed, a new disk technology based on blue lasers is coming; but its widespread adoption may yet be four to five years away. But a promising new video codec--H.264/MPEG-4 AVC, the latest coding standard jointly developed by the Video Coding Experts Group (VCEG) of ITU-T and the Moving Picture Experts Group (MPEG) of ISO/IEC--just might be the missing link. It offers substantial coding gains over MPEG-2, used in today's DVDs. With H.264, it appears possible to put HD movies on today's red-laser DVDs. Since consumers love DVDs, and HD--when they can see it--can H.264 and HD-DVD ignite a new revolution, now? It may have a huge impact on (H)DTV adoption rates.
NASA Astrophysics Data System (ADS)
Bross, Benjamin; Alvarez-Mesa, Mauricio; George, Valeri; Chi, Chi Ching; Mayer, Tobias; Juurlink, Ben; Schierl, Thomas
2013-09-01
The new High Efficiency Video Coding Standard (HEVC) was finalized in January 2013. Compared to its predecessor H.264 / MPEG4-AVC, this new international standard is able to reduce the bitrate by 50% for the same subjective video quality. This paper investigates decoder optimizations that are needed to achieve HEVC real-time software decoding on a mobile processor. It is shown that HEVC real-time decoding up to high definition video is feasible using instruction extensions of the processor while decoding 4K ultra high definition video in real-time requires additional parallel processing. For parallel processing, a picture-level parallel approach has been chosen because it is generic and does not require bitstreams with special indication.
Measuring perceived video quality of MPEG enhancement by people with impaired vision
Fullerton, Matthew; Woods, Russell L.; Vera-Diaz, Fuensanta A.; Peli, Eli
2007-01-01
We used a new method to measure the perceived quality of contrast-enhanced motion video. Patients with impaired vision (n = 24) and normally-sighted subjects (n = 6) adjusted the level of MPEG-based enhancement of 8 videos (4 minutes each) drawn from 4 categories. They selected the level of enhancement that provided the preferred view of the videos, using a reducing-step-size staircase procedure. Most patients made consistent selections of the preferred level of enhancement, indicating an appreciation of and a perceived benefit from the MPEG-based enhancement. The selections varied between patients and were correlated with letter contrast sensitivity, but the selections were not affected by training, experience or video category. We measured just noticeable differences (JNDs) directly for videos, and mapped the image manipulation (enhancement in our case) onto an approximately linear perceptual space. These tools and approaches will be of value in other evaluations of the image quality of motion video manipulations. PMID:18059909
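As a rough illustration of a reducing-step-size staircase of the kind used to select a preferred enhancement level, the following Python sketch halves the step at every reversal of the observer's response. The level range, starting point, and the `wants_more` callback (standing in for the subject's judgement) are hypothetical; the paper's exact procedure and parameters are not reproduced here.

```python
def staircase_preferred_level(wants_more, start=8.0, step=4.0, min_step=0.5,
                              lo=0.0, hi=16.0, max_trials=50):
    """Generic reducing-step-size staircase (not the authors' exact protocol).

    `wants_more(level)` returns True if the observer prefers more enhancement
    than the currently shown level; the step is halved at each reversal.
    """
    level, last_dir = start, 0
    for _ in range(max_trials):
        if step < min_step:
            break
        direction = 1 if wants_more(level) else -1
        if last_dir and direction != last_dir:
            step /= 2.0                       # reversal: shrink the step
        level = min(hi, max(lo, level + direction * step))
        last_dir = direction
    return level

# Example with a simulated observer whose preferred level is near 10:
print(staircase_preferred_level(lambda lvl: lvl < 10.0))
```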
Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Cavoretto, Dario; Celeste, Fabrizio; Muratori, Manuela; Guazzi, Maurizio D
2004-01-01
Tele-echocardiography is not widely used because of lengthy transmission times when using standard Motion Picture Experts Group (MPEG)-2 lossy compression algorithms, unless expensive high-bandwidth lines are used. We sought to validate the newer MPEG-4 algorithms to allow further reduction in echocardiographic motion video file size. Four cardiologists expert in echocardiography blindly read 165 randomized uncompressed and compressed 2D and color Doppler normal and pathologic motion images. One Digital Video and 3 MPEG-4 compression algorithms were tested, the latter at 3 decreasing compression quality levels (100%, 65% and 40%). Mean diagnostic and image quality scores were computed for each file and compared across the 3 compression levels using uncompressed files as controls. File sizes decreased from an uncompressed range of 12-83 MB to an MPEG-4 range of 0.03-2.3 MB. All algorithms showed mean scores that were not significantly different from the uncompressed source, except the MPEG-4 DivX algorithm at the highest selected compression (40%, p=.002). These data support the use of MPEG-4 compression to reduce echocardiographic motion image size for transmission purposes, allowing cost reduction through use of low-bandwidth lines.
NASA Astrophysics Data System (ADS)
Jo, Hyunho; Sim, Donggyu
2014-06-01
We present a bitstream decoding processor for entropy decoding of variable length coding-based multiformat videos. Since most of the computational complexity of entropy decoders comes from bitstream accesses and table look-up process, the developed bitstream processing unit (BsPU) has several designated instructions to access bitstreams and to minimize branch operations in the table look-up process. In addition, the instruction for bitstream access has the capability to remove emulation prevention bytes (EPBs) of H.264/AVC without initial delay, repeated memory accesses, and additional buffer. Experimental results show that the proposed method for EPB removal achieves a speed-up of 1.23 times compared to the conventional EPB removal method. In addition, the BsPU achieves speed-ups of 5.6 and 3.5 times in entropy decoding of H.264/AVC and MPEG-4 Visual bitstreams, respectively, compared to an existing processor without designated instructions and a new table mapping algorithm. The BsPU is implemented on a Xilinx Virtex5 LX330 field-programmable gate array. The MPEG-4 Visual (ASP, Level 5) and H.264/AVC (Main Profile, Level 4) are processed using the developed BsPU with a core clock speed of under 250 MHz in real time.
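The emulation-prevention-byte (EPB) handling mentioned above follows the H.264/AVC rule that a 0x03 byte inserted after two consecutive zero bytes must be discarded when converting the byte stream back to the raw bit-stream payload. The Python sketch below shows that rule in software form; it is not the BsPU's designated hardware instruction, only an illustration of the operation it accelerates.

```python
def remove_emulation_prevention_bytes(nal_payload: bytes) -> bytes:
    """Strip H.264/AVC emulation prevention bytes (0x03) from a NAL payload.

    A 0x03 is inserted after every pair of zero bytes during encoding to
    avoid emulating a start code; removal drops any 0x03 that directly
    follows two zero bytes and precedes a byte <= 0x03.
    """
    out = bytearray()
    zero_run = 0
    i = 0
    while i < len(nal_payload):
        b = nal_payload[i]
        if (zero_run >= 2 and b == 0x03
                and i + 1 < len(nal_payload) and nal_payload[i + 1] <= 0x03):
            zero_run = 0          # skip the emulation prevention byte
            i += 1
            continue
        out.append(b)
        zero_run = zero_run + 1 if b == 0x00 else 0
        i += 1
    return bytes(out)

# Example: the 0x03 between the zeros and the 0x01 is removed.
assert remove_emulation_prevention_bytes(bytes([0x00, 0x00, 0x03, 0x01])) == b"\x00\x00\x01"
```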
Accelerating a MPEG-4 video decoder through custom software/hardware co-design
NASA Astrophysics Data System (ADS)
Díaz, Jorge L.; Barreto, Dacil; García, Luz; Marrero, Gustavo; Carballo, Pedro P.; Núñez, Antonio
2007-05-01
In this paper we present a novel methodology to accelerate an MPEG-4 video decoder using software/hardware co-design for wireless DAB/DMB networks. Software support includes the services provided by the embedded kernel μC/OS-II and the application tasks mapped to software. Hardware support includes several custom co-processors and a communication architecture with bridges to the main system bus and to a dual-port SRAM. Synchronization among tasks is achieved at two levels, by a hardware protocol and by kernel-level scheduling services. Our reference application is an MPEG-4 video decoder composed of several software functions and written using a special C++ library named CASSE. Profiling and design space exploration techniques were previously applied to the Advanced Simple Profile (ASP) MPEG-4 decoder to determine the best HW/SW partition developed here. This research is part of the ARTEMI project; its main goal is the establishment of methodologies for the design of real-time complex digital systems using programmable logic devices with embedded microprocessors as the target technology, with the design of multimedia systems for broadcasting networks as the reference application.
Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Celeste, Fabrizio; Gentile, Francesco; Mantero, Antonio; Montericcio, Vincenzo; Muratori, Manuela
2007-05-01
Large files produced by standard compression algorithms slow down the spread of digital and tele-echocardiography. In a multicenter study, we validated high-grade compression of echocardiographic video with the new Motion Picture Experts Group (MPEG)-4 algorithms. Seven expert cardiologists blindly scored (5-point scale) 165 uncompressed and compressed 2-dimensional and color Doppler video clips, based on combined diagnostic content and image quality (uncompressed files as references). One digital video and 3 MPEG-4 algorithms (WM9, MV2, and DivX) were used, the latter at 3 compression levels (0%, 35%, and 60%). Compressed file sizes decreased from 12-83 MB to 0.03-2.3 MB (1:1051-1:26 reduction ratios). Mean SD of differences was 0.81 for intraobserver variability (uncompressed and digital video files). Compared with uncompressed files, only the DivX mean score at 35% (P = .04) and 60% (P = .001) compression was significantly reduced. At subcategory analysis, these differences were still significant for gray-scale and fundamental imaging but not for color or second harmonic tissue imaging. Original image quality, session sequence, compression grade, and bitrate were all independent determinants of mean score. Our study supports use of MPEG-4 algorithms to greatly reduce echocardiographic file sizes, thus facilitating archiving and transmission. Quality evaluation studies should account for the many independent variables that affect image quality grading.
2011 Joint Service Power Expo. Volume 2. Video Files
2011-05-05
Files are in Adobe, AVCHD Video (.m2ts), .avi format, MPEG-4 Movie (.mp4), and Windows Media...Development, ABSL Power Solutions Inc. 12799 - "Utilization of a Ducted Wind Turbine in a Trailer-Mounted Renewable Energy Micro-grid", Mr. Mark Matthews, VP...of Sales and Marketing, WindTamer Corporation and Mr. Adeeb Saba, WindTamer, MPEG-4 Movie (.mp4). SESSION 16 On-Board Vehicle Power (OBVP) 1
NASA Astrophysics Data System (ADS)
Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi
2005-10-01
MPEG-4 treats a scene as a composition of several objects or so-called video object planes (VOPs) that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects with different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties and psycho-visual characteristics, so that the bit budget can be distributed properly among the video objects to improve the perceptual quality of the compressed video. This paper provides an automatic video object priority definition method based on an object-level visual attention model and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of each video object can be obtained automatically instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with the traditional verification model bit allocation and the optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of the objects with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.
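As a simplified stand-in for the priority-driven bit allocation described above, the sketch below distributes a frame-level bit budget across video objects in proportion to their attention-derived priorities. The function, object names, and priority values are hypothetical; the paper's actual optimization framework is considerably more elaborate.

```python
def allocate_bits(total_bits, priorities):
    """Split a frame-level bit budget across video objects in proportion to
    their (automatically computed) priorities; a toy version of the
    optimization framework sketched in the abstract."""
    total_priority = sum(priorities.values())
    return {obj: int(round(total_bits * p / total_priority))
            for obj, p in priorities.items()}

# Example with hypothetical priorities produced by an attention model:
print(allocate_bits(120000, {"face": 0.6, "background": 0.25, "logo": 0.15}))
```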
Feasibility study of a real-time operating system for a multichannel MPEG-4 encoder
NASA Astrophysics Data System (ADS)
Lehtoranta, Olli; Hamalainen, Timo D.
2005-03-01
Feasibility of the DSP/BIOS real-time operating system for a multi-channel MPEG-4 encoder is studied. The performance of two MPEG-4 encoder implementations, with and without the operating system, is compared in terms of encoding frame rate and memory requirements. The effects of task switching frequency and of the number of parallel video channels on the encoding frame rate are measured. The research is carried out on a 200 MHz TMS320C6201 fixed-point DSP using the QCIF (176x144 pixels) video format. Compared to a traditional DSP implementation without an operating system, inclusion of DSP/BIOS reduces total system throughput by only 1 QCIF frame/s. The operating system has a 6 KB data memory overhead and a program memory requirement of 15.7 KB. Hence, the overhead is considered low enough for resource-critical mobile video applications.
Verification testing of the compression performance of the HEVC screen content coding extensions
NASA Astrophysics Data System (ADS)
Sullivan, Gary J.; Baroncini, Vittorio A.; Yu, Haoping; Joshi, Rajan L.; Liu, Shan; Xiu, Xiaoyu; Xu, Jizheng
2017-09-01
This paper reports on verification testing of the coding performance of the screen content coding (SCC) extensions of the High Efficiency Video Coding (HEVC) standard (Rec. ITU-T H.265 | ISO/IEC 23008-2 MPEG-H Part 2). The coding performance of the HEVC screen content model (SCM) reference software is compared with that of the HEVC test model (HM) without the SCC extensions, as well as with the Advanced Video Coding (AVC) joint model (JM) reference software, for both lossy and mathematically lossless compression using All-Intra (AI), Random Access (RA), and Low-delay B (LB) encoding structures and using similar encoding techniques. Video test sequences in 1920×1080 RGB 4:4:4, YCbCr 4:4:4, and YCbCr 4:2:0 colour sampling formats with 8 bits per sample are tested in two categories: "text and graphics with motion" (TGM) and "mixed" content. For lossless coding, the encodings are evaluated in terms of relative bit-rate savings. For lossy compression, subjective testing was conducted at 4 quality levels for each coding case, and the test results are presented through mean opinion score (MOS) curves. The relative coding performance is also evaluated in terms of Bjøntegaard-delta (BD) bit-rate savings for equal PSNR quality. The perceptual tests and objective metric measurements show a very substantial benefit in coding efficiency for the SCC extensions, and provide consistent results with a high degree of confidence. For TGM video, the estimated bit-rate savings ranged from 60-90% relative to the JM and 40-80% relative to the HM, depending on the AI/RA/LB configuration category and colour sampling format.
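The Bjøntegaard-delta (BD) bit-rate figures quoted above are conventionally computed by fitting a cubic of log-rate versus PSNR to each rate-distortion curve and comparing the integrals over the overlapping PSNR range. The Python sketch below follows that standard recipe (four or more rate-PSNR points per curve are assumed); it is a generic implementation, not the exact tool used in the verification test.

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Bjøntegaard-delta bit-rate: average % rate difference at equal PSNR.

    Fits a cubic of log(rate) as a function of PSNR for each curve and
    compares the integrals over the common PSNR interval.
    """
    lr_ref, lr_test = np.log(rates_ref), np.log(rates_test)
    p_ref = np.polyfit(psnr_ref, lr_ref, 3)
    p_test = np.polyfit(psnr_test, lr_test, 3)
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_log_diff = (int_test - int_ref) / (hi - lo)
    return (np.exp(avg_log_diff) - 1.0) * 100.0   # negative means rate savings
```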
NASA Astrophysics Data System (ADS)
Hanhart, Philippe; Řeřábek, Martin; Ebrahimi, Touradj
2015-09-01
This paper reports the details and results of the subjective evaluations conducted at EPFL to evaluate the responses to the Call for Evidence (CfE) for High Dynamic Range (HDR) and Wide Color Gamut (WCG) Video Coding issued by the Moving Picture Experts Group (MPEG). The CfE on HDR/WCG Video Coding aims to explore whether the coding efficiency and/or the functionality of the current version of the HEVC standard can be significantly improved for HDR and WCG content. In total, nine submissions, five for Category 1 and four for Category 3a, were compared to the HEVC Main 10 Profile based Anchor. More specifically, five HDR video contents, compressed at four bit rates by each proponent responding to the CfE, were used in the subjective evaluations. Further, the side-by-side presentation methodology was used for the subjective experiment to discriminate small differences between the Anchor and proponents. Subjective results show that the proposals provide evidence that the coding efficiency can be improved in a statistically noticeable way over the MPEG CfE Anchors in terms of perceived quality within the investigated content. The paper further benchmarks the selected objective metrics based on their correlations with the subjective ratings. It is shown that PSNR-DE1000, HDR-VDP-2, and PSNR-Lx can reliably detect visible differences between the proposed encoding solutions and the current HEVC standard.
Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.
Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen
2014-02-01
The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relative static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are firstly classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with a slightly additional encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
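As a minimal sketch of the ideas behind BMAP, the following Python code maintains a running-average background model and classifies each block as background, foreground, or hybrid from the fraction of pixels deviating from that model. The modeling method, block size, and thresholds are assumptions for illustration; they are not the authors' actual algorithm.

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background model (a simplification of the background
    modeling step described in the abstract)."""
    return (1.0 - alpha) * background + alpha * frame.astype(float)

def classify_blocks(frame, background, block=16, fg_thresh=10.0):
    """Label each block from the fraction of pixels differing from the model:
    'background' blocks would use background reference prediction (BRP),
    'hybrid' blocks background difference prediction (BDP)."""
    h, w = frame.shape
    labels = {}
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            diff = np.abs(frame[y:y + block, x:x + block].astype(float)
                          - background[y:y + block, x:x + block])
            ratio = float(np.mean(diff > fg_thresh))
            if ratio < 0.05:
                labels[(y, x)] = "background"
            elif ratio > 0.95:
                labels[(y, x)] = "foreground"
            else:
                labels[(y, x)] = "hybrid"
    return labels
```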
Digital video technologies and their network requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. P. Tsang; H. Y. Chen; J. M. Brandt
1999-11-01
Coded digital video signals are considered to be one of the most difficult data types to transport due to their real-time requirements and high bit rate variability. In this study, the authors discuss the coding mechanisms incorporated by the major compression standards bodies, i.e., JPEG and MPEG, as well as more advanced coding mechanisms such as wavelet and fractal techniques. The relationship between the applications which use these coding schemes and their network requirements is the major focus of this study. Specifically, the authors relate network latency, channel transmission reliability, random access speed, buffering and network bandwidth with the various coding techniques as a function of the applications which use them. Such applications include High-Definition Television, Video Conferencing, Computer-Supported Collaborative Work (CSCW), and Medical Imaging.
A motion compensation technique using sliced blocks and its application to hybrid video coding
NASA Astrophysics Data System (ADS)
Kondo, Satoshi; Sasai, Hisao
2005-07-01
This paper proposes a new motion compensation method using "sliced blocks" in DCT-based hybrid video coding. In H.264/MPEG-4 Advanced Video Coding, a brand-new international video coding standard, motion compensation can be performed by splitting macroblocks into multiple square or rectangular regions. In the proposed method, on the other hand, macroblocks or sub-macroblocks are divided into two regions (sliced blocks) by an arbitrary line segment. The result is that the shapes of the segmented regions are not limited to squares or rectangles, allowing the shapes of the segmented regions to better match the boundaries between moving objects. Thus, the proposed method can improve the performance of the motion compensation. In addition, adaptive prediction of the shape according to the region shape of the surrounding macroblocks can reduce the overhead needed to describe shape information in the bitstream. The proposed method also has the advantage that conventional coding techniques such as mode decision using rate-distortion optimization can be utilized, since coding processes such as frequency transform and quantization are performed on a macroblock basis, similar to the conventional coding methods. The proposed method is implemented in an H.264-based P-picture codec and an improvement in bit rate of 5% is confirmed in comparison with H.264.
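The sliced-block idea can be illustrated by generating the two region masks induced by an arbitrary line segment across a 16x16 macroblock, as in the Python sketch below; each region would then receive its own motion vector. The segment endpoints and helper name are illustrative assumptions, not the paper's signalling scheme.

```python
import numpy as np

def sliced_block_masks(size=16, p0=(2.0, 0.0), p1=(13.0, 15.0)):
    """Split a size x size block into two regions by the line through p0, p1.

    The sign of the cross product tells on which side of the line each pixel
    centre lies; the two boolean masks select the two sliced regions.
    """
    ys, xs = np.mgrid[0:size, 0:size]
    (x0, y0), (x1, y1) = p0, p1
    side = (x1 - x0) * (ys - y0) - (y1 - y0) * (xs - x0)
    region_a = side >= 0
    region_b = ~region_a
    return region_a, region_b

# Example: count how many pixels fall in each sliced region of a macroblock.
a, b = sliced_block_masks()
print(int(a.sum()), int(b.sum()))
```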
Evaluation of architectures for an ASP MPEG-4 decoder using a system-level design methodology
NASA Astrophysics Data System (ADS)
Garcia, Luz; Reyes, Victor; Barreto, Dacil; Marrero, Gustavo; Bautista, Tomas; Nunez, Antonio
2005-06-01
Trends in multimedia consumer electronics, digital video and audio, aim to reach users through low-cost mobile devices connected to data broadcasting networks with limited bandwidth. An emergent broadcasting network is the digital audio broadcasting network (DAB) which provides CD quality audio transmission together with robustness and efficiency techniques to allow good quality reception in motion conditions. This paper focuses on the system-level evaluation of different architectural options to allow low bandwidth digital video reception over DAB, based on video compression techniques. Profiling and design space exploration techniques are applied over the ASP MPEG-4 decoder in order to find out the best HW/SW partition given the application and platform constraints. An innovative SystemC-based system-level design tool, called CASSE, is being used for modelling, exploration and evaluation of different ASP MPEG-4 decoder HW/SW partitions. System-level trade offs and quantitative data derived from this analysis are also presented in this work.
Enabling MPEG-2 video playback in embedded systems through improved data cache efficiency
NASA Astrophysics Data System (ADS)
Soderquist, Peter; Leeser, Miriam E.
1999-01-01
Digital video decoding, enabled by the MPEG-2 Video standard, is an important future application for embedded systems, particularly PDAs and other information appliances. Many such systems require portability and wireless communication capabilities, and thus face severe limitations in size and power consumption. This places a premium on integration and efficiency, and favors software solutions for video functionality over specialized hardware. The processors in most embedded systems currently lack the computational power needed to perform video decoding, but a related and equally important problem is the required data bandwidth, and the need to cost-effectively ensure adequate data supply. MPEG data sets are very large, and generate significant amounts of excess memory traffic for standard data caches, up to 100 times the amount required for decoding. Meanwhile, cost and power limitations restrict cache sizes in embedded systems. Some systems, including many media processors, eliminate caches in favor of memories under direct, painstaking software control in the manner of digital signal processors. Yet MPEG data has locality which caches can exploit if properly optimized, providing fast, flexible, and automatic data supply. We propose a set of enhancements which target the specific needs of the heterogeneous data types within the MPEG decoder working set. These optimizations significantly improve the efficiency of small caches, reducing cache-memory traffic by almost 70 percent, and can make an enhanced 4 KB cache perform better than a standard 1 MB cache. This performance improvement can enable high-resolution, full frame rate video playback in cheaper, smaller systems than would otherwise be possible.
Comparison of MPEG digital video with super VHS tape for diagnostic echocardiographic readings
NASA Technical Reports Server (NTRS)
Soble, J. S.; Yurow, G.; Brar, R.; Stamos, T.; Neumann, A.; Garcia, M.; Stoddard, M. F.; Cherian, P. K.; Bhamb, B.; Thomas, J. D.
1998-01-01
BACKGROUND: Digital recording of echocardiographic studies is on the clinical horizon. However, full digital capture of complete echocardiographic studies in traditional video format is impractical, given current storage capacity and network bandwidth. To overcome these constraints, we evaluated the diagnostic image quality of digital video by using MPEG (Motion Picture Experts Group) compression. METHODS AND RESULTS: Fifty-eight complete, consecutive studies were recorded simultaneously with the use of MPEG-1 and sVHS videotape. Each matched MPEG and sVHS study pair was reviewed by two from a total of six readers, and findings were recorded with the use of a detailed, computerized reporting tool. Intrareader and interreader discrepancies were characterized as major or minor and analyzed in total and for specific subgroups of findings (left and right ventricular parameters, valvular insufficiency, and left ventricular regional wall motion). Intrareader discrepancies were reviewed by a consensus panel for agreement with either MPEG or sVHS findings. There was an exact concordance between MPEG and sVHS readings in 83% of findings. The majority of discrepancies were minor, with major discrepancies in only 2.7% of findings. There was no difference in the rate of consensus panel agreement with MPEG or sVHS for instances of intrareader discrepancy, either in total or for any subgroup of findings. Interreader discrepancy rates were nearly identical for both MPEG and sVHS. CONCLUSIONS: MPEG-1 digital video is equivalent to sVHS videotape for diagnostic echocardiography. MPEG increases the range of practical options for digital echocardiography and offers, for the first time, the advantages of digital recording in a familiar video format.
Ho, B T; Tsai, M J; Wei, J; Ma, M; Saipetch, P
1996-01-01
A new method of video compression for angiographic images has been developed to achieve a high compression ratio (~20:1) while eliminating block artifacts, which lead to loss of diagnostic accuracy. This method adopts the Motion Picture Experts Group's (MPEG's) motion compensated prediction to take advantage of frame-to-frame correlation. However, in contrast to MPEG, the error images arising from mismatches in the motion estimation are encoded by the discrete wavelet transform (DWT) rather than the block discrete cosine transform (DCT). Furthermore, the authors developed a classification scheme which labels each block in an image as intra, error, or background type and encodes it accordingly. This hybrid coding can significantly improve the compression efficiency in certain cases. This method can be generalized for any dynamic image sequence application sensitive to block artifacts.
Experiments in MPEG-4 content authoring, browsing, and streaming
NASA Astrophysics Data System (ADS)
Puri, Atul; Schmidt, Robert L.; Basso, Andrea; Civanlar, Mehmet R.
2000-12-01
In this paper, within the context of the MPEG-4 standard we report on preliminary experiments in three areas -- authoring of MPEG-4 content, a player/browser for MPEG-4 content, and streaming of MPEG-4 content. MPEG-4 is a new standard for coding of audiovisual objects; the core of MPEG-4 standard is complete while amendments are in various stages of completion. MPEG-4 addresses compression of audio and visual objects, their integration by scene description, and interactivity of users with such objects. MPEG-4 scene description is based on VRML like language for 3D scenes, extended to 2D scenes, and supports integration of 2D and 3D scenes. This scene description language is called BIFS. First, we introduce the basic concepts behind BIFS and then show with an example, textual authoring of different components needed to describe an audiovisual scene in BIFS; the textual BIFS is then saved as compressed binary file/s for storage or transmission. Then, we discuss a high level design of an MPEG-4 player/browser that uses the main components from authoring such as encoded BIFS stream, media files it refers to, and multiplexed object descriptor stream to play an MPEG-4 scene. We also discuss our extensions to such a player/browser. Finally, we present our work in streaming of MPEG-4 -- the payload format, modification to client MPEG-4 player/browser, server-side infrastructure and example content used in our MPEG-4 streaming experiments.
3D video coding: an overview of present and upcoming standards
NASA Astrophysics Data System (ADS)
Merkle, Philipp; Müller, Karsten; Wiegand, Thomas
2010-07-01
An overview of existing and upcoming 3D video coding standards is given. Various different 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.
A complexity-scalable software-based MPEG-2 video encoder.
Chen, Guo-bin; Lu, Xin-ning; Wang, Xing-guo; Liu, Ji-lin
2004-05-01
With the development of general-purpose processors (GPP) and video signal processing algorithms, it is possible to implement a software-based real-time video encoder on GPP, and its low cost and easy upgrade attract developers' interests to transfer video encoding from specialized hardware to more flexible software. In this paper, the encoding structure is set up first to support complexity scalability; then a lot of high performance algorithms are used on the key time-consuming modules in coding process; finally, at programming level, processor characteristics are considered to improve data access efficiency and processing parallelism. Other programming methods such as lookup table are adopted to reduce the computational complexity. Simulation results showed that these ideas could not only improve the global performance of video coding, but also provide great flexibility in complexity regulation.
MPEG-4 AVC saliency map computation
NASA Astrophysics Data System (ADS)
Ammar, M.; Mitrea, M.; Hasnaoui, M.
2014-02-01
A saliency map provides information about the regions inside some visual content (image, video, ...) at which a human observer will spontaneously look. For saliency map computation, current research studies consider the uncompressed (pixel) representation of the visual content and extract various types of information (intensity, color, orientation, motion energy) which are then fused. This paper goes one step further and computes the saliency map directly from the MPEG-4 AVC stream syntax elements with minimal decoding operations. In this respect, an a-priori in-depth study of the MPEG-4 AVC syntax elements is first carried out so as to identify the entities attracting visual attention. Secondly, the MPEG-4 AVC reference software is extended with software tools allowing the parsing of these elements and their subsequent usage in objective benchmarking experiments. This way, it is demonstrated that an MPEG-4 saliency map can be given by a combination of static saliency and motion maps. This saliency map is experimentally validated under a robust watermarking framework. When included in an m-QIM (multiple symbols Quantization Index Modulation) insertion method, average PSNR gains of 2.43 dB, 2.15 dB, and 2.37 dB are obtained for data payloads of 10, 20 and 30 watermarked blocks per I frame, i.e. about 30, 60, and 90 bits/second, respectively. These quantitative results are obtained by processing 2 hours of heterogeneous video content.
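A minimal sketch of the final fusion step, combining a static saliency map and a motion saliency map into one, is given below in Python. The per-map normalization and the equal weights are assumptions for illustration; the paper derives the actual maps and their combination from the MPEG-4 AVC syntax elements.

```python
import numpy as np

def fuse_saliency(static_map, motion_map, w_static=0.5, w_motion=0.5):
    """Weighted fusion of a static and a motion saliency map (illustrative
    weights; the paper determines the combination experimentally)."""
    def normalize(m):
        m = m.astype(float)
        span = m.max() - m.min()
        return (m - m.min()) / span if span > 0 else np.zeros_like(m, dtype=float)
    return w_static * normalize(static_map) + w_motion * normalize(motion_map)
```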
Compression performance of HEVC and its format range and screen content coding extensions
NASA Astrophysics Data System (ADS)
Li, Bin; Xu, Jizheng; Sullivan, Gary J.
2015-09-01
This paper presents a comparison-based test of the objective compression performance of the High Efficiency Video Coding (HEVC) standard, its format range extensions (RExt), and its draft screen content coding extensions (SCC). The current dominant standard, H.264/MPEG-4 AVC, is used as an anchor reference in the comparison. The conditions used for the comparison tests were designed to reflect relevant application scenarios and to enable a fair comparison to the maximum extent feasible - i.e., using comparable quantization settings, reference frame buffering, intra refresh periods, rate-distortion optimization decision processing, etc. It is noted that such PSNR-based objective comparisons generally provide more conservative estimates of HEVC benefit than are found in subjective studies. The experimental results show that, when compared with H.264/MPEG-4 AVC, HEVC version 1 provides a bit rate savings for equal PSNR of about 23% for all-intra coding, 34% for random access coding, and 38% for low-delay coding. This is consistent with prior studies and the general characterization that HEVC can provide a bit rate savings of about 50% for equal subjective quality for most applications. The HEVC format range extensions provide a similar bit rate savings of about 13-25% for all-intra coding, 28-33% for random access coding, and 32-38% for low-delay coding at different bit rate ranges. For lossy coding of screen content, the HEVC screen content coding extensions achieve a bit rate savings of about 66%, 63%, and 61% for all-intra coding, random access coding, and low-delay coding, respectively. For lossless coding, the corresponding bit rate savings are about 40%, 33%, and 32%, respectively.
Coding tools investigation for next generation video coding based on HEVC
NASA Astrophysics Data System (ADS)
Chen, Jianle; Chen, Ying; Karczewicz, Marta; Li, Xiang; Liu, Hongbin; Zhang, Li; Zhao, Xin
2015-09-01
The new state-of-the-art video coding standard, H.265/HEVC, has been finalized in 2013 and it achieves roughly 50% bit rate saving compared to its predecessor, H.264/MPEG-4 AVC. This paper provides the evidence that there is still potential for further coding efficiency improvements. A brief overview of HEVC is firstly given in the paper. Then, our improvements on each main module of HEVC are presented. For instance, the recursive quadtree block structure is extended to support larger coding unit and transform unit. The motion information prediction scheme is improved by advanced temporal motion vector prediction, which inherits the motion information of each small block within a large block from a temporal reference picture. Cross component prediction with linear prediction model improves intra prediction and overlapped block motion compensation improves the efficiency of inter prediction. Furthermore, coding of both intra and inter prediction residual is improved by adaptive multiple transform technique. Finally, in addition to deblocking filter and SAO, adaptive loop filter is applied to further enhance the reconstructed picture quality. This paper describes above-mentioned techniques in detail and evaluates their coding performance benefits based on the common test condition during HEVC development. The simulation results show that significant performance improvement over HEVC standard can be achieved, especially for the high resolution video materials.
MPEG-1 low-cost encoder solution
NASA Astrophysics Data System (ADS)
Grueger, Klaus; Schirrmeister, Frank; Filor, Lutz; von Reventlow, Christian; Schneider, Ulrich; Mueller, Gerriet; Sefzik, Nicolai; Fiedrich, Sven
1995-02-01
A solution for real-time compression of digital YCRCB video data to an MPEG-1 video data stream has been developed. As an additional option, motion JPEG and video telephone streams (H.261) can be generated. For MPEG-1, up to two bidirectional predicted images are supported. The required computational power for motion estimation and DCT/IDCT, memory size and memory bandwidth have been the main challenges. The design uses fast-page-mode memory accesses and requires only one single 80 ns EDO-DRAM with 256 X 16 organization for video encoding. This can be achieved only by using adequate access and coding strategies. The architecture consists of an input processing and filter unit, a memory interface, a motion estimation unit, a motion compensation unit, a DCT unit, a quantization control, a VLC unit and a bus interface. For using the available memory bandwidth by the processing tasks, a fixed schedule for memory accesses has been applied, that can be interrupted for asynchronous events. The motion estimation unit implements a highly sophisticated hierarchical search strategy based on block matching. The DCT unit uses a separated fast-DCT flowgraph realized by a switchable hardware unit for both DCT and IDCT operation. By appropriate multiplexing, only one multiplier is required for: DCT, quantization, inverse quantization, and IDCT. The VLC unit generates the video-stream up to the video sequence layer and is directly coupled with an intelligent bus-interface. Thus, the assembly of video, audio and system data can easily be performed by the host computer. Having a relatively low complexity and only small requirements for DRAM circuits, the developed solution can be applied to low-cost encoding products for consumer electronics.
Embedded DCT and wavelet methods for fine granular scalable video: analysis and comparison
NASA Astrophysics Data System (ADS)
van der Schaar-Mitrea, Mihaela; Chen, Yingwei; Radha, Hayder
2000-04-01
Video transmission over bandwidth-varying networks is becoming increasingly important due to emerging applications such as streaming of video over the Internet. The fundamental obstacle in designing such systems resides in the varying characteristics of the Internet (i.e. bandwidth variations and packet-loss patterns). In MPEG-4, a new SNR scalability scheme, called Fine-Granular-Scalability (FGS), is currently under standardization; it is able to adapt in real-time (i.e. at transmission time) to Internet bandwidth variations. The FGS framework consists of a non-scalable motion-predicted base-layer and an intra-coded fine-granular scalable enhancement layer. For example, the base layer can be coded using a DCT-based MPEG-4 compliant, highly efficient video compression scheme. Subsequently, the difference between the original and decoded base-layer is computed, and the resulting FGS-residual signal is intra-frame coded with an embedded scalable coder. In order to achieve high coding efficiency when compressing the FGS enhancement layer, it is crucial to analyze the nature and characteristics of residual signals common to the SNR scalability framework (including FGS). In this paper, we present a thorough analysis of SNR residual signals by evaluating their statistical properties, compaction efficiency and frequency characteristics. The signal analysis revealed that the energy compaction of the DCT and wavelet transforms is limited and the frequency characteristics of SNR residual signals decay rather slowly. Moreover, the blockiness artifacts of the low bit-rate coded base-layer result in artificial high frequencies in the residual signal. Subsequently, a variety of wavelet and embedded DCT coding techniques applicable to the FGS framework are evaluated and their results are interpreted based on the identified signal properties. As expected from the theoretical signal analysis, the rate-distortion performances of the embedded wavelet and DCT-based coders are very similar. However, improved results can be obtained for the wavelet coder by deblocking the base-layer prior to the FGS residual computation. Based on the theoretical analysis and our measurements, we can conclude that for an optimal complexity versus coding-efficiency trade-off, only limited wavelet decomposition (e.g. 2 stages) needs to be performed for the FGS-residual signal. Also, it was observed that the good rate-distortion performance of a coding technique for a certain image type (e.g. natural still-images) does not necessarily translate into similarly good performance for signals with different visual characteristics and statistical properties.
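A pixel-domain sketch of the FGS enhancement-layer signal path is given below: the residual is the difference between the original frame and the decoded base layer, and an embedded coder would then transmit its bit-planes from most to least significant. Real FGS operates on DCT coefficients rather than pixels, so this is a simplification for illustration only.

```python
import numpy as np

def fgs_residual(original, decoded_base):
    """SNR-scalability residual: the enhancement layer codes the difference
    between the original frame and the decoded base layer (both uint8 here)."""
    return original.astype(np.int16) - decoded_base.astype(np.int16)

def bitplanes(residual, n_planes=8):
    """Split the absolute residual into bit-planes, most significant first,
    the order in which a fine-granular embedded coder would send them."""
    mag = np.abs(residual).astype(np.uint16)
    return [((mag >> p) & 1).astype(np.uint8) for p in range(n_planes - 1, -1, -1)]
```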
MPEG-7 based video annotation and browsing
NASA Astrophysics Data System (ADS)
Hoeynck, Michael; Auweiler, Thorsten; Wellhausen, Jens
2003-11-01
The huge amount of multimedia data produced worldwide requires annotation in order to enable universal content access and to provide content-based search-and-retrieval functionalities. Since manual video annotation can be time consuming, automatic annotation systems are required. We review recent approaches to content-based indexing and annotation of videos for different kind of sports and describe our approach to automatic annotation of equestrian sports videos. We especially concentrate on MPEG-7 based feature extraction and content description, where we apply different visual descriptors for cut detection. Further, we extract the temporal positions of single obstacles on the course by analyzing MPEG-7 edge information. Having determined single shot positions as well as the visual highlights, the information is jointly stored with meta-textual information in an MPEG-7 description scheme. Based on this information, we generate content summaries which can be utilized in a user-interface in order to provide content-based access to the video stream, but further for media browsing on a streaming server.
VOP memory management in MPEG-4
NASA Astrophysics Data System (ADS)
Vaithianathan, Karthikeyan; Panchanathan, Sethuraman
2001-03-01
MPEG-4 is a multimedia standard that requires Video Object Planes (VOPs). Generation of VOPs for any kind of video sequence is still a challenging problem that largely remains unsolved. Nevertheless, if this problem is treated by imposing certain constraints, solutions for specific application domains can be found. MPEG-4 applications in mobile devices constitute one such domain, where the opposing goals of low power and high throughput are required to be met. Efficient memory management plays a major role in reducing the power consumption. Specifically, efficient memory management for VOPs is difficult because the lifetimes of these objects vary and may overlap. Varying lifetimes of the objects require dynamic memory management, where memory fragmentation is a key problem that needs to be addressed. In general, memory management systems address this problem by following a combination of strategy, policy and mechanism. For MPEG-4 based mobile devices that lack instruction processors, a hardware-based memory management solution is necessary. In MPEG-4 based mobile devices that have a RISC processor, using a real-time operating system (RTOS) for this memory management task is not expected to be efficient, because the strategies and policies used by the RTOS are often tuned for handling memory segments of smaller sizes compared to object sizes. Hence, a memory management scheme specifically tuned for VOPs is important. In this paper, different strategies, policies and mechanisms for memory management are considered and an efficient combination is proposed for VOP memory management, along with a hardware architecture which can handle the proposed combination.
Characterization, adaptive traffic shaping, and multiplexing of real-time MPEG II video
NASA Astrophysics Data System (ADS)
Agrawal, Sanjay; Barry, Charles F.; Binnai, Vinay; Kazovsky, Leonid G.
1997-01-01
We obtain a network traffic model for real-time MPEG-II encoded digital video by analyzing video stream samples from real-time encoders from NUKO Information Systems. The MPEG-II sample streams include a resolution-intensive movie, City of Joy, an action-intensive movie, Aliens, a luminance-intensive (black and white) movie, Road To Utopia, and a chrominance-intensive (color) movie, Dick Tracy. From our analysis we obtain a heuristic model for the encoded video traffic which uses a 15-stage Markov process to model the I, B, P frame sequences within a group of pictures (GOP). A jointly-correlated Gaussian process is used to model the individual frame sizes. Scene change arrivals are modeled according to a gamma process. Simulations show that our MPEG-II traffic model generates I, B, P frame sequences and frame sizes that closely match the sample MPEG-II stream traffic characteristics as they relate to latency and buffer occupancy in network queues. To achieve high multiplexing efficiency we propose a traffic shaping scheme which sets preferred I-frame generation times among a group of encoders so as to minimize the overall variation in total offered traffic while still allowing the individual encoders to react to scene changes. Simulations show that our scheme results in multiplexing gains of up to 10%, enabling us to multiplex twenty 6 Mbps MPEG-II video streams instead of 18 over an ATM/SONET OC3 link without latency or cell loss penalty. This scheme is patent pending.
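The traffic model described above can be caricatured by the Python sketch below: a fixed 15-frame GOP pattern stands in for the 15-stage Markov process, per-frame-type Gaussian draws stand in for the jointly-correlated frame-size process, and gamma-distributed inter-arrivals trigger scene changes. All numerical parameters are invented for illustration; the paper fits them to real encoder traces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame-type size statistics in bits (mean, std dev).
SIZE_STATS = {"I": (4.0e5, 6.0e4), "P": (1.5e5, 4.0e4), "B": (6.0e4, 2.0e4)}
GOP_PATTERN = "IBBPBBPBBPBBPBB"   # 15-frame GOP, one symbol per stage

def generate_gop_sizes(scene_change=False):
    """Draw one GOP worth of frame sizes; a scene change inflates the I frame."""
    sizes = []
    for ftype in GOP_PATTERN:
        mean, std = SIZE_STATS[ftype]
        if scene_change and ftype == "I":
            mean *= 1.5                               # crude scene-change effect
        sizes.append(max(0.0, rng.normal(mean, std)))
    return sizes

def generate_trace(n_gops=100, scene_rate=0.1):
    """Frame-size trace with gamma-distributed scene-change inter-arrivals
    (measured in GOPs), mirroring the structure of the published model."""
    next_change = rng.gamma(shape=2.0, scale=1.0 / scene_rate)
    trace, t = [], 0.0
    for _ in range(n_gops):
        change = t >= next_change
        if change:
            next_change = t + rng.gamma(shape=2.0, scale=1.0 / scene_rate)
        trace.extend(generate_gop_sizes(scene_change=change))
        t += 1.0
    return trace
```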
Impact of MPEG-4 3D mesh coding on watermarking algorithms for polygonal 3D meshes
NASA Astrophysics Data System (ADS)
Funk, Wolfgang
2004-06-01
The MPEG-4 multimedia standard addresses the scene-based composition of audiovisual objects. Natural and synthetic multimedia content can be mixed and transmitted over narrow and broadband communication channels. Synthetic natural hybrid coding (SNHC) within MPEG-4 provides tools for 3D mesh coding (3DMC). We investigate the robustness of two different 3D watermarking algorithms for polygonal meshes with respect to 3DMC. The first algorithm is a blind detection scheme designed for labelling applications that require high bandwidth and low robustness. The second algorithm is a robust non-blind one-bit watermarking scheme intended for copyright protection applications. Both algorithms have been proposed by Benedens. We expect 3DMC to have an impact on the watermarked 3D meshes, as the algorithms used for our simulations work on vertex coordinates to encode the watermark. We use the 3DMC implementation provided with the MPEG-4 reference software and the Princeton Shape Benchmark model database for our simulations. The watermarked models are sent through the 3DMC encoder and decoder, and the watermark decoding process is performed. For each algorithm under consideration we examine the detection properties as a function of the quantization of the vertex coordinates.
Doulamis, A D; Doulamis, N D; Kollias, S D
2003-01-01
Multimedia services, and especially digital video, are expected to be the major traffic component transmitted over communication networks [such as internet protocol (IP)-based networks]. For this reason, traffic characterization and modeling of such services are required for efficient network operation. The generated models can be used as traffic rate predictors during the network operation phase (online traffic modeling), or as video generators for estimating the network resources during the network design phase (offline traffic modeling). In this paper, an adaptable neural-network architecture is proposed covering both cases. The scheme is based on an efficient recursive weight estimation algorithm, which adapts the network response to current conditions. In particular, the algorithm updates the network weights so that 1) the network output, after the adaptation, is approximately equal to current bit rates (current traffic statistics) and 2) a minimal degradation over the obtained network knowledge is provided. It can be shown that the proposed adaptable neural-network architecture simulates a recursive nonlinear autoregressive model (RNAR), similar to the notation used in the linear case. The algorithm presents low computational complexity and high efficiency in tracking traffic rates, in contrast to conventional retraining schemes. Furthermore, for the problem of offline traffic modeling, a novel correlation mechanism is proposed for capturing the burstiness of the actual MPEG video traffic. The performance of the model is evaluated using several real-life MPEG coded video sources of long duration and compared with other linear/nonlinear techniques used for both cases. The results indicate that the proposed adaptable neural-network architecture presents better performance than the other examined techniques.
NASA Astrophysics Data System (ADS)
Lee, Seokhee; Lee, Kiyoung; Kim, Man Bae; Kim, JongWon
2005-11-01
In this paper, we propose a design of a multi-view stereoscopic HD video transmission system based on MPEG-21 Digital Item Adaptation (DIA). It focuses on compatibility and scalability to meet various user preferences and terminal capabilities. There exists a large variety of multi-view 3D HD video types, according to the methods for acquisition, display, and processing. By following the MPEG-21 DIA framework, the multi-view stereoscopic HD video is adapted according to user feedback. A user can be served multi-view stereoscopic video that corresponds with his or her preferences and terminal capabilities. In our preliminary prototype, we verify that the proposed design can support two different types of display device (stereoscopic and auto-stereoscopic) and switching between two available viewpoints.
New scene change control scheme based on pseudoskipped picture
NASA Astrophysics Data System (ADS)
Lee, Youngsun; Lee, Jinwhan; Chang, Hyunsik; Nam, Jae Y.
1997-01-01
A new scene change control scheme which improves the video coding performance for sequences that have many scene-changed pictures is proposed in this paper. Scene-changed pictures, except intra-coded pictures, usually need more bits than normal pictures in order to maintain constant picture quality. The main idea of this paper is how to obtain the extra bits needed to encode scene-changed pictures. We encode the B picture located before a scene-changed picture like a skipped picture; we call such a B picture a pseudo-skipped picture. By generating the pseudo-skipped picture, we can save some bits, and they are added to the originally allocated target bits to encode the scene-changed picture. The simulation results show that the proposed algorithm improves encoding performance by about 0.5 to 2.0 dB of PSNR compared to the MPEG-2 TM5 rate control scheme. In addition, the suggested algorithm is compatible with the MPEG-2 video syntax and the picture repetition is not noticeable.
Application of MPEG-7 descriptors for content-based indexing of sports videos
NASA Astrophysics Data System (ADS)
Hoeynck, Michael; Auweiler, Thorsten; Ohm, Jens-Rainer
2003-06-01
The amount of multimedia data available worldwide is increasing every day. There is a vital need to annotate multimedia data in order to allow universal content access and to provide content-based search-and-retrieval functionalities. Since supervised video annotation can be time consuming, an automatic solution is appreciated. We review recent approaches to content-based indexing and annotation of videos for different kind of sports, and present our application for the automatic annotation of equestrian sports videos. Thereby, we especially concentrate on MPEG-7 based feature extraction and content description. We apply different visual descriptors for cut detection. Further, we extract the temporal positions of single obstacles on the course by analyzing MPEG-7 edge information and taking specific domain knowledge into account. Having determined single shot positions as well as the visual highlights, the information is jointly stored together with additional textual information in an MPEG-7 description scheme. Using this information, we generate content summaries which can be utilized in a user front-end in order to provide content-based access to the video stream, but further content-based queries and navigation on a video-on-demand streaming server.
Layered video transmission over multirate DS-CDMA wireless systems
NASA Astrophysics Data System (ADS)
Kondi, Lisimachos P.; Srinivasan, Deepika; Pados, Dimitris A.; Batalama, Stella N.
2003-05-01
In this paper, we consider the transmission of video over wireless direct-sequence code-division multiple access (DS-CDMA) channels. A layered (scalable) video source codec is used and each layer is transmitted over a different CDMA channel. Spreading codes with different lengths are allowed for each CDMA channel (multirate CDMA). Thus, a different number of chips per bit can be used for the transmission of each scalable layer. For a given fixed energy value per chip and chip rate, the selection of a spreading code length affects the transmitted energy per bit and the bit rate for each scalable layer. An MPEG-4 source encoder is used to provide a two-layer SNR scalable bitstream. Each of the two layers is channel-coded using Rate-Compatible Punctured Convolutional (RCPC) codes. Then, the data are interleaved, spread, carrier-modulated and transmitted over the wireless channel. A multipath Rayleigh fading channel is assumed. At the other end, we assume the presence of an antenna array receiver. After carrier demodulation, multiple-access-interference suppressing despreading is performed using space-time auxiliary vector (AV) filtering. The choice of the AV receiver is dictated by realistic channel fading rates that limit the data record available for receiver adaptation and redesign. Indeed, AV filter short-data-record estimators have been shown to exhibit superior bit-error-rate performance in comparison with LMS, RLS, SMI, or 'multistage nested Wiener' adaptive filter implementations. Our experimental results demonstrate the effectiveness of multirate DS-CDMA systems for wireless video transmission.
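The trade-off stated above between spreading-code length, bit rate, and energy per bit can be written down directly: for a fixed chip rate and energy per chip, a layer spread with N chips per bit gets a bit rate of chip_rate/N and an energy per bit of N times the chip energy. The small Python helper below tabulates this relationship; the function name and example values are assumptions for illustration.

```python
def layer_parameters(chip_rate_hz, energy_per_chip, spreading_lengths):
    """For a fixed chip rate and energy per chip, a longer spreading code gives
    more energy per bit (more protection) but a lower bit rate for that layer."""
    return [{"spreading_length": n,
             "bit_rate_bps": chip_rate_hz / n,
             "energy_per_bit": energy_per_chip * n}
            for n in spreading_lengths]

# Example: a base layer spread with 64 chips/bit and an enhancement layer
# spread with 16 chips/bit (hypothetical values).
for layer in layer_parameters(3.84e6, 1.0, [64, 16]):
    print(layer)
```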
Empirical evaluation of H.265/HEVC-based dynamic adaptive video streaming over HTTP (HEVC-DASH)
NASA Astrophysics Data System (ADS)
Irondi, Iheanyi; Wang, Qi; Grecos, Christos
2014-05-01
Real-time HTTP streaming has gained global popularity for delivering video content over the Internet. In particular, the recent MPEG-DASH (Dynamic Adaptive Streaming over HTTP) standard enables on-demand, live, and adaptive Internet streaming in response to network bandwidth fluctuations. Meanwhile, the emerging new-generation video coding standard, H.265/HEVC (High Efficiency Video Coding), promises to reduce the bandwidth requirement by 50% at the same video quality when compared with the current H.264/AVC standard. However, little existing work has addressed the integration of the DASH and HEVC standards, let alone empirical performance evaluation of such systems. This paper presents an experimental HEVC-DASH system, which is a pull-based adaptive streaming solution that delivers HEVC-coded video content through conventional HTTP servers, where the client switches to its desired quality, resolution or bitrate based on the available network bandwidth. Previous studies of DASH have focused on H.264/AVC, whereas we present an empirical evaluation of the HEVC-DASH system by implementing a real-world test bed, which consists of an Apache HTTP server with GPAC, an MP4Client (GPAC) with an openHEVC-based DASH client, and a NETEM box in the middle emulating different network conditions. We investigate and analyze the performance of HEVC-DASH by exploring the impact of various network conditions such as packet loss, bandwidth and delay on video quality. Furthermore, we compare the Intra and Random Access profiles of HEVC coding with the Intra profile of H.264/AVC when the correspondingly encoded video is streamed with DASH. Finally, we explore the correlation among the quality metrics and network conditions, and empirically establish under which conditions the different codecs can provide satisfactory performance.
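A toy version of the client-side rate adaptation described above (switching representation based on measured bandwidth) is sketched below in Python. The representation list, field names, and safety margin are hypothetical and do not correspond to the actual GPAC/MP4Client logic.

```python
def pick_representation(measured_kbps, representations, safety=0.8):
    """Pull-based rate adaptation sketch: choose the highest-bitrate
    representation that fits within a safety margin of the measured bandwidth.
    `representations` stands in for entries parsed from a DASH MPD."""
    candidates = [r for r in representations
                  if r["bitrate_kbps"] <= safety * measured_kbps]
    if not candidates:
        return min(representations, key=lambda r: r["bitrate_kbps"])
    return max(candidates, key=lambda r: r["bitrate_kbps"])

# Example (hypothetical representation set):
reps = [{"id": "480p", "bitrate_kbps": 1200},
        {"id": "720p", "bitrate_kbps": 3000},
        {"id": "1080p", "bitrate_kbps": 6000}]
print(pick_representation(4500, reps)["id"])   # -> '720p'
```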
77 FR 47080 - Announcement of Requirements and Registration for “Stop Bullying Video Challenge”
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-07
... must upload their video to YouTube as private. To do this, select ``Privacy Settings,'' mark video as private, enter the YouTube username ``stopbullyinggov'' in the box that appears below, then select ``save changes.'' Videos must be uploaded to YouTube in one of the following file formats: WebM, MPEG4, 3GPP, MOV...
Depth assisted compression of full parallax light fields
NASA Astrophysics Data System (ADS)
Graziosi, Danillo B.; Alpaslan, Zahir Y.; El-Ghoroury, Hussein S.
2015-03-01
Full parallax light field displays require high pixel density and huge amounts of data. Compression is a necessary tool used by 3D display systems to cope with the high bandwidth requirements. One of the formats adopted by MPEG for 3D video coding standards is the use of multiple views with associated depth maps. Depth maps enable the coding of a reduced number of views, and are used by compression and synthesis software to reconstruct the light field. However, most of the developed coding and synthesis tools target linearly arranged cameras with small baselines. Here we propose to use the 3D video coding format for full parallax light field coding. We introduce a view selection method inspired by plenoptic sampling followed by transform-based view coding and view synthesis prediction to code residual views. We determine the minimal requirements for view sub-sampling and present the rate-distortion performance of our proposal. We also compare our method with established video compression techniques, such as H.264/AVC, H.264/MVC, and the new 3D video coding algorithm, 3DV-ATM. Our results show that our method not only has an improved rate-distortion performance, it also preserves the structure of the perceived light fields better.
Introduction to study and simulation of low rate video coding schemes
NASA Technical Reports Server (NTRS)
1992-01-01
During this period, simulators for the various HDTV systems proposed to the FCC were developed. These simulators will be tested using test sequences from the MPEG committee, and the results will be extrapolated to HDTV video sequences. Currently, the simulator for the compression aspects of the Advanced Digital Television (ADTV) system has been completed; other HDTV proposals are at various stages of development. A brief overview of the ADTV system is given, and some coding results obtained using the simulator are discussed. These results are compared to those obtained using the CCITT H.261 standard and are evaluated in the context of the CCSDS specifications, and some suggestions are made as to how the ADTV system could be implemented in the NASA network.
Video personalization for usage environment
NASA Astrophysics Data System (ADS)
Tseng, Belle L.; Lin, Ching-Yung; Smith, John R.
2002-07-01
A video personalization and summarization system is designed and implemented incorporating usage environment to dynamically generate a personalized video summary. The personalization system adopts the three-tier server-middleware-client architecture in order to select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. Our semantic metadata is provided through the use of the VideoAnnEx MPEG-7 Video Annotation Tool. When the user initiates a request for content, the client communicates the MPEG-21 usage environment description along with the user query to the middleware. The middleware is powered by the personalization engine and the content adaptation engine. Our personalization engine includes the VideoSue Summarization on Usage Environment engine that selects the optimal set of desired contents according to user preferences. Afterwards, the adaptation engine performs the required transformations and compositions of the selected contents for the specific usage environment using our VideoEd Editing and Composition Tool. Finally, two personalization and summarization systems are demonstrated for the IBM Websphere Portal Server and for the pervasive PDA devices.
Interactive MPEG-4 low-bit-rate speech/audio transmission over the Internet
NASA Astrophysics Data System (ADS)
Liu, Fang; Kim, JongWon; Kuo, C.-C. Jay
1999-11-01
The recently developed MPEG-4 technology enables the coding and transmission of natural and synthetic audio-visual data in the form of objects. In an effort to extend the object-based functionality of MPEG-4 to real-time Internet applications, architectural prototypes of the multiplex layer and transport layer tailored for transmission of MPEG-4 data over IP are under debate in the Internet Engineering Task Force (IETF) and the MPEG-4 Systems Ad Hoc group. In this paper, we present an architecture for an interactive MPEG-4 speech/audio transmission system over the Internet. It utilizes a framework of Real Time Streaming Protocol (RTSP) over Real-time Transport Protocol (RTP) to provide controlled, on-demand delivery of real-time speech/audio data. Based on a client-server model, a couple of low bit-rate bit streams (real-time speech/audio, pre-encoded speech/audio) are multiplexed and transmitted via a single RTP channel to the receiver. The MPEG-4 Scene Description (SD) and Object Descriptor (OD) bit streams are securely sent through the RTSP control channel. Upon reception, an initial MPEG-4 audio-visual scene is constructed after de-multiplexing, decoding of the bit streams, and scene composition. A receiver is allowed to manipulate the initial audio-visual scene presentation locally, or to interactively arrange scene changes by sending requests to the server. A server may also choose to update the client with new streams and a list of contents for user selection.
Performance Evaluation of the NASA/KSC Transmission System
NASA Technical Reports Server (NTRS)
Christensen, Kenneth J.
2000-01-01
NASA-KSC currently uses three bridged 100-Mbps FDDI segments as its backbone for data traffic. The FDDI Transmission System (FTXS) connects the KSC industrial area, the KSC launch complex 39 area, and the Cape Canaveral Air Force Station. The report presents a performance modeling study of the FTXS and the proposed ATM Transmission System (ATXS). The focus of the study is on the performance of MPEG video transmission on these networks. Commercial modeling tools - the CACI Predictor and Comnet tools - were used. In addition, custom software tools were developed to characterize conversation pairs in Sniffer trace (capture) files for use as input to these tools. A baseline study of both non-launch and launch day data traffic on the FTXS is presented. MPEG-1 and MPEG-2 video traffic was characterized and the shaping of it evaluated. It is shown that the characteristics of a video stream have a direct effect on its performance in a network. It is also shown that shaping of video streams is necessary to prevent overflow losses and the resulting poor video quality. The developed models can be used to predict when the existing FTXS will 'run out of room' and to optimize the parameters of ATM links used for transmission of MPEG video. Future work with these models can provide useful input and validation to set-top box projects within the Advanced Networks Development group in NASA-KSC Development Engineering.
Sub-block motion derivation for merge mode in HEVC
NASA Astrophysics Data System (ADS)
Chien, Wei-Jung; Chen, Ying; Chen, Jianle; Zhang, Li; Karczewicz, Marta; Li, Xiang
2016-09-01
The new state-of-the-art video coding standard, H.265/HEVC, was finalized in 2013 and achieves roughly 50% bit rate savings compared to its predecessor, H.264/MPEG-4 AVC. In this paper, two additional merge candidates, an advanced temporal motion vector predictor and a spatial-temporal motion vector predictor, are developed to improve the motion information prediction scheme within the HEVC structure. The proposed method allows each Prediction Unit (PU) to fetch multiple sets of motion information from multiple blocks smaller than the current PU. By splitting a large PU into sub-PUs and filling in motion information for all the sub-PUs of the large PU, the signaling cost of motion information can be reduced. This paper describes the above-mentioned techniques in detail and evaluates their coding performance benefits based on the common test conditions used during HEVC development. Simulation results show that a 2.4% performance improvement over HEVC can be achieved.
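The sub-PU motion fetching that underlies these merge candidates can be pictured with the simplified sketch below; the 4x4 sub-block granularity, the co-located motion field and the fallback vector are illustrative assumptions rather than the normative derivation.

# Illustrative sketch (simplified, not the normative HEVC derivation): split a
# prediction unit into sub-blocks and let each sub-block take the motion vector
# of its co-located block in a reference motion field.

def derive_sub_pu_motion(pu_x, pu_y, pu_w, pu_h, colocated_mv_field, sub_size=4):
    """Return {(sub_x, sub_y): (mvx, mvy)} for each sub-block of the PU.

    colocated_mv_field: dict mapping (block_x, block_y) on a sub_size grid to a
    motion vector (mvx, mvy); missing entries fall back to (0, 0).
    """
    sub_motion = {}
    for y in range(pu_y, pu_y + pu_h, sub_size):
        for x in range(pu_x, pu_x + pu_w, sub_size):
            key = (x // sub_size, y // sub_size)
            sub_motion[(x, y)] = colocated_mv_field.get(key, (0, 0))
    return sub_motion

# A 16x8 PU at (32, 16): eight 4x4 sub-blocks, each with its own vector.
field = {(8, 4): (2, -1), (9, 4): (2, 0)}
print(derive_sub_pu_motion(32, 16, 16, 8, field))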
MPEG-7 audio-visual indexing test-bed for video retrieval
NASA Astrophysics Data System (ADS)
Gagnon, Langis; Foucher, Samuel; Gouaillier, Valerie; Brun, Christelle; Brousseau, Julie; Boulianne, Gilles; Osterrath, Frederic; Chapdelaine, Claude; Dutrisac, Julie; St-Onge, Francis; Champagne, Benoit; Lu, Xiaojian
2003-12-01
This paper reports on the development status of a Multimedia Asset Management (MAM) test-bed for content-based indexing and retrieval of audio-visual documents within the MPEG-7 standard. The project, called "MPEG-7 Audio-Visual Document Indexing System" (MADIS), specifically targets the indexing and retrieval of video shots and key frames from documentary film archives, based on audio-visual content like face recognition, motion activity, speech recognition and semantic clustering. The MPEG-7/XML encoding of the film database is done off-line. The description decomposition is based on a temporal decomposition into visual segments (shots), key frames and audio/speech sub-segments. The visible outcome will be a web site that allows video retrieval using a proprietary XQuery-based search engine and is accessible to members at the Canadian National Film Board (NFB) Cineroute site. For example, end-users will be able to ask for movie shots in the database that were produced in a specific year, that contain the face of a specific actor speaking a specific word, and in which there is no motion activity. Video streaming is performed over the high bandwidth CA*net network deployed by CANARIE, a public Canadian Internet development organization.
Audiovisual signal compression: the 64/P codecs
NASA Astrophysics Data System (ADS)
Jayant, Nikil S.
1996-02-01
Video codecs operating at integral multiples of 64 kbps are well-known in visual communications technology as p * 64 systems (p equals 1 to 24). Originally developed as a class of ITU standards, these codecs have served as core technology for videoconferencing, and they have also influenced the MPEG standards for addressable video. Video compression in the above systems is provided by motion compensation followed by discrete cosine transform -- quantization of the residual signal. Notwithstanding the promise of higher bit rates in emerging generations of networks and storage devices, there is a continuing need for facile audiovisual communications over voice band and wireless modems. Consequently, video compression at bit rates lower than 64 kbps is a widely-sought capability. In particular, video codecs operating at rates in the neighborhood of 64, 32, 16, and 8 kbps seem to have great practical value, being matched respectively to the transmission capacities of basic rate ISDN (64 kbps), and voiceband modems that represent high (32 kbps), medium (16 kbps) and low-end (8 kbps) grades in current modem technology. The purpose of this talk is to describe the state of video technology at these transmission rates, without getting too literal about the specific speeds mentioned above. In other words, we expect codecs designed for non-submultiples of 64 kbps, such as 56 kbps or 19.2 kbps, as well as for sub-multiples of 64 kbps, depending on varying constraints on modem rate and the transmission rate needed for the voice-coding part of the audiovisual communications link. The MPEG-4 video standards process is a natural platform on which to examine current capabilities in sub-ISDN rate video coding, and we shall draw appropriately from this process in describing video codec performance. Inherent in this summary is a reinforcement of motion compensation and DCT as viable building blocks of video compression systems, although there is a need for improving signal quality even in the very best of these systems. In a related part of our talk, we discuss the role of preprocessing and postprocessing subsystems which serve to enhance the performance of an otherwise standard codec. Examples of these (sometimes proprietary) subsystems are automatic face-tracking prior to the coding of a head-and-shoulders scene, and adaptive postfiltering after conventional decoding, to reduce generic classes of artifacts in low bit rate video. The talk concludes with a summary of technology targets and research directions. We discuss targets in terms of four fundamental parameters of coder performance: quality, bit rate, delay and complexity; and we emphasize the need for measuring and maximizing the composite quality of the audiovisual signal. In discussing research directions, we examine progress and opportunities in two fundamental approaches for bit rate reduction: removal of statistical redundancy and reduction of perceptual irrelevancy; we speculate on the value of techniques such as analysis-by-synthesis that have proved to be quite valuable in speech coding, and we examine the prospect of integrating speech and image processing for developing next-generation technology for audiovisual communications.
Harris, Kevin M; Schum, Kevin R; Knickelbine, Thomas; Hurrell, David G; Koehler, Jodi L; Longe, Terrence F
2003-08-01
Moving Picture Experts Group 2 (MPEG2) is a broadcast industry standard that allows high-level compression of echocardiographic data. Validation of MPEG2 digital images compared with super VHS videotape has not been previously reported. Simultaneous super VHS videotape and MPEG2 digital images were acquired. In all, 4 experienced echocardiographers completed detailed reporting forms evaluating chamber size, ventricular function, regional wall-motion abnormalities, and measures of valvular regurgitation and stenosis in a blinded fashion. Comparisons between the 2 interpretations were then performed and intraobserver concordance was calculated for the various categories. A total of 80 paired comparisons were made. The overall concordance rate was 93.6% with most of the discrepancies being minor (4.1%). Concordance was 92.4% for the left ventricle, 93.2% for the right ventricle, 95.2% for regional wall-motion abnormalities, and 97.8% for valve stenosis. The mean grade of valvular regurgitation was similar for the 2 techniques. MPEG2 digital imaging offers excellent concordance compared with super VHS videotape.
NASA Astrophysics Data System (ADS)
Karczewicz, Marta; Chen, Peisong; Joshi, Rajan; Wang, Xianglin; Chien, Wei-Jung; Panchal, Rahul; Coban, Muhammed; Chong, In Suk; Reznik, Yuriy A.
2011-01-01
This paper describes the video coding technology proposal submitted by Qualcomm Inc. in response to a joint call for proposals (CfP) issued by ITU-T SG16 Q.6 (VCEG) and ISO/IEC JTC1/SC29/WG11 (MPEG) in January 2010. The proposed video codec follows a hybrid coding approach based on temporal prediction, followed by transform, quantization, and entropy coding of the residual. Some of its key features are extended block sizes (up to 64x64), recursive integer transforms, single pass switched interpolation filters with offsets (single pass SIFO), mode dependent directional transform (MDDT) for intra-coding, luma and chroma high precision filtering, geometry motion partitioning, and adaptive motion vector resolution. It also incorporates internal bit-depth increase (IBDI) and modified quadtree based adaptive loop filtering (QALF). Simulation results are presented for a variety of bit rates, resolutions and coding configurations to demonstrate the high compression efficiency achieved by the proposed video codec at a moderate level of encoding and decoding complexity. For the random access hierarchical B configuration (HierB), the proposed video codec achieves an average BD-rate reduction of 30.88% compared to the H.264/AVC alpha anchor. For the low delay hierarchical P (HierP) configuration, the proposed video codec achieves average BD-rate reductions of 32.96% and 48.57%, compared to the H.264/AVC beta and gamma anchors, respectively.
Compressed-domain video indexing techniques using DCT and motion vector information in MPEG video
NASA Astrophysics Data System (ADS)
Kobla, Vikrant; Doermann, David S.; Lin, King-Ip; Faloutsos, Christos
1997-01-01
Development of various multimedia applications hinges on the availability of fast and efficient storage, browsing, indexing, and retrieval techniques. Given that video is typically stored efficiently in a compressed format, if we can analyze the compressed representation directly, we can avoid the costly overhead of decompressing and operating at the pixel level. Compressed-domain parsing of video has been presented in earlier work where a video clip is divided into shots, subshots, and scenes. In this paper, we describe key frame selection, feature extraction, and indexing and retrieval techniques that are directly applicable to MPEG compressed video. We develop a frame-type independent representation of the various types of frames present in an MPEG video in which all frames can be considered equivalent. Features are derived from the available DCT, macroblock, and motion vector information and mapped to a low-dimensional space where they can be accessed with standard database techniques. The spatial information is used as the primary index while the temporal information is used to enhance the robustness of the system during the retrieval process. The techniques presented enable fast archiving, indexing, and retrieval of video. Our operational prototype typically takes a fraction of a second to retrieve similar video scenes from our database, with over 95% success.
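One way to picture the kind of compressed-domain feature used here is a coarse per-frame signature built from the DC terms of 8x8 blocks, which an MPEG parser can obtain without full decompression; the block size, pooling grid and random test frame below are illustrative assumptions, not the paper's exact features.

import numpy as np

# Illustrative sketch (not the paper's exact features): build a coarse frame
# signature directly from the DC terms of 8x8 DCT blocks, i.e. the block means
# that an MPEG decoder can expose without full decompression.

def dc_signature(frame, block=8, pooled=(4, 4)):
    """Reduce a grayscale frame to a small vector of pooled DC (block-mean) values."""
    h, w = frame.shape
    dc = frame[:h - h % block, :w - w % block] \
        .reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    # Pool the DC image down to a fixed-size, low-dimensional signature.
    ph, pw = pooled
    sig = dc[:dc.shape[0] - dc.shape[0] % ph, :dc.shape[1] - dc.shape[1] % pw] \
        .reshape(ph, dc.shape[0] // ph, pw, dc.shape[1] // pw).mean(axis=(1, 3))
    return sig.ravel()

frame = np.random.randint(0, 256, (288, 352)).astype(float)
print(dc_signature(frame).shape)  # -> (16,)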
Architecture design of motion estimation for ITU-T H.263
NASA Astrophysics Data System (ADS)
Ku, Chung-Wei; Lin, Gong-Sheng; Chen, Liang-Gee; Lee, Yung-Ping
1997-01-01
Digitized video and audio have become the trend in multimedia, because they provide high quality and flexibility of processing. However, since a huge amount of information is needed while the bandwidth is limited, data compression plays an important role in such systems. For example, a 176 x 144 monochrome sequence at 10 frames/sec requires a bandwidth of about 2 Mbps, which wastes channel resources and limits the applications. MPEG (Moving Picture Experts Group) standardized video codec schemes that achieve high compression ratios while providing good quality: MPEG-1 targets frame sizes of about 352 x 240 at 30 frames per second, and MPEG-2 adds scalability and can be applied to scenes with higher definition, such as HDTV (high-definition television). On the other hand, some applications, such as videophone and video-conferencing, require very low bit rates. Because the channel bandwidth of the telephone network is very limited, a very high compression ratio is required. ITU-T announced the H.263 video coding standard to meet these requirements; according to simulation results with the TMN-5 test model, it outperforms H.261 with little overhead in complexity. Since wireless communication is the trend in the near future, low-power design of the video codec is an important issue for portable visual telephones. Motion estimation is the most computation-consuming part of the whole video codec; about 60% of the encoder's computation is spent on it. Several architectures have been proposed for efficient processing of block-matching algorithms. In this paper, in order to meet the requirements of H.263 and the expectation of low power consumption, a modified sandwich architecture is proposed. Based on a parallel-processing philosophy, low power is expected, and the generation of either one motion vector or four motion vectors with half-pixel accuracy is achieved concurrently. In addition, we present our solution for handling the other additional modes in H.263 with the proposed architecture.
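The block-matching computation that motion-estimation hardware of this kind accelerates can be sketched in software as an exhaustive SAD search; the 16x16 block size, the +/-7 search range and the synthetic frames are illustrative assumptions, not the proposed architecture.

import numpy as np

# Illustrative sketch (software reference, not the proposed VLSI architecture):
# exhaustive-search block matching using the sum of absolute differences (SAD),
# the computation that motion-estimation hardware is designed to accelerate.

def full_search(cur, ref, bx, by, block=16, search=7):
    """Find the motion vector for the block at (bx, by) in the current frame."""
    target = cur[by:by + block, bx:bx + block].astype(np.int32)
    best_mv, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + block > ref.shape[1] or y + block > ref.shape[0]:
                continue  # candidate block falls outside the reference frame
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - cand).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad

cur = np.random.randint(0, 256, (144, 176), dtype=np.uint8)
ref = np.roll(cur, shift=(2, -3), axis=(0, 1))  # simulate a small global shift
print(full_search(cur, ref, 64, 64))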
NASA Astrophysics Data System (ADS)
Stewart, Brent K.; Carter, Stephen J.; Langer, Steven G.; Andrew, Rex K.
1998-06-01
Experiments using NASA's Advanced Communications Technology Satellite were conducted to provide an estimate of the compressed video quality required for preservation of clinically relevant features for the detection of trauma. Bandwidth rates of 128, 256 and 384 kbps were used. A five point Likert scale (1 equals no useful information and 5 equals good diagnostic quality) was used for a subjective preference questionnaire to evaluate the quality of the compressed ultrasound imagery at the three compression rates for several anatomical regions of interest. At 384 kbps the Likert scores (mean plus or minus SD) were abdomen (4.45 plus or minus 0.71), carotid artery (4.70 plus or minus 0.36), kidney (5.0 plus or minus 0.0), liver (4.67 plus or minus 0.58) and thyroid (4.03 plus or minus 0.74). Due to the volatile nature of the H.320 compressed digital video stream, no statistically significant results can be derived through this methodology. As the MPEG standard has at its roots many of the same intraframe and motion vector compression algorithms as the H.261 (such as that used in the previous ACTS/AMT experiments), we are using the MPEG compressed video sequences to best gauge what minimum bandwidths are necessary for preservation of clinically relevant features for the detection of trauma. We have been using an MPEG codec board to collect losslessly compressed video clips from high quality S-VHS tapes and through direct digitization of S-video. Due to the large number of videoclips and questions to be presented to the radiologists and for ease of application, we have developed a web browser interface for this video visual perception study. Due to the large numbers of observations required to reach statistical significance in most ROC studies, Kappa statistical analysis is used to analyze the degree of agreement between observers and between viewing assessments. If the degree of agreement amongst readers is high, then there is a possibility that the ratings (i.e., average Likert score at each bandwidth) do in fact reflect the dimension they are purported to reflect (video quality versus bandwidth). It is then possible to make an intelligent choice of bandwidth for streaming compressed video and compressed videoclips.
Mapping of MPEG-4 decoding on a flexible architecture platform
NASA Astrophysics Data System (ADS)
van der Tol, Erik B.; Jaspers, Egbert G.
2001-12-01
In the field of consumer electronics, the advent of new features such as Internet, games, video conferencing, and mobile communication has triggered the convergence of television and computer technologies. This requires a generic media-processing platform that enables simultaneous execution of very diverse tasks such as high-throughput stream-oriented data processing and highly data-dependent irregular processing with complex control flows. As a representative application, this paper presents the mapping of a Main Visual profile MPEG-4 decoder for High-Definition (HD) video onto a flexible architecture platform. A stepwise approach is taken, going from the decoder application toward an implementation proposal. First, the application is decomposed into separate tasks with self-contained functionality, clear interfaces, and distinct characteristics. Next, a hardware-software partitioning is derived by analyzing the characteristics of each task, such as the amount of inherent parallelism, the throughput requirements, the complexity of control processing, and the reuse potential over different applications and different systems. Finally, a feasible implementation is proposed that includes, amongst others, a very-long-instruction-word (VLIW) media processor, one or more RISC processors, and some dedicated processors. The mapping study of the MPEG-4 decoder proves the flexibility and extensibility of the media-processing platform. This platform enables an effective HW/SW co-design yielding a high performance density.
Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video
NASA Astrophysics Data System (ADS)
Yeo, Boon-Lock; Liu, Bede
1996-03-01
Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.
Drift-free MPEG-4 AVC semi-fragile watermarking
NASA Astrophysics Data System (ADS)
Hasnaoui, M.; Mitrea, M.
2014-02-01
While intra-frame drifting is a concern for all types of MPEG-4 AVC compressed-domain video processing applications, it has a particularly negative impact on watermarking. In order to avoid the drift drawbacks, two classes of solutions are currently considered in the literature. They try either to compensate the drift distortions at the expense of complex decoding/estimation algorithms or to restrict the insertion to the blocks which are not involved in the prediction, thus reducing the data payload. The present study follows a different approach. First, it algebraically models the drift distortion spread problem by considering the analytic expressions of the MPEG-4 AVC encoding operations. Secondly, it solves the underlying algebraic system under drift-free constraints. Finally, the advanced solution is adapted to take into account the watermarking peculiarities. The experiments consider an m-QIM semi-fragile watermarking method and a video surveillance corpus of 80 minutes. For prescribed data payload (100 bit/s), robustness (BER < 0.1 against transcoding at 50% in stream size), fragility (frame modification detection with accuracies of 1/81 of the frame size and 3 s) and complexity constraints, the modified insertion results in gains in transparency of 2 dB in PSNR, of 0.4 in AAD, of 0.002 in IF, of 0.03 in SC, of 0.017 in NCC and of 22 in DVQ.
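The quantization index modulation principle behind an m-QIM watermark can be sketched for the binary case; the step size, the sample coefficient and the plain-coefficient embedding below are illustrative assumptions and ignore the paper's drift-free constraints and MPEG-4 AVC syntax.

# Illustrative sketch (binary QIM only, not the paper's m-QIM or its drift-free
# insertion): embed one bit per coefficient by quantizing with one of two offset
# quantizers, and detect by checking which quantizer lies closer.

def qim_embed(coeff, bit, step=8.0):
    """Quantize coeff onto the lattice associated with the message bit (0 or 1)."""
    offset = 0.0 if bit == 0 else step / 2.0
    return round((coeff - offset) / step) * step + offset

def qim_detect(coeff, step=8.0):
    """Return the bit whose lattice lies closest to the received coefficient."""
    d0 = abs(coeff - qim_embed(coeff, 0, step))
    d1 = abs(coeff - qim_embed(coeff, 1, step))
    return 0 if d0 <= d1 else 1

watermarked = qim_embed(23.7, 1)                   # -> 20.0, a point of the "1" lattice
print(watermarked, qim_detect(watermarked + 1.5))  # small distortion still decodes 1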
Cross-standard user description in mobile, medical oriented virtual collaborative environments
NASA Astrophysics Data System (ADS)
Ganji, Rama Rao; Mitrea, Mihai; Joveski, Bojan; Chammem, Afef
2015-03-01
By combining four different open standards belonging to ISO/IEC JTC1/SC29 WG11 (a.k.a. MPEG) and W3C, this paper advances an architecture for mobile, medical oriented virtual collaborative environments. The various users are represented according to MPEG-UD (MPEG User Description), while the security issues are dealt with by deploying the WebID principles. On the server side, irrespective of their elementary types (text, image, video, 3D, …), the medical data are aggregated into hierarchical, interactive multimedia scenes which are alternatively represented in the MPEG-4 BiFS or HTML5 standards. This way, each type of content can be optimally encoded according to its particular constraints (semantics, medical practice, network conditions, etc.). The mobile device only needs to display the content (inside an MPEG player or an HTML5 browser) and to capture the user interaction. The overall architecture is implemented and tested under the framework of the MEDUSA European project, in partnership with medical institutions. The testbed considers a server emulated by a PC and heterogeneous user devices (tablets, smartphones, laptops) running the iOS, Android and Windows operating systems. The connection between the users and the server is alternatively ensured by WiFi and 3G/4G networks.
Interframe vector wavelet coding technique
NASA Astrophysics Data System (ADS)
Wus, John P.; Li, Weiping
1997-01-01
Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur using pre-trained VQ codebooks. It also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC coding method in conjunction with the FSVQ system and lattice VQ, the formulation of a high-quality, very low bit rate coding system is proposed. A coding system using a simple FSVQ scheme, where the current state is determined by the previous channel symbol only, is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings are done in this tree-like structure from the lower subbands to the higher subbands in order to exploit the nature of subband analysis in terms of the parent-child relationship. Class A and Class B video sequences from the MPEG-4 testing evaluations are used in the evaluation of this coding method.
Model-based video segmentation for vision-augmented interactive games
NASA Astrophysics Data System (ADS)
Liu, Lurng-Kuo
2000-04-01
This paper presents an architecture and algorithms for model-based video object segmentation and its application to vision-augmented interactive games. We are especially interested in real-time, low-cost, vision-based applications that can be implemented in software on a PC. We use different models for the background and a player object. The object segmentation algorithm operates at two different levels: the pixel level and the object level. At the pixel level, the segmentation algorithm is formulated as a maximum a posteriori probability (MAP) problem. The statistical likelihood of each pixel is calculated and used in the MAP problem. Object-level segmentation is used to improve segmentation quality by utilizing information about the spatial and temporal extent of the object. The concept of an active region, defined based on a motion histogram and trajectory prediction, is introduced to indicate the possibility of a video object region for both background and foreground modeling. It also reduces the overall computational complexity. In contrast with other applications, the proposed video object segmentation system is able to create background and foreground models on the fly, even without introductory background frames. Furthermore, we apply different rates of self-tuning on the scene model so that the system can adapt to the environment when there is a scene change. We applied the proposed video object segmentation algorithms to several prototype virtual interactive games. In our prototype vision-augmented interactive games, a player can immerse himself/herself inside a game and can virtually interact with other animated characters in a real-time manner without being constrained by helmets, gloves, special sensing devices, or the background environment. The potential applications of the proposed algorithms include human-computer gesture interfaces and object-based video coding, such as MPEG-4 video coding.
Improved lossless intra coding for H.264/MPEG-4 AVC.
Lee, Yung-Lyul; Han, Ki-Hun; Sullivan, Gary J
2006-09-01
A new lossless intra coding method based on sample-by-sample differential pulse code modulation (DPCM) is presented as an enhancement of the H.264/MPEG-4 AVC standard. The H.264/AVC design includes a multidirectional spatial prediction method to reduce spatial redundancy by using neighboring samples as a prediction for the samples in a block of data to be encoded. In the new lossless intra coding method, the spatial prediction is performed based on samplewise DPCM instead of in the block-based manner used in the current H.264/AVC standard, while the block structure is retained for the residual difference entropy coding process. We show that the new method, based on samplewise DPCM, does not have a major complexity penalty, despite its apparent pipeline dependencies. Experiments show that the new lossless intra coding method reduces the bit rate by approximately 12% in comparison with the lossless intra coding method previously included in the H.264/AVC standard. As a result, the new method is currently being adopted into the H.264/AVC standard in a new enhancement project.
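The samplewise DPCM idea can be illustrated for a single row in a horizontal prediction mode; using only the reconstructed left neighbor as the predictor is a simplification of the standardized method, and the sample values below are made up.

import numpy as np

# Illustrative sketch (horizontal prediction only, simplified): predict each
# sample from its immediately reconstructed left neighbor and transmit the
# residual, instead of predicting the whole block from samples outside it.

def dpcm_encode_row(row):
    """Return lossless residuals: first sample as-is, then left-neighbor differences."""
    row = row.astype(np.int16)
    residuals = row.copy()
    residuals[1:] = row[1:] - row[:-1]
    return residuals

def dpcm_decode_row(residuals):
    """Invert the samplewise prediction by cumulative summation."""
    return np.cumsum(residuals).astype(np.int16)

row = np.array([120, 121, 123, 122, 130, 131], dtype=np.uint8)
res = dpcm_encode_row(row)
assert np.array_equal(dpcm_decode_row(res), row.astype(np.int16))
print(res)  # residuals are small and cheap to entropy-code: [120 1 2 -1 8 1]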
SCTP as scalable video coding transport
NASA Astrophysics Data System (ADS)
Ortiz, Jordi; Graciá, Eduardo Martínez; Skarmeta, Antonio F.
2013-12-01
This study presents an evaluation of the Stream Control Transmission Protocol (SCTP) for the transport of the scalable video codec (SVC), proposed by MPEG as an extension to H.264/AVC. Both technologies fit together properly. On the one hand, SVC permits the bitstream to be split easily into substreams carrying different video layers, each with different importance for the reconstruction of the complete video sequence at the receiver end. On the other hand, SCTP includes features, such as multi-streaming and multi-homing capabilities, that permit the SVC layers to be transported robustly and efficiently. Several transmission strategies supported on baseline SCTP and its concurrent multipath transfer (CMT) extension are compared with the classical solutions based on the Transmission Control Protocol (TCP) and the Real-time Transport Protocol (RTP). Using ns-2 simulations, it is shown that CMT-SCTP outperforms TCP and RTP in error-prone networking environments. The comparison is established according to several performance measurements, including delay, throughput, packet loss, and peak signal-to-noise ratio of the received video.
Multimedia content management in MPEG-21 framework
NASA Astrophysics Data System (ADS)
Smith, John R.
2002-07-01
MPEG-21 is an emerging standard from MPEG that specifies a framework for transactions of multimedia content. MPEG-21 defines the fundamental concept known as a digital item, which is the unit of transaction in the multimedia framework. A digital item can be used to package content such as a digital photograph, a video clip or movie, a musical recording with graphics and liner notes, a photo album, and so on. The packaging of the media resources, corresponding identifiers, and associated metadata is provided in the declaration of the digital item. The digital item declaration allows for more effective transaction, distribution, and management of multimedia content and the corresponding metadata, rights expressions, and variations of media resources. In this paper, we describe various challenges for multimedia content management in the MPEG-21 framework.
Identifying sports videos using replay, text, and camera motion features
NASA Astrophysics Data System (ADS)
Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.
1999-12-01
Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes, such as sports, news, movies, or documentaries, increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speed. Full decoding of selected frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.
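A minimal sketch of such a feature-based classifier is given below, assuming each clip has already been reduced to a replay count, a scene-text fraction, and a mean motion magnitude; the feature values, labels and the use of scikit-learn are illustrative assumptions, not the paper's trained model.

from sklearn.tree import DecisionTreeClassifier

# Illustrative sketch (not the paper's trained model): a decision tree over three
# compressed-domain features per clip (replay count, fraction of frames with
# scene text, mean motion magnitude), labeled sports (1) or not (0).
# The feature values below are made up for demonstration.

X = [
    [4, 0.30, 9.5],   # many replays, on-screen score text, high motion: sports
    [3, 0.25, 8.0],
    [0, 0.05, 1.2],   # talking-head news segment
    [0, 0.40, 0.8],   # documentary with captions but little motion
]
y = [1, 1, 0, 0]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(clf.predict([[2, 0.2, 7.0]]))  # likely classified as a sports clip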
ATM Quality of Service Parameters at 45 Mbps Using a Satellite Emulator: Laboratory Measurements
NASA Technical Reports Server (NTRS)
Ivancic, William D.; Bobinsky, Eric A.
1997-01-01
Results of 45-Mbps DS3 intermediate-frequency loopback measurements of asynchronous transfer mode (ATM) quality of service parameters (cell error ratio and cell loss ratio) are presented. These tests, which were conducted at the NASA Lewis Research Center in support of satellite-ATM interoperability research, represent initial efforts to quantify the minimum parameters for stringent ATM applications, such as MPEG-1 and MPEG-2 video transmission. Portions of these results were originally presented to the International Telecommunications Union's ITU-R Working Party 4B in February 1996 in support of their Draft Preliminary Recommendation on the Transmission of ATM Traffic via Satellite.
Hierarchical video summarization based on context clustering
NASA Astrophysics Data System (ADS)
Tseng, Belle L.; Smith, John R.
2003-11-01
A personalized video summary is dynamically generated in our video personalization and summarization system based on user preference and usage environment. The three-tier personalization system adopts the server-middleware-client architecture in order to maintain, select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. In this paper, the metadata includes visual semantic annotations and automatic speech transcriptions. Our personalization and summarization engine in the middleware selects the optimal set of desired video segments by matching shot annotations and sentence transcripts with user preferences. Besides finding the desired contents, the objective is to present a coherent summary. There are diverse methods for creating summaries, and we focus on the challenges of generating a hierarchical video summary based on context information. In our summarization algorithm, three inputs are used to generate the hierarchical video summary output. These inputs are (1) MPEG-7 metadata descriptions of the contents in the server, (2) user preference and usage environment declarations from the user client, and (3) context information including MPEG-7 controlled term list and classification scheme. In a video sequence, descriptions and relevance scores are assigned to each shot. Based on these shot descriptions, context clustering is performed to collect consecutively similar shots to correspond to hierarchical scene representations. The context clustering is based on the available context information, and may be derived from domain knowledge or rules engines. Finally, the selection of structured video segments to generate the hierarchical summary efficiently balances between scene representation and shot selection.
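A minimal sketch of the context-clustering step, assuming each shot is reduced to a set of controlled annotation terms and that consecutive shots are merged when their term overlap exceeds a threshold; the terms, the threshold and the similarity measure are illustrative assumptions, not the deployed engine.

# Illustrative sketch (not the deployed engine): group consecutive shots into
# scenes by merging a shot into the current cluster whenever it shares enough
# annotation terms with that cluster.

def context_cluster(shot_annotations, min_overlap=0.5):
    """shot_annotations: list of sets of controlled terms, one set per shot.
    Returns a list of clusters, each a list of consecutive shot indices."""
    clusters, current, current_terms = [], [0], set(shot_annotations[0])
    for i in range(1, len(shot_annotations)):
        terms = set(shot_annotations[i])
        overlap = len(terms & current_terms) / max(len(terms | current_terms), 1)
        if overlap >= min_overlap:
            current.append(i)
            current_terms |= terms
        else:
            clusters.append(current)
            current, current_terms = [i], terms
    clusters.append(current)
    return clusters

shots = [{"goal", "stadium"}, {"goal", "replay", "stadium"}, {"anchor", "studio"}]
print(context_cluster(shots))  # -> [[0, 1], [2]]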
Highly efficient simulation environment for HDTV video decoder in VLSI design
NASA Astrophysics Data System (ADS)
Mao, Xun; Wang, Wei; Gong, Huimin; He, Yan L.; Lou, Jian; Yu, Lu; Yao, Qingdong; Pirsch, Peter
2002-01-01
With the increasing complexity of VLSI designs, especially SoC (System on Chip) implementations of MPEG-2 video decoders with HDTV capability, simulation and verification of the full design, even at the behavioral level in HDL, often prove to be very slow and costly, and it is difficult to perform full verification until late in the design process. They therefore become a bottleneck in the HDTV video decoder design procedure and strongly influence its time-to-market. In this paper, the Hardware/Software Interface architecture of an HDTV video decoder is studied, and a Hardware-Software Mixed Simulation (HSMS) platform, based on the MPEG-2 video decoding algorithm, is proposed to check and correct errors in the early design stage. The application of HSMS to the target system is achieved by employing several introduced approaches, which speed up the simulation and verification task without decreasing performance.
Techniques for video compression
NASA Technical Reports Server (NTRS)
Wu, Chwan-Hwa
1995-01-01
In this report, we present our study on a multiprocessor implementation of an MPEG-2 encoding algorithm. First, we compare two approaches to implementing video standards, VLSI technology and multiprocessor processing, in terms of design complexity, applications, and cost. Then we evaluate the functional modules of the MPEG-2 encoding process in terms of their computation time. Two crucial modules are identified based on this evaluation. We then present our experimental study on the multiprocessor implementation of these two crucial modules. Data partitioning is used for job assignment. Experimental results show that a high speedup ratio and good scalability can be achieved using this kind of job assignment strategy.
Visual acuity, contrast sensitivity, and range performance with compressed motion video
NASA Astrophysics Data System (ADS)
Bijl, Piet; de Vries, Sjoerd C.
2010-10-01
Video of visual acuity (VA) and contrast sensitivity (CS) test charts in a complex background was recorded using a CCD color camera mounted on a computer-controlled tripod and was fed into real-time MPEG-2 compression/decompression equipment. The test charts were based on the triangle orientation discrimination (TOD) test method and contained triangle test patterns of different sizes and contrasts in four possible orientations. In a perception experiment, observers judged the orientation of the triangles in order to determine VA and CS thresholds at the 75% correct level. Three camera velocities (0, 1.0, and 2.0 deg/s, or 0, 4.1, and 8.1 pixels/frame) and four compression rates (no compression, 4 Mb/s, 2 Mb/s, and 1 Mb/s) were used. VA is shown to be rather robust to any combination of motion and compression. CS, however, dramatically decreases when motion is combined with high compression ratios. The measured thresholds were fed into the TOD target acquisition model to predict the effect of motion and compression on acquisition ranges for tactical military vehicles. The effect of compression on static performance is limited but strong with motion video. The data suggest that with the MPEG2 algorithm, the emphasis is on the preservation of image detail at the cost of contrast loss.
Digital Video of Live-Scan Fingerprint Data
National Institute of Standards and Technology Data Gateway
NIST Digital Video of Live-Scan Fingerprint Data (PC database for purchase) NIST Special Database 24 contains MPEG-2 (Moving Picture Experts Group) compressed digital video of live-scan fingerprint data. The database is being distributed for use in developing and testing of fingerprint verification systems.
A contourlet transform based algorithm for real-time video encoding
NASA Astrophysics Data System (ADS)
Katsigiannis, Stamos; Papaioannou, Georgios; Maroulis, Dimitris
2012-06-01
In recent years, real-time video communication over the internet has been widely utilized for applications like video conferencing. Streaming live video over heterogeneous IP networks, including wireless networks, requires video coding algorithms that can support various levels of quality in order to adapt to the network end-to-end bandwidth and transmitter/receiver resources. In this work, a scalable video coding and compression algorithm based on the Contourlet Transform is proposed. The algorithm allows for multiple levels of detail, without re-encoding the video frames, by just dropping the encoded information referring to higher resolution than needed. Compression is achieved by means of lossy and lossless methods, as well as variable bit rate encoding schemes. Furthermore, due to the transformation utilized, it does not suffer from blocking artifacts that occur with many widely adopted compression algorithms. Another highly advantageous characteristic of the algorithm is the suppression of noise induced by low-quality sensors usually encountered in web-cameras, due to the manipulation of the transform coefficients at the compression stage. The proposed algorithm is designed to introduce minimal coding delay, thus achieving real-time performance. Performance is enhanced by utilizing the vast computational capabilities of modern GPUs, providing satisfactory encoding and decoding times at relatively low cost. These characteristics make this method suitable for applications like video-conferencing that demand real-time performance, along with the highest visual quality possible for each user. Through the presented performance and quality evaluation of the algorithm, experimental results show that the proposed algorithm achieves better or comparable visual quality relative to other compression and encoding methods tested, while maintaining a satisfactory compression ratio. Especially at low bitrates, it provides more human-eye friendly images compared to algorithms utilizing block-based coding, like the MPEG family, as it introduces fuzziness and blurring instead of artificial block artifacts.
Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les
2012-12-01
This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Moving Picture Experts Group 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver for the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression of emergent CT findings on 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at a -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95 % confidence intervals using the method of generalized estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1, with combined sensitivities of 90 % (95 % confidence interval, 79-95), 94 % (87-97), and 100 % (93-100), respectively. Combined specificities were 100 % (85-100), 100 % (85-100), and 96 % (78-99), respectively. The introduction of CT in combat hospitals, with increasing detector counts and image data in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.
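The per-level sensitivity and specificity figures can be reproduced in form (though not in method) with a simple proportion and a normal-approximation confidence interval; the counts below are made up, and the cited study pooled readers with generalized estimating equations rather than this simplified calculation.

from math import sqrt

# Illustrative sketch (simplified): per-level sensitivity/specificity with a
# normal-approximation 95% confidence interval; the counts are synthetic.

def rate_with_ci(hits, total, z=1.96):
    p = hits / total
    half = z * sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

true_pos, false_neg = 47, 3    # emergent findings correctly seen at one compression level
true_neg, false_pos = 24, 1    # negative findings correctly read as negative
print("sensitivity", rate_with_ci(true_pos, true_pos + false_neg))
print("specificity", rate_with_ci(true_neg, true_neg + false_pos))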
Studies on image compression and image reconstruction
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Nori, Sekhar; Araj, A.
1994-01-01
During this six-month period our work concentrated on three somewhat different areas. We looked at and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Masters thesis. In the thesis we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued with our work in the vector quantization area. We have also developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all have a common property: they use past data to encode future data. This is done either by taking differences, by context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw. When the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired by the user. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested. A paper describing these results is included.
Objective assessment of MPEG-2 video quality
NASA Astrophysics Data System (ADS)
Gastaldo, Paolo; Zunino, Rodolfo; Rovetta, Stefano
2002-07-01
The increasing use of video compression standards in broadcast television systems has required, in recent years, the development of video quality measurements that take into account artifacts specifically caused by digital compression techniques. In this paper we present a methodology for the objective quality assessment of MPEG video streams by using circular back-propagation feedforward neural networks. Mapping neural networks can render nonlinear relationships between objective features and subjective judgments, thus avoiding any simplifying assumption on the complexity of the model. The neural network processes an instantaneous set of input values and yields an associated estimate of perceived quality. Therefore, the neural-network approach turns objective quality assessment into adaptive modeling of subjective perception. The objective features used for the estimate are chosen according to their assessed relevance to perceived quality and are continuously extracted in real time from compressed video streams. The overall system mimics perception but does not require any analytical model of the underlying physical phenomenon. The capability to process compressed video streams represents an important advantage over existing approaches, since avoiding the stream-decoding process greatly enhances real-time performance. Experimental results confirm that the system provides satisfactory, continuous-time approximations of actual scoring curves for real test videos.
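The feature-to-quality mapping can be sketched with a generic feedforward regressor standing in for the circular back-propagation network; the three features, the synthetic training targets and the use of scikit-learn are illustrative assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative sketch (a generic feedforward regressor, not the circular
# back-propagation network of the paper): map per-interval objective features
# extracted from the compressed stream to an estimated subjective quality score.
# Features and scores below are synthetic.

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))           # e.g. bit rate, motion activity, blockiness
y = 5.0 - 2.5 * X[:, 2] + 1.0 * X[:, 0]  # synthetic "perceived quality" target

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)
print(model.predict([[0.6, 0.3, 0.1]]))  # instantaneous quality estimate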
NASA Astrophysics Data System (ADS)
Aghamaleki, Javad Abbasi; Behrad, Alireza
2018-01-01
Double compression detection is a crucial stage in digital image and video forensics. However, the detection of double compressed videos is challenging when the video forger uses the same quantization matrix and a synchronized group of pictures (GOP) structure during the recompression to conceal tampering effects. A passive approach is proposed for detecting double compressed MPEG videos with the same quantization matrix and synchronized GOP structure. To devise the proposed algorithm, the effects of recompression on P frames are mathematically studied. Then, based on the obtained guidelines, a feature vector is proposed to detect double compressed frames at the GOP level. Subsequently, sparse representations of the feature vectors are used for dimensionality reduction and to enrich the traces of recompression. Finally, a support vector machine classifier is employed to detect and localize double compression in the temporal domain. The experimental results show that the proposed algorithm achieves an accuracy of more than 95%. In addition, comparisons of the results of the proposed method with those of other methods reveal the efficiency of the proposed algorithm.
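The final classification stage can be sketched as follows; the synthetic GOP-level feature vectors, their dimensionality and the omission of the sparse-representation step are illustrative assumptions rather than the paper's pipeline.

import numpy as np
from sklearn.svm import SVC

# Illustrative sketch (not the paper's features or its sparse coding step): train
# an SVM on GOP-level feature vectors labeled single- vs double-compressed. The
# vectors below are synthetic stand-ins for recompression traces from P frames.

rng = np.random.default_rng(1)
single = rng.normal(0.0, 1.0, size=(50, 6))
double = rng.normal(0.8, 1.0, size=(50, 6))   # recompression shifts the statistics
X = np.vstack([single, double])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf", C=1.0).fit(X, y)
print(clf.predict(rng.normal(0.8, 1.0, size=(1, 6))))  # likely flagged as double-compressed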
Woodson, Kristina E; Sable, Craig A; Cross, Russell R; Pearson, Gail D; Martin, Gerard R
2004-11-01
Live transmission of echocardiograms over integrated services digital network lines is accurate and has led to improvements in the delivery of pediatric cardiology care. Permanent archiving of the live studies has not previously been reported. Specific obstacles to permanent storage of telemedicine files have included the ability to produce accurate images without a significant increase in storage requirements. We evaluated the accuracy of Moving Picture Experts Group (MPEG) digitization of incoming video streams and assessed the storage requirements of these files for infants in a real-time pediatric tele-echocardiography program. All major cardiac diagnoses were correctly diagnosed by review of MPEG images. MPEG file size ranged from 11.1 to 182 MB (56.5 +/- 29.9 MB). MPEG digitization during live neonatal telemedicine is accurate and provides an efficient method for storage. This modality has acceptable storage requirements; file sizes are comparable to those of other digital modalities.
Kim, Dong-Sun; Kwon, Jin-San
2014-01-01
Research on real-time health systems has received great attention during recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 Audio Lossless Coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for the low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross-correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding stage of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to the single-channel-based biosignal lossless data compressor. PMID:25237900
A novel method for efficient archiving and retrieval of biomedical images using MPEG-7
NASA Astrophysics Data System (ADS)
Meyer, Joerg; Pahwa, Ash
2004-10-01
Digital archiving and efficient retrieval of radiological scans have become critical steps in contemporary medical diagnostics. Since more and more images and image sequences (single scans or video) from various modalities (CT/MRI/PET/digital X-ray) are now available in digital formats (e.g., DICOM-3), hospitals and radiology clinics need to implement efficient protocols capable of managing the enormous amounts of data generated daily in a typical clinical routine. We present a method that appears to be a viable way to eliminate the tedious step of manually annotating image and video material for database indexing. MPEG-7 is a new framework that standardizes the way images are characterized in terms of color, shape, and other abstract, content-related criteria. A set of standardized descriptors that are automatically generated from an image is used to compare an image to other images in a database, and to compute the distance between two images for a given application domain. Text-based database queries can be replaced with image-based queries using MPEG-7. Consequently, image queries can be conducted without any prior knowledge of the keys that were used as indices in the database. Since the decoding and matching steps are not part of the MPEG-7 standard, this method also enables searches that were not planned by the time the keys were generated.
Visually lossless compression of digital hologram sequences
NASA Astrophysics Data System (ADS)
Darakis, Emmanouil; Kowiel, Marcin; Näsänen, Risto; Naughton, Thomas J.
2010-01-01
Digital hologram sequences have great potential for the recording of 3D scenes of moving macroscopic objects as their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently have large information content and lossless coding of holographic data is rather inefficient due to the speckled nature of the interference fringes they contain. Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy compression introduces errors in the reconstruction. In all of the previous studies, numerical metrics were used to measure the compression error and through it, the coding quality. Digital hologram reconstructions are highly speckled and the speckle pattern is very sensitive to data changes. Hence, numerical quality metrics can be misleading. For example, for low compression ratios, a numerically significant coding error can have visually negligible effects. Yet, in several cases, it is of high interest to know how much lossy compression can be achieved, while maintaining the reconstruction quality at visually lossless levels. Using an experimental threshold estimation method, the staircase algorithm, we determined the highest compression ratio that was not perceptible to human observers for objects compressed with Dirac and MPEG-4 compression methods. This level of compression can be regarded as the point below which compression is perceptually lossless although physically the compression is lossy. It was found that up to 4 to 7.5 fold compression can be obtained with the above methods without any perceptible change in the appearance of video sequences.
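The threshold-estimation procedure can be pictured with a simple one-up/one-down staircase over the compression ratio; the simulated observer, the step size and the number of reversals are illustrative assumptions and do not reproduce the exact protocol of the study.

# Illustrative sketch (simplified 1-up/1-down staircase, not the exact protocol):
# raise the compression ratio after each "no visible difference" response and
# lower it after each "visible difference" response; the average of the reversal
# points estimates the visually lossless threshold. The observer is simulated.

def simulated_observer(ratio, true_threshold=6.0):
    return ratio > true_threshold            # True means a difference is perceived

def staircase(start=2.0, step=1.0, reversals_needed=6):
    ratio, going_up, reversals = start, True, []
    while len(reversals) < reversals_needed:
        perceived = simulated_observer(ratio)
        move_up = not perceived              # invisible: compress harder next trial
        if move_up != going_up:
            reversals.append(ratio)          # direction change: record a reversal
            going_up = move_up
        ratio = max(1.0, ratio + step if move_up else ratio - step)
    return sum(reversals) / len(reversals)   # average of reversal points

print(staircase())  # converges near the simulated 6.0-fold threshold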
Content-based video retrieval by example video clip
NASA Astrophysics Data System (ADS)
Dimitrova, Nevenka; Abdel-Mottaleb, Mohamed
1997-01-01
This paper presents a novel approach for video retrieval from a large archive of MPEG or Motion JPEG compressed video clips. We introduce a retrieval algorithm that takes a video clip as a query and searches the database for clips with similar contents. Video clips are characterized by a sequence of representative frame signatures, which are constructed from DC coefficients and motion information ('DC+M' signatures). The similarity between two video clips is determined by using their respective signatures. This method facilitates retrieval of clips for the purpose of video editing, broadcast news retrieval, or copyright violation detection.
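A minimal sketch of signature-based clip matching, assuming each clip is already reduced to a sequence of per-frame feature vectors; the random stand-in signatures, the Euclidean distance and the sliding-window alignment are illustrative assumptions, not the exact 'DC+M' construction.

import numpy as np

# Illustrative sketch (not the exact 'DC+M' construction): score a query clip
# against a database clip by the best average frame-to-frame distance over all
# alignments of the query within the database clip.

def clip_distance(query_sig, db_sig):
    """Minimum mean Euclidean distance of the query over all alignments in db_sig."""
    q, n = len(query_sig), len(db_sig)
    best = np.inf
    for start in range(n - q + 1):
        window = db_sig[start:start + q]
        d = np.linalg.norm(window - query_sig, axis=1).mean()
        best = min(best, d)
    return best

rng = np.random.default_rng(2)
database_clip = rng.uniform(size=(120, 16))     # 120 representative-frame signatures
query_clip = database_clip[40:55] + rng.normal(0, 0.01, size=(15, 16))
print(clip_distance(query_clip, database_clip))  # small distance: likely a match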
NASA Astrophysics Data System (ADS)
Lopez, Alejandro; Noe, Miquel; Fernandez, Gabriel
2004-10-01
The GMF4iTV project (Generic Media Framework for Interactive Television) is an IST European project that consists of an end-to-end broadcasting platform providing interactivity on heterogeneous multimedia devices such as set-top boxes and PCs according to the Multimedia Home Platform (MHP) standard from DVB. This platform allows content providers to create enhanced audiovisual content with interactivity at the level of moving objects or shot changes within a video. The end user can then interact with moving objects or individual shots in the video and enjoy the additional content associated with them (MHP applications, HTML pages, JPEG images, MPEG-4 files, etc.). This paper focuses on the issues of metadata and content transmission, synchronization, signaling, and bitrate allocation in the GMF4iTV project.
Studies and simulations of the DigiCipher system
NASA Technical Reports Server (NTRS)
Sayood, K.; Chen, Y. C.; Kipp, G.
1993-01-01
During this period the development of simulators for the various high definition television (HDTV) systems proposed to the FCC was continued. The FCC has indicated that it wants the various proposers to collaborate on a single system. Based on all available information, this system will look very much like the advanced digital television (ADTV) system, with major contributions only from the DigiCipher system. The results of our simulations of the DigiCipher system are described. This simulator was tested using test sequences from the MPEG committee. The results are extrapolated to HDTV video sequences. Once again, some caveats are in order. The sequences used for testing the simulator and generating the results are those used for testing the MPEG algorithm. The sequences are of much lower resolution than the HDTV sequences would be, and therefore the extrapolations are not totally accurate. One would expect to get significantly higher compression in terms of bits per pixel with sequences that are of higher resolution. However, the simulator itself is a valid one, and should HDTV sequences become available, they could be used directly with the simulator. A brief overview of the DigiCipher system is given. Some coding results obtained using the simulator are examined. These results are compared to those obtained using the ADTV system. These results are evaluated in the context of the CCSDS specifications, and some suggestions are made as to how the DigiCipher system could be implemented in the NASA network. Simulations such as the ones reported can be biased depending on the particular source sequence used. In order to get more complete information about the system, one needs to obtain a reasonable set of models which mirror the various kinds of sources encountered during video coding. A set of models which can be used to effectively model the various possible scenarios is provided. As this is somewhat tangential to the other work reported, the results are included as an appendix.
Real-time video compressing under DSP/BIOS
NASA Astrophysics Data System (ADS)
Chen, Qiu-ping; Li, Gui-ju
2009-10-01
This paper presents real-time MPEG-4 Simple Profile video compression on a DSP processor. The programming framework is built around a TMS320C6416 microprocessor, a TDS510 simulator and a PC. It uses the embedded real-time operating system DSP/BIOS and its API functions to create periodic functions, tasks and interrupts, realizing real-time video compression. To address data transfer within the system, and based on the architecture of the C64x DSP, double buffering and the EDMA data transfer controller are used to move data from external to internal memory, so that data transfer and processing can proceed concurrently; architecture-level optimizations are applied to improve the software pipeline. The system uses DSP/BIOS for multi-thread scheduling and achieves high-speed transfer of large amounts of data. Experimental results show that the encoder achieves real-time encoding of 768x576, 25 frame/s video.
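The abstract describes overlapping EDMA transfers with processing via two switched ("ping-pong") buffers. The TI-specific DSP/BIOS and EDMA APIs are not reproduced here; the following Python sketch only illustrates the ping-pong pattern itself, with a thread standing in for the DMA controller, and is an assumption-laden analogy rather than DSP code.

```python
import threading, queue

def ping_pong_pipeline(fetch_block, process_block, num_blocks):
    """Illustrative ping-pong scheme: while one buffer is being processed,
    the other is being filled (the role the EDMA controller plays on the
    C64x), then the two swap. `fetch_block(i)` and `process_block(buf)`
    stand in for the DMA transfer and the encoding work."""
    filled = queue.Queue(maxsize=2)     # at most two buffers in flight

    def producer():
        for i in range(num_blocks):
            filled.put(fetch_block(i))  # "DMA" fills one buffer
        filled.put(None)                # sentinel: no more data

    threading.Thread(target=producer, daemon=True).start()
    while True:
        buf = filled.get()              # wait for a full buffer
        if buf is None:
            break
        process_block(buf)              # encode while the next buffer fills

# toy run: "fetch" returns a list of numbers, "process" sums it
ping_pong_pipeline(lambda i: list(range(i, i + 4)), lambda b: print(sum(b)), 3)
```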
Indexing and retrieval of MPEG compressed video
NASA Astrophysics Data System (ADS)
Kobla, Vikrant; Doermann, David S.
1998-04-01
To keep pace with the increased popularity of digital video as an archival medium, the development of techniques for fast and efficient analysis of video streams is essential. In particular, solutions to the problems of storing, indexing, browsing, and retrieving video data from large multimedia databases are necessary to allow access to these collections. Given that video is often stored efficiently in a compressed format, the costly overhead of decompression can be reduced by analyzing the compressed representation directly. In earlier work, we presented compressed-domain parsing techniques which identified shots, subshots, and scenes. In this article, we present efficient key frame selection, feature extraction, indexing, and retrieval techniques that are directly applicable to MPEG compressed video. We develop a frame type independent representation which normalizes spatial and temporal features including frame type, frame size, macroblock encoding, and motion compensation vectors. Features for indexing are derived directly from this representation and mapped to a low-dimensional space where they can be accessed using standard database techniques. Spatial information is used as the primary index into the database and temporal information is used to rank retrieved clips and enhance the robustness of the system. The techniques presented enable efficient indexing, querying, and retrieval of compressed video, as demonstrated by our system, which typically takes a fraction of a second to retrieve similar video scenes from a database with over 95 percent recall.
NASA Technical Reports Server (NTRS)
1996-01-01
Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.
The experiments and analysis of several selective video encryption methods
NASA Astrophysics Data System (ADS)
Zhang, Yue; Yang, Cheng; Wang, Lei
2013-07-01
This paper presents four methods for selective video encryption based on MPEG-2 video compression, targeting the slices, the I-frames, the motion vectors, and the DCT coefficients, respectively. We use AES encryption in simulation experiments for the four methods on the VS2010 platform, and compare the visual effect and the per-frame processing speed after the video is encrypted. The encryption depth can be selected arbitrarily and is designed using a double limit counting method, so that its accuracy can be increased.
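A minimal sketch of one of the four variants (encrypting quantized DCT coefficients) is given below, assuming the PyCryptodome package provides the AES primitive. The byte packing of coefficients and the CTR mode are illustrative simplifications; the paper's slice, I-frame and motion-vector variants would follow the same pattern on different syntax elements.

```python
import numpy as np
from Crypto.Cipher import AES   # PyCryptodome; assumed available

def encrypt_dct_coefficients(coeffs, key, nonce):
    """Selectively encrypt quantized DCT coefficients (one possible
    'encryption depth'). Coefficients are packed as int16 bytes, run through
    AES in CTR mode, and unpacked again so the payload keeps the same size."""
    flat = np.asarray(coeffs, dtype=np.int16)
    cipher = AES.new(key, AES.MODE_CTR, nonce=nonce)
    encrypted = cipher.encrypt(flat.tobytes())
    return np.frombuffer(encrypted, dtype=np.int16).reshape(flat.shape)

def decrypt_dct_coefficients(enc, key, nonce):
    """CTR mode is symmetric, so decryption uses the same keystream."""
    cipher = AES.new(key, AES.MODE_CTR, nonce=nonce)
    plain = cipher.decrypt(np.asarray(enc, dtype=np.int16).tobytes())
    return np.frombuffer(plain, dtype=np.int16).reshape(np.shape(enc))

key, nonce = b"0123456789abcdef", b"12345678"
block = np.arange(64, dtype=np.int16).reshape(8, 8)   # toy 8x8 coefficient block
assert np.array_equal(decrypt_dct_coefficients(
    encrypt_dct_coefficients(block, key, nonce), key, nonce), block)
```

Note that encrypting coefficients this way does not by itself keep the result a standard-compliant MPEG-2 stream; that constraint is part of what the paper's methods have to manage.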
System on a chip with MPEG-4 capability
NASA Astrophysics Data System (ADS)
Yassa, Fathy; Schonfeld, Dan
2002-12-01
Current products supporting video communication applications rely on existing computer architectures. RISC processors have been used successfully in numerous applications over several decades. DSP processors have become ubiquitous in signal processing and communication applications. Real-time applications such as speech processing in cellular telephony rely extensively on the computational power of these processors. Video processors designed to implement the computationally intensive codec operations have also been used to address the high demands of video communication applications (e.g., cable set-top boxes and DVDs). This paper presents an overview of a system-on-chip (SOC) architecture used for real-time video in wireless communication applications. The SOC specifications respond to the system requirements imposed by the application environment. A CAM-based video processor is used to accelerate data-intensive video compression tasks such as motion estimation and filtering. Other components are dedicated to system-level data processing and audio processing. A rich set of I/Os allows the SOC to communicate with other system components such as baseband and memory subsystems.
Indexing method of digital audiovisual medical resources with semantic Web integration.
Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre
2005-03-01
Digitalization of audiovisual resources and network capability offer many possibilities which are the subject of intensive work in scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Motion Pictures Expert Group (MPEG) has developed MPEG-7, a standard for describing multimedia content. The goal of this standard is to develop a rich set of standardized tools to enable efficient retrieval from digital archives or the filtering of audiovisual broadcasts on the Internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant to MPEG-7. We use MeSH and the UMLS to introduce conceptual navigation. We also present a video-platform which enables encoding and gives access to audiovisual resources in streaming mode.
Yoo, Sun K; Kim, D K; Jung, S M; Kim, E-K; Lim, J S; Kim, J H
2004-01-01
A Web-based, realtime, tele-ultrasound consultation system was designed. The system employed ActiveX control, MPEG-4 coding of full-resolution ultrasound video (640 x 480 pixels at 30 frames/s) and H.320 videoconferencing. It could be used via a Web browser. The system was evaluated over three types of commercial line: a cable connection, ADSL and VDSL. Three radiologists assessed the quality of compressed and uncompressed ultrasound video-sequences from 16 cases (10 abnormal livers, four abnormal kidneys and two abnormal gallbladders). The radiologists' scores showed that, at a given frame rate, increasing the bit rate was associated with increasing quality; however, at a certain threshold bit rate the quality did not increase significantly. The peak signal to noise ratio (PSNR) was also measured between the compressed and uncompressed images. In most cases, the PSNR increased as the bit rate increased, and increased as the number of dropped frames increased. There was a threshold bit rate, at a given frame rate, at which the PSNR did not improve significantly. Taking into account both sets of threshold values, a bit rate of more than 0.6 Mbit/s, at 30 frames/s, is suggested as the threshold for the maintenance of diagnostic image quality.
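The PSNR measurement mentioned in this abstract is a standard calculation; the sketch below shows one plausible way to compute it per frame and average it over a sequence. The convention of skipping dropped frames is an assumption, since the abstract does not specify how dropped frames were handled.

```python
import numpy as np

def psnr(reference, compressed, peak=255.0):
    """Peak signal-to-noise ratio between an uncompressed reference frame and
    its compressed counterpart, both given as arrays of equal shape."""
    ref = np.asarray(reference, dtype=float)
    comp = np.asarray(compressed, dtype=float)
    mse = np.mean((ref - comp) ** 2)
    if mse == 0:
        return float("inf")            # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

def sequence_psnr(ref_frames, cmp_frames):
    """Average PSNR over a sequence, skipping frames dropped by the codec
    (represented here as None)."""
    values = [psnr(r, c) for r, c in zip(ref_frames, cmp_frames) if c is not None]
    return sum(values) / len(values)
```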
An optimal adder-based hardware architecture for the DCT/SA-DCT
NASA Astrophysics Data System (ADS)
Kinane, Andrew; Muresan, Valentin; O'Connor, Noel
2005-07-01
The explosive growth of the mobile multimedia industry has accentuated the need for efficient VLSI implementations of the associated computationally demanding signal processing algorithms. This need becomes greater as end-users demand increasingly enhanced features and more advanced underpinning video analysis. One such feature is object-based video processing as supported by the MPEG-4 core profile, which allows content-based interactivity. MPEG-4 has many computationally demanding underlying algorithms, an example of which is the Shape Adaptive Discrete Cosine Transform (SA-DCT). The dynamic nature of the SA-DCT processing steps poses significant VLSI implementation challenges, and many of the previously proposed approaches use area- and power-consumptive multipliers. Most also ignore the subtleties of the packing steps and the manipulation of the shape information. We propose a new multiplier-less serial datapath based solely on adders and multiplexers to improve area and power. The adder cost is minimised by employing resource re-use methods. The number of (physical) adders used has been derived using a common sub-expression elimination algorithm. Additional energy efficiency is factored into the design by employing guarded evaluation and local clock gating. Our design implements the SA-DCT packing with minimal switching, using efficient addressing logic with a transpose memory RAM. The entire design has been synthesized using TSMC 0.09µm TCBN90LP technology, yielding a gate count of 12028 for the datapath and its control logic.
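The core idea of a multiplier-less datapath is to replace each constant multiplication with shifts and adds, and to share common sub-expressions across constants. The Python sketch below illustrates that idea only; the constants, the 23x example, and the sharing shown are illustrative and are not taken from the paper's synthesized design.

```python
def shift_add_multiply(x, constant):
    """Multiplier-less evaluation of constant * x using only shifts and adds,
    the basic trick behind adder-only DCT datapaths."""
    result, bit = 0, 0
    while constant:
        if constant & 1:
            result += x << bit         # add a shifted copy for each set bit
        constant >>= 1
        bit += 1
    return result

# common-sub-expression flavour: 23x can be built from a shared term 3x,
# so several filter taps that need 3x can reuse the same adder.
x = 7
t3 = (x << 1) + x                      # shared term 3x
y23 = (t3 << 3) - x                    # 23x = 8*(3x) - x, two adders in total
assert y23 == 23 * x == shift_add_multiply(x, 23)
```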
Fabrication of MPEG-b-PMAA capped YVO4:Eu nanoparticles with biocompatibility for cell imaging.
Liu, Yue; Li, Xiao-Shuang; Hu, Jia; Guo, Miao; Liu, Wei-Jun; Feng, Yi-Mei; Xie, Jing-Ran; Du, Gui-Xiang
2015-12-01
A novel nanoparticle with multilayer core-shell architecture for cell imaging is designed and synthesized by coating a fluorescent YVO4:Eu core with a diblock copolymer, MPEG-b-PMAA. The synthesis of YVO4:Eu core, which further makes MPEG-b-PMAA-YVO4:Eu NPs adapt for cell imaging, is guided by the model determined upon the evaluation of pH and CEu%. The PMAA block attached tightly on the YVO4:Eu core forms the inner shell and the MPEG block forms the biocompatible outermost shell. Factors including reaction time, reaction temperature, CEu% and pH are optimized for the preparation of the YVO4:Eu NPs. A precise defined model is established according to analyzing the coefficients of pH and CEu% during the synthesis. The MPEG-b-PMAA-YVO4:Eu NPs, with an average diameter of 24 nm, have a tetragonal structure and demonstrate luminescence in the red region, which lies in a biological window (optical imaging). Significant enhancement in luminescence intensity by MPEG-b-PMAA-YVO4:Eu NPs formation is observed. The capping copolymer MPEG-b-PMAA improves the dispersibility of hydrophobic YVO4:Eu NPs in water, making the NPs stable under different conditions. In addition, the biocompatibility MPEG layer reduces the cytotoxicity of the nanoparticles effectively. 95% cell viability can be achieved at the NPs concentration of 800 mgL(-1) after 24h of culture. Cellular uptake of the MPEG-b-PMAA-YVO4:Eu NPs is evaluated by cell imaging assay, indicating that the NPs can be taken up rapidly and largely by cancerous or non-cancerous cells through an endocytosis mechanism. Copyright © 2015 Elsevier B.V. All rights reserved.
Experimental service of 3DTV broadcasting relay in Korea
NASA Astrophysics Data System (ADS)
Hur, Namho; Ahn, Chung-Hyun; Ahn, Chieteuk
2002-11-01
This paper introduces 3D HDTV relay broadcasting experiments of 2002 FIFA World Cup Korea/Japan using a terrestrial and satellite network. We have developed 3D HDTV cameras, 3D HDTV video multiplexer/demultiplexer, a 3D HDTV receiver, and a 3D HDTV OB van for field productions. By using a terrestrial and satellite network, we distributed a compressed 3D HDTV signal to predetermined demonstration venues which are approved by host broadcast services (HBS), KirchMedia, and FIFA. In this case, we transmitted a 40Mbps MPEG-2 transport stream (DVB-ASI) over a DS-3 network specified in ITU-T Rec. G.703. The video/audio compression formats are MPEG-2 main-profile, high-level and Dolby Digital AC-3 respectively. Then at venues, the recovered left and right images by the 3D HDTV receiver are displayed on a screen with polarized beam projectors.
Indexing method of digital audiovisual medical resources with semantic Web integration.
Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre
2003-01-01
Digitalization of audio-visual resources, combined with the capabilities of networks, offers many possibilities which are the subject of intensive work in the scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Motion Pictures Expert Group (MPEG) has been developing MPEG-7, a standard for describing multimedia content. The goal of this standard is to develop a rich set of standardized tools to enable fast and efficient retrieval from digital archives or filtering of audiovisual broadcasts on the Internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant with MPEG-7. We use MeSH and the UMLS to introduce conceptual navigation. We also present a video platform which enables encoding and gives access to audio-visual resources in streaming mode.
Video segmentation and camera motion characterization using compressed data
NASA Astrophysics Data System (ADS)
Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain
1997-10-01
We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detect scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques directly process and analyze MPEG-1 motion vectors, without the need for video decompression. Experimental results are reported for a database of news video clips.
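The sketch below shows one plausible least-squares fit of pan/tilt/zoom parameters to macroblock motion vectors, of the kind described for the second task. The simple linear model u = pan + zoom*x, v = tilt + zoom*y is an assumption for illustration; the paper's exact camera-motion parameterization may differ.

```python
import numpy as np

def estimate_pan_tilt_zoom(positions, motion_vectors):
    """Least-squares fit of a simple pan/tilt/zoom model to MPEG-1 macroblock
    motion vectors. `positions` are (x, y) block centres relative to the image
    centre and `motion_vectors` the corresponding (u, v) vectors."""
    pos = np.asarray(positions, dtype=float)
    mv = np.asarray(motion_vectors, dtype=float)
    n = len(pos)
    # unknowns: [pan, tilt, zoom]
    A = np.zeros((2 * n, 3))
    b = np.zeros(2 * n)
    A[0::2, 0], A[0::2, 2], b[0::2] = 1.0, pos[:, 0], mv[:, 0]
    A[1::2, 1], A[1::2, 2], b[1::2] = 1.0, pos[:, 1], mv[:, 1]
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return {"pan": params[0], "tilt": params[1], "zoom": params[2]}

# toy check: pure zoom of 0.1 plus a horizontal pan of 2 pixels
pts = [(-8, -8), (8, -8), (-8, 8), (8, 8)]
mvs = [(2 + 0.1 * x, 0.1 * y) for x, y in pts]
print(estimate_pan_tilt_zoom(pts, mvs))   # ~ pan 2, tilt 0, zoom 0.1
```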
Moving to continuous facial expression space using the MPEG-4 facial definition parameter (FDP) set
NASA Astrophysics Data System (ADS)
Karpouzis, Kostas; Tsapatsoulis, Nicolas; Kollias, Stefanos D.
2000-06-01
Research in facial expression has concluded that at least six emotions, conveyed by human faces, are universally associated with distinct expressions. Sadness, anger, joy, fear, disgust and surprise are categories of expressions that are recognizable across cultures. In this work we form a relation between the description of the universal expressions and the MPEG-4 Facial Definition Parameter Set (FDP). We also investigate the relation between the movement of basic FDPs and the parameters that describe emotion-related words according to some classical psychological studies. In particular Whissel suggested that emotions are points in a space, which seem to occupy two dimensions: activation and evaluation. We show that some of the MPEG-4 Facial Animation Parameters (FAPs), approximated by the motion of the corresponding FDPs, can be combined by means of a fuzzy rule system to estimate the activation parameter. In this way variations of the six archetypal emotions can be achieved. Moreover, Plutchik concluded that emotion terms are unevenly distributed through the space defined by dimensions like Whissel's; instead they tend to form an approximately circular pattern, called 'emotion wheel,' modeled using an angular measure. The 'emotion wheel' can be defined as a reference for creating intermediate expressions from the universal ones, by interpolating the movement of dominant FDP points between neighboring basic expressions. By exploiting the relation between the movement of the basic FDP point and the activation and angular parameters we can model more emotions than the primary ones and achieve efficient recognition in video sequences.
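As a rough illustration of the two operations described above (estimating activation from FAP magnitudes and interpolating between neighbouring archetypal expressions on the emotion wheel), the following sketch uses a simple weighted combination in place of the paper's fuzzy rule system. The FAP selection, weights and blending are assumptions, not the published rules.

```python
def activation_from_faps(fap_values, weights=None):
    """Illustrative stand-in for a fuzzy rule system: combine the magnitudes
    of a few expression-related FAPs into one activation estimate in [0, 1]."""
    if weights is None:
        weights = {name: 1.0 for name in fap_values}
    total = sum(weights.values())
    score = sum(weights[n] * min(abs(v), 1.0) for n, v in fap_values.items())
    return score / total

def interpolate_expression(fdp_a, fdp_b, alpha):
    """Blend the FDP displacements of two neighbouring archetypal expressions
    to obtain an intermediate expression on the 'emotion wheel'."""
    return {p: (1 - alpha) * fdp_a[p] + alpha * fdp_b[p] for p in fdp_a}

# toy example: halfway between 'joy' and 'surprise' for one feature point
joy = {"lip_corner_left": 0.6}
surprise = {"lip_corner_left": 0.1}
print(interpolate_expression(joy, surprise, 0.5))   # {'lip_corner_left': 0.35}
```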
High performance MPEG-audio decoder IC
NASA Technical Reports Server (NTRS)
Thorn, M.; Benbassat, G.; Cyr, K.; Li, S.; Gill, M.; Kam, D.; Walker, K.; Look, P.; Eldridge, C.; Ng, P.
1993-01-01
The emerging digital audio and video compression technology brings both an opportunity and a new challenge to IC design. The pervasive application of compression technology to consumer electronics will require high volume, low cost IC's and fast time to market of the prototypes and production units. At the same time, the algorithms used in the compression technology result in complex VLSI IC's. The conflicting challenges of algorithm complexity, low cost, and fast time to market have an impact on device architecture and design methodology. The work presented in this paper is about the design of a dedicated, high precision, Motion Picture Expert Group (MPEG) audio decoder.
Free viewpoint TV and its international standardization
NASA Astrophysics Data System (ADS)
Tanimoto, Masayuki
2009-05-01
We have developed a new type of television named FTV (Free-viewpoint TV). FTV is an innovative visual medium that enables us to view a 3D scene by freely changing our viewpoints. We proposed the concept of FTV and constructed the world's first real-time system including the complete chain of operation from image capture to display. We also realized FTV on a single PC and FTV with free listening-point audio. FTV is based on the ray-space method that represents one ray in real space with one point in the ray-space. We have also developed new types of ray capture and display technologies such as a 360-degree mirror-scan ray capturing system and a 360-degree ray-reproducing display. MPEG regarded FTV as the most challenging 3D medium and started the international standardization activities of FTV. The first phase of FTV is MVC (Multi-view Video Coding) and the second phase is 3DV (3D Video). MVC was completed in March 2009. 3DV is a standard that targets serving a variety of 3D displays. It will be completed within the next two years.
Benchmarking multimedia performance
NASA Astrophysics Data System (ADS)
Zandi, Ahmad; Sudharsanan, Subramania I.
1998-03-01
With the introduction of faster processors and special instruction sets tailored to multimedia, a number of exciting applications are now feasible on the desktop. Among these is DVD playback, consisting, among other things, of MPEG-2 video and Dolby Digital audio or MPEG-2 audio. Other multimedia applications such as video conferencing and speech recognition are also becoming popular on computer systems. In view of this tremendous interest in multimedia, a group of major computer companies has formed the Multimedia Benchmarks Committee as part of the Standard Performance Evaluation Corp. to address the performance issues of multimedia applications. The approach is multi-tiered, with three tiers of fidelity from minimal to fully compliant. In each case the fidelity of the bitstream reconstruction as well as the quality of the video or audio output are measured and the system is classified accordingly. In the next step the performance of the system is measured. In many multimedia applications such as DVD playback the application needs to run at a specific rate, so the measurement of excess processing power makes all the difference. All of this makes a system-level, application-based multimedia benchmark very challenging. Several ideas and methodologies for each aspect of these problems are presented and analyzed.
A real-time MPEG software decoder using a portable message-passing library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwong, Man Kam; Tang, P.T. Peter; Lin, Biquan
1995-12-31
We present a real-time MPEG software decoder that uses message-passing libraries such as MPL, p4 and MPI. The parallel MPEG decoder currently runs on the IBM SP system but can be easily ported to other parallel machines. This paper discusses our parallel MPEG decoding algorithm as well as the parallel programming environment under which it runs. Several technical issues are discussed, including balancing of decoding speed, memory limitations, I/O capacities, and optimization of MPEG decoding components. This project shows that a real-time portable software MPEG decoder is feasible on a general-purpose parallel machine.
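A minimal sketch of one common parallelization pattern for such a decoder (distributing independently decodable groups of pictures across ranks) is shown below using the mpi4py binding. The original work used MPL/p4/MPI from C; mpi4py, the script name, and the GOP-per-rank split are assumptions for illustration, not the paper's decomposition.

```python
# Run with e.g.:  mpiexec -n 4 python decode_parallel.py   (file name hypothetical)
from mpi4py import MPI        # modern MPI binding; the paper used MPL/p4/MPI in C

def decode_gop(gop):
    """Stand-in for decoding one group of pictures; returns decoded frames."""
    return [f"decoded({frame})" for frame in gop]

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    # master splits the bitstream into independently decodable GOPs
    gops = [[f"gop{i}_frame{j}" for j in range(3)] for i in range(size)]
else:
    gops = None

my_gop = comm.scatter(gops, root=0)     # one GOP per rank
my_frames = decode_gop(my_gop)          # decode in parallel
all_frames = comm.gather(my_frames, root=0)

if rank == 0:
    # master reorders frames and hands them to the display at the playback rate
    ordered = [frame for chunk in all_frames for frame in chunk]
    print(len(ordered), "frames decoded by", size, "ranks")
```

Balancing decoding speed against memory and I/O, as the abstract notes, is the harder part in practice; this sketch ignores those constraints.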
Modeling of video compression effects on target acquisition performance
NASA Astrophysics Data System (ADS)
Cha, Jae H.; Preece, Bradley; Espinola, Richard L.
2009-05-01
The effect of video compression on image quality was investigated from the perspective of target acquisition performance modeling. Human perception tests were conducted recently at the U.S. Army RDECOM CERDEC NVESD, measuring identification (ID) performance on simulated military vehicle targets at various ranges. These videos were compressed with different quality and/or quantization levels utilizing motion JPEG, motion JPEG2000, and MPEG-4 encoding. To model the degradation on task performance, the loss in image quality is fit to an equivalent Gaussian MTF scaled by the Structural Similarity Image Metric (SSIM). Residual compression artifacts are treated as 3-D spatio-temporal noise. This 3-D noise is found by taking the difference of the uncompressed frame, with the estimated equivalent blur applied, and the corresponding compressed frame. Results show good agreement between the experimental data and the model prediction. This method has led to a predictive performance model for video compression by correlating various compression levels to particular blur and noise input parameters for NVESD target acquisition performance model suite.
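A rough sketch of the blur-plus-noise decomposition described above follows: estimate an equivalent Gaussian blur for each compressed frame, subtract the blurred reference from the compressed frame to get the residual noise, and report SSIM. The brute-force sigma search and the use of scikit-image's SSIM are illustrative assumptions, not the NVESD fitting procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity

def equivalent_blur_and_noise(reference, compressed,
                              sigmas=np.linspace(0.1, 3.0, 30)):
    """Estimate the Gaussian blur that best explains the compressed frame,
    then treat what is left over as compression noise (blurred reference
    minus compressed frame, per the abstract's recipe)."""
    ref = np.asarray(reference, dtype=float)
    comp = np.asarray(compressed, dtype=float)
    best_sigma = min(sigmas,
                     key=lambda s: np.mean((gaussian_filter(ref, s) - comp) ** 2))
    blurred = gaussian_filter(ref, best_sigma)
    noise = comp - blurred                      # one frame's slice of the 3-D noise
    ssim = structural_similarity(ref, comp, data_range=ref.max() - ref.min())
    return best_sigma, noise, ssim

def spatiotemporal_noise(ref_frames, cmp_frames):
    """Stack per-frame residuals into the 3-D spatio-temporal noise cube."""
    return np.stack([equivalent_blur_and_noise(r, c)[1]
                     for r, c in zip(ref_frames, cmp_frames)])
```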
Idbeaa, Tarik; Abdul Samad, Salina; Husain, Hafizah
2016-01-01
This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most of the current video steganographic techniques which take into account only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, the EBBD basically deals with two security concepts: data encryption and data concealing. Hence, during the embedding process, secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security to the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo-random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD with a better trade-off in terms of imperceptibility and payload, as compared with previous techniques, while at the same time ensuring minimal bitrate increase and negligible degradation of PSNR values. PMID:26963093
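The keyed selection of candidate coefficient positions mentioned above can be illustrated with a small sketch; the per-block seeding convention and payload size shown here are assumptions, not the EBBD specification.

```python
import random

def candidate_ac_positions(key, payload_bits, block_index):
    """Select candidate AC coefficient positions inside one 8x8 sub-block
    using a pseudo-random key, as the scheme's security argument requires.
    Position 0 (the DC coefficient) is excluded."""
    rng = random.Random(key * 1000003 + block_index)   # per-block keyed PRNG
    return rng.sample(range(1, 64), payload_bits)      # distinct AC positions

print(candidate_ac_positions(key=0xC0FFEE, payload_bits=4, block_index=17))
```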
Voxel-based Immersive Environments
2000-05-31
3D accelerated hardware. While this method lends itself well to modern hardware, the quality of the resulting images was low due to the coarse sampling...pipes. We will use MPEG video compression when sending video over a T1 line, whereas for a 56K bit Internet connection, we can use one of the more...sent over the communication line. The ultimate goal is to send the immersive environment over the 56K bps Internet. Since we need to send audio and
An 802.11n wireless local area network transmission scheme for wireless telemedicine applications.
Lin, C F; Hung, S I; Chiang, I H
2010-10-01
In this paper, an 802.11n transmission scheme is proposed for wireless telemedicine applications. The IEEE 802.11n standards, a power assignment strategy, space-time block coding (STBC), and an object composition Petri net (OCPN) model are adopted. With the proposed wireless system, G.729 audio bit streams, Joint Photographic Experts Group 2000 (JPEG 2000) clinical images, and Moving Picture Experts Group 4 (MPEG-4) video bit streams simultaneously achieve transmission bit error rates (BER) of 10^-7, 10^-4, and 10^-3, respectively. The proposed system meets the requirements prescribed for wireless telemedicine applications. An essential feature of this proposed transmission scheme is that clinical information that requires a high quality of service (QoS) is transmitted at a high power transmission rate with significant error protection. For maximizing resource utilization and minimizing the total transmission power, STBC and adaptive modulation techniques are used in the proposed 802.11n wireless telemedicine system. Further, low power, direct mapping (DM), a low-error protection scheme, and high-level modulation are adopted for messages that can tolerate a high BER. With the proposed transmission scheme, the required reliability of communication can be achieved. Our simulation results show that the proposed 802.11n transmission scheme can be used for developing effective wireless telemedicine systems.
Real-time 3D video compression for tele-immersive environments
NASA Astrophysics Data System (ADS)
Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William
2006-01-01
Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore from a different angle of the 3D compression space with factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
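A rough sketch of the depth-channel side of the two schemes described above is given below: a simple run-length encoder (the second scheme pairs RLE with Huffman coding) and a zlib deflate pass of the kind the first scheme applies. The 16-bit depth format and the toy scene are assumptions; colour-stream handling and inter-frame aspects are omitted.

```python
import zlib
import numpy as np

def rle_encode(depth_row):
    """Run-length encode one row of a depth map as (value, run) pairs; the
    paper follows RLE with Huffman coding, which zlib loosely approximates."""
    runs, current, count = [], depth_row[0], 1
    for v in depth_row[1:]:
        if v == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = v, 1
    runs.append((current, count))
    return runs

def compress_depth(depth):
    """Scheme-1-style depth compression: pack the depth map and deflate it."""
    depth = np.asarray(depth, dtype=np.uint16)
    return zlib.compress(depth.tobytes(), 6)

depth = np.zeros((480, 640), dtype=np.uint16)
depth[100:200, 100:200] = 1200            # a flat object in front of the background
print(len(compress_depth(depth)), "bytes for", depth.nbytes, "raw bytes")
print(rle_encode(depth[150])[:3])
```

Depth maps from such capture rigs tend to contain large flat regions, which is why run-length and dictionary coders do well here even without motion estimation, consistent with the compression ratios the abstract reports.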
Automatic facial animation parameters extraction in MPEG-4 visual communication
NASA Astrophysics Data System (ADS)
Yang, Chenggen; Gong, Wanwei; Yu, Lu
2002-01-01
Facial Animation Parameters (FAPs) are defined in MPEG-4 to animate a facial object. The algorithm proposed in this paper to extract these FAPs is applied to very low bit-rate video communication, in which the scene is composed of a head-and-shoulders object against a complex background. This paper addresses an algorithm to automatically extract all FAPs needed to animate a generic facial model and to estimate the 3D motion of the head from feature points. The proposed algorithm extracts the human facial region by color segmentation and intra-frame and inter-frame edge detection. Facial structure and the edge distribution of facial features, such as vertical and horizontal gradient histograms, are used to locate the facial feature regions. Parabola and circle deformable templates are employed to fit facial features and extract part of the FAPs. A special data structure is proposed to describe the deformable templates, reducing the time consumed in computing energy functions. The remaining FAPs, the 3D rigid head motion vectors, are estimated by a corresponding-points method. A 3D head wire-frame model provides facial semantic information for the selection of proper corresponding points, which helps to increase the accuracy of 3D rigid object motion estimation.
Motion-adaptive model-assisted compatible coding with spatiotemporal scalability
NASA Astrophysics Data System (ADS)
Lee, JaeBeom; Eleftheriadis, Alexandros
1997-01-01
We introduce the concept of motion-adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye in terms of space and time in moving images, taking object motion into account. Previous STMAC was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas, including the eyes and lips, need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very low bit-rate compression.
Semantic transcoding of video based on regions of interest
NASA Astrophysics Data System (ADS)
Lim, Jeongyeon; Kim, Munchurl; Kim, Jong-Nam; Kim, Kyeongsoo
2003-06-01
Traditional transcoding of multimedia has been performed from the perspective of user terminal capabilities, such as display sizes and decoding processing power, and network resources, such as available network bandwidth and quality of service (QoS). The adaptation (or transcoding) of multimedia content to such constraints has been achieved by frame dropping and resizing of audiovisual content, as well as by reduction of SNR (Signal-to-Noise Ratio) values to save the resulting bitrates. Beyond such traditional transcoding performed from the perspective of the user's environment, we incorporate a method of semantic transcoding of audiovisual content based on regions of interest (ROI) from the user's perspective. Users can designate the parts of images or video they are interested in, so that the corresponding video content can be adapted with a focus on the user's ROI. We incorporate the MPEG-21 DIA (Digital Item Adaptation) framework, in which such semantic information about the user's ROI is represented and delivered to the content provider side as an XDI (context digital item). Our representation schema for the semantic information of the user's ROI has been adopted in the MPEG-21 DIA Adaptation Model. In this paper, we present the usage of semantic information about the user's ROI for transcoding and show our system implementation with experimental results.
European Union RACE program contributions to digital audiovisual communications and services
NASA Astrophysics Data System (ADS)
de Albuquerque, Augusto; van Noorden, Leon; Badique', Eric
1995-02-01
The European Union RACE (R&D in advanced communications technologies in Europe) and the future ACTS (advanced communications technologies and services) programs have been contributing and continue to contribute to world-wide developments in audio-visual services. The paper focuses on research progress in: (1) Image data compression. Several methods of image analysis leading to the use of encoders based on improved hybrid DCT-DPCM (MPEG or not), object oriented, hybrid region/waveform or knowledge-based coding methods are discussed. (2) Program production in the aspects of 3D imaging, data acquisition, virtual scene construction, pre-processing and sequence generation. (3) Interoperability and multimedia access systems. The diversity of material available and the introduction of interactive or near- interactive audio-visual services led to the development of prestandards for video-on-demand (VoD) and interworking of multimedia services storage systems and customer premises equipment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Wu-chi; Crawfis, Roger, Weide, Bruce
2002-02-01
In this project, the authors propose the research, development, and distribution of a stackable component-based multimedia streaming protocol middleware service. The goals of this stackable middleware interface include: (1) The middleware service will provide application writers and scientists easy to use interfaces that support their visualization needs. (2) The middleware service will support a variety of image compression modes. Currently, many of the network adaptation protocols for video have been developed with DCT-based compression algorithms like H.261, MPEG-1, or MPEG-2 in mind. It is expected that with advanced scientific computing applications the lossy compression of the image data will be unacceptable in certain instances. The middleware service will support several in-line lossless compression modes for error-sensitive scientific visualization data. (3) The middleware service will support two different types of streaming video modes: one for interactive collaboration of scientists and a stored video streaming mode for viewing prerecorded animations. The use of two different streaming types will allow the quality of the video delivered to the user to be maximized. Most importantly, this service will happen transparently to the user (with some basic controls exported to the user for domain specific tweaking). In the spirit of layered network protocols (like ISO and TCP/IP), application writers should not have to know a large amount about lower level network details. Currently, many example video streaming players have their congestion management techniques tightly integrated into the video player itself and are, for the most part, ''one-off'' applications. As more networked multimedia and video applications are written in the future, a larger percentage of these programmers and scientists will most likely know little about the underlying networking layer. By providing a simple, powerful, and semi-transparent middleware layer, the successful completion of this project will help serve as a catalyst to support future video-based applications, particularly those of advanced scientific computing applications.
Structure and self-assembly properties of a new chitosan-based amphiphile.
Huang, Yuping; Yu, Hailong; Guo, Liang; Huang, Qingrong
2010-06-17
A new chitosan-based amphiphile, octanoyl-chitosan-polyethylene glycol monomethyl ether (acylChitoMPEG), has been prepared using both hydrophobic octanoyl and hydrophilic polyethylene glycol monomethyl ether (MPEG) substitutions. The success of synthesis was confirmed by Fourier transform infrared (FT-IR) and (1)H NMR spectroscopy. The synthesized acylChitoMPEG exhibited good solubility in either aqueous solution or common organic solvents such as ethanol, acetone, and CHCl(3). The self-aggregation behavior of acylChitoMPEG in solutions was studied by a combination of pyrene fluorescence technique, dynamic light scattering, atomic force microscopy, and small-angle X-ray scattering (SAXS). The critical aggregation concentration (CAC) and hydrodynamic diameter were found to be 0.066 mg/mL and 24.4 nm, respectively. SAXS results suggested a coiled structure of the triple helical acylChitoMPEG backbone with the hydrophobic moieties hiding in the center of the backbone, and the hydrophilic MPEG chains surrounding the acylChitoMPEG backbone in a random Gaussian chain conformation. Cytotoxicity results showed that acylChitoMPEG exhibited negligible cytotoxicity even at concentrations as high as 1.0 mg/mL. All results implied that acylChitoMPEG has the potential to be used for biological or medical applications.
A generic flexible and robust approach for intelligent real-time video-surveillance systems
NASA Astrophysics Data System (ADS)
Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit
2004-05-01
In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to provide help to operators by detecting events of interest in visual scenes, highlighting alarms and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video input (composite, IP, IEEE 1394) and which can basically compress (MPEG-4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised for playback, display, and processing of video flows in an efficient way for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and IP networking. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology and under the control of a TCP-based command network (e.g. for bandwidth occupation control). We report some results and show the potential use of such a flexible system in third-generation video surveillance systems. We illustrate the interest of the system in a real case study of indoor surveillance.
Logo recognition in video by line profile classification
NASA Astrophysics Data System (ADS)
den Hollander, Richard J. M.; Hanjalic, Alan
2003-12-01
We present an extension to earlier work on recognizing logos in video stills. The logo instances considered here are rigid planar objects observed at a distance in the scene, so the possible perspective transformation can be approximated by an affine transformation. For this reason we can classify the logos by matching (invariant) line profiles. We enhance our previous method by considering multiple line profiles instead of a single profile of the logo. The positions of the lines are based on maxima in the Hough transform space of the segmented logo foreground image. Experiments are performed on MPEG1 sport video sequences to show the performance of the proposed method.
NASA Astrophysics Data System (ADS)
Gou, MaLing; Shi, HuaShan; Guo, Gang; Men, Ke; Zhang, Juan; Zheng, Lan; Li, ZhiYong; Luo, Feng; Qian, ZhiYong; Zhao, Xia; Wei, YuQuan
2011-03-01
In an attempt to improve anticancer activity and reduce systemic toxicity of doxorubicin (Dox), we encapsulated Dox in monomethoxy poly(ethylene glycol)-poly(ɛ-caprolactone) (MPEG-PCL) micelles by a novel self-assembly procedure without using surfactants, organic solvents or vigorous stirring. These Dox encapsulated MPEG-PCL (Dox/MPEG-PCL) micelles with drug loading of 4.2% were monodisperse and ~ 20 nm in diameter. The Dox can be released from the Dox/MPEG-PCL micelles; the Dox-release at pH 5.5 was faster than that at pH 7.0. Encapsulation of Dox in MPEG-PCL micelles enhanced the cellular uptake and cytotoxicity of Dox on the C-26 colon carcinoma cell in vitro, and slowed the extravasation of Dox in the transgenic zebrafish model. Compared to free Dox, Dox/MPEG-PCL micelles were more effective in inhibiting tumor growth in the subcutaneous C-26 colon carcinoma and Lewis lung carcinoma models, and prolonging survival of mice bearing these tumors. Dox/MPEG-PCL micelles also induced lower systemic toxicity than free Dox. In conclusion, incorporation of Dox in MPEG-PCL micelles enhanced the anticancer activity and decreased the systemic toxicity of Dox; these Dox/MPEG-PCL micelles are an interesting formulation of Dox and may have potential clinical applications in cancer therapy.
The Impact on Education of the World Wide Web.
ERIC Educational Resources Information Center
Hobbs, D. J.; Taylor, R. J.
This paper describes a project which created a set of World Wide Web (WWW) pages documenting the state of the art in educational multimedia design; a prototype WWW-based multimedia teaching tool--a podiatry test using HTML forms, 24-bit color images and MPEG video--was also designed, developed, and evaluated. The project was conducted between…
Parameterized Facial Expression Synthesis Based on MPEG-4
NASA Astrophysics Data System (ADS)
Raouzaiou, Amaryllis; Tsapatsoulis, Nicolas; Karpouzis, Kostas; Kollias, Stefanos
2002-12-01
In the framework of MPEG-4, one can include applications where virtual agents, utilizing both textual and multisensory data, including facial expressions and nonverbal speech, help systems become attuned to the actual feelings of the user. Applications of this technology are expected in educational environments, virtual collaborative workplaces, communities, and interactive entertainment. Facial animation has gained much interest within the MPEG-4 framework, with implementation details being an open research area (Tekalp, 1999). In this paper, we describe a method for enriching human computer interaction, focusing on analysis and synthesis of primary and intermediate facial expressions (Ekman and Friesen (1978)). To achieve this goal, we utilize facial animation parameters (FAPs) to model primary expressions and describe a rule-based technique for handling intermediate ones. A relation between FAPs and the activation parameter proposed in classical psychological studies is established, leading to parameterized facial expression analysis and synthesis notions, compatible with the MPEG-4 standard.
Detection and localization of copy-paste forgeries in digital videos.
Singh, Raahat Devender; Aggarwal, Naveen
2017-12-01
Amidst the continual march of technology, we find ourselves relying on digital videos to proffer visual evidence in several highly sensitive areas such as journalism, politics, civil and criminal litigation, and military and intelligence operations. However, despite being an indispensable source of information with high evidentiary value, digital videos are also extremely vulnerable to conscious manipulations. Therefore, in a situation where dependence on video evidence is unavoidable, it becomes crucial to authenticate the contents of this evidence before accepting them as an accurate depiction of reality. Digital videos can suffer from several kinds of manipulations, but perhaps one of the most consequential forgeries is copy-paste forgery, which involves insertion/removal of objects into/from video frames. Copy-paste forgeries alter the information presented by the video scene, which has a direct effect on our basic understanding of what that scene represents, and so, from a forensic standpoint, the challenge of detecting such forgeries is especially significant. In this paper, we propose a sensor pattern noise based copy-paste detection scheme, which is an improved and forensically stronger version of an existing noise-residue based technique. We also study a demosaicing artifact based image forensic scheme to estimate the extent of its viability in the domain of video forensics. Furthermore, we suggest a simplistic clustering technique for the detection of copy-paste forgeries, and determine if it possesses the capabilities desired of a viable and efficacious video forensic scheme. Finally, we validate these schemes on a set of realistically tampered MJPEG, MPEG-2, MPEG-4, and H.264/AVC encoded videos in a diverse experimental set-up by varying the strength of post-production re-compressions and transcodings, bitrates, and sizes of the tampered regions. Such an experimental set-up is representative of a neutral testing platform and simulates a real-world forgery scenario where the forensic investigator has no control over any of the variable parameters of the tampering process. When tested in such an experimental set-up, the four forensic schemes achieved varying levels of detection accuracy and exhibited different scopes of applicability. For videos compressed using QFs in the range 70-100, the existing noise residue based technique generated average detection accuracy in the range 64.5%-82.0%, while the proposed sensor pattern noise based scheme generated average accuracy in the range 89.9%-98.7%. For the aforementioned range of QFs, average accuracy rates achieved by the suggested clustering technique and the demosaicing artifact based approach were in the range 79.1%-90.1% and 83.2%-93.3%, respectively. Copyright © 2017 Elsevier B.V. All rights reserved.
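The general shape of a sensor-pattern-noise check can be sketched as follows: extract a noise residue per frame and correlate it block-wise against the camera's reference pattern, flagging regions whose correlation drops. The median-filter denoiser, block size, and absence of thresholding are assumptions for illustration; the paper's pipeline (and standard PRNU practice, which favours wavelet denoising) is more involved.

```python
import numpy as np
from scipy.ndimage import median_filter

def noise_residue(frame, size=3):
    """Noise residue of a frame: the frame minus a denoised version of itself.
    A median filter is used here as a simple stand-in denoiser."""
    frame = np.asarray(frame, dtype=float)
    return frame - median_filter(frame, size=size)

def block_correlation(residue, reference_pattern, block=64):
    """Normalized correlation between a frame's noise residue and the camera's
    reference sensor pattern, computed per block; unusually low correlation in
    a block is a cue that the region may have been pasted from elsewhere."""
    h, w = residue.shape
    scores = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            a = residue[y:y + block, x:x + block].ravel()
            b = reference_pattern[y:y + block, x:x + block].ravel()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            scores[(y, x)] = float(a @ b / denom) if denom else 0.0
    return scores
```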
Progesterone binding nano-carriers based on hydrophobically modified hyperbranched polyglycerols
NASA Astrophysics Data System (ADS)
Alizadeh Noghani, M.; Brooks, D. E.
2016-02-01
Progesterone (Pro) is a potent neurosteroid and promotes recovery from moderate Traumatic Brain Injury but its clinical application is severely impeded by its poor water solubility. Here we demonstrate that reversibly binding Pro within hydrophobically modified hyperbranched polyglycerol (HPG-Cn-MPEG) enhances its solubility, stability and bioavailability. Synthesis, characterization and Pro loading into HPG-Cn-MPEG is described. The release kinetics are correlated with structural properties and the results of Differential Scanning Calorimetry studies of a family of HPG-Cn-MPEGs of varying molecular weight and alkylation. While the maximum amount of Pro bound correlates well with the amount of alkyl carbon per molecule contributing to its hydrophobicity, the dominant first order rate constant for Pro release correlates strongly with the amount of structured or bound water in the dendritic domain of the polymer. The results provide evidence to justify more detailed studies of interactions with biological systems, both single cells and in animal models. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr08175k
Computationally Efficient Clustering of Audio-Visual Meeting Data
NASA Astrophysics Data System (ADS)
Hung, Hayley; Friedland, Gerald; Yeo, Chuohao
This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this article can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.
Binary Format for Scene (BIFS): combining MPEG-4 media to build rich multimedia services
NASA Astrophysics Data System (ADS)
Signes, Julien
1998-12-01
In this paper, we analyze the design concepts and some technical details behind the MPEG-4 standard, particularly the scene description layer, commonly known as the Binary Format for Scene (BIFS). We show how MPEG-4 may ease multimedia proliferation by offering a unique, optimized multimedia platform. Lastly, we analyze the potential of the technology for creating rich multimedia applications on various networks and platforms. An e-commerce application example is detailed, highlighting the benefits of the technology. Compression results show how rich applications may be built even on very low bit rate connections.
Perceptual video quality comparison of 3DTV broadcasting using multimode service systems
NASA Astrophysics Data System (ADS)
Ok, Jiheon; Lee, Chulhee
2015-05-01
Multimode service (MMS) systems allow broadcasters to provide multichannel services using a single HD channel. Using these systems, it is possible to provide 3DTV programs that can be watched either in three-dimensional (3-D) or two-dimensional (2-D) modes with backward compatibility. In the MMS system for 3DTV broadcasting using the Advanced Television Systems Committee standards, the left and the right views are encoded using MPEG-2 and H.264, respectively, and then transmitted using a dual HD streaming format. The left view, encoded using MPEG-2, assures 2-D backward compatibility while the right view, encoded using H.264, can be optionally combined with the left view to generate stereoscopic 3-D views. We analyze 2-D and 3-D perceptual quality when using the MMS system by comparing items in the frame-compatible format (top-bottom), which is a conventional transmission scheme for 3-D broadcasting. We performed perceptual 2-D and 3-D video quality evaluation assuming 3DTV programs are encoded using the MMS system and top-bottom format. The results show that MMS systems can be preferable with regard to perceptual 2-D and 3-D quality and backward compatibility.
Development of a microportable imaging system for otoscopy and nasoendoscopy evaluations.
VanLue, Michael; Cox, Kenneth M; Wade, James M; Tapp, Kevin; Linville, Raymond; Cosmato, Charlie; Smith, Tom
2007-03-01
Imaging systems for patients with cleft palate typically are not portable, but are essential to obtain an audiovisual record of nasoendoscopy and otoscopy procedures. Practitioners who evaluate patients in rural, remote, or otherwise medically underserved areas are expected to obtain audiovisual recordings of these procedures as part of standard clinical practice. Therefore, patients must travel substantial distances to medical facilities that have standard recording equipment. This project describes the specific components, strengths and weaknesses of an MPEG-4 digital recording system for otoscopy/nasoendoscopy evaluation of patients with cleft palate that is both portable and compatible with store-and-forward telemedicine applications. Three digital recording configurations (TabletPC, handheld digital video recorder, and an 8-mm digital camcorder) were used to record the audio/video signal from an analog video scope system. The handheld digital video recorder was most effective at capturing audio/video and displaying procedures in real time. The system described was particularly easy to use, because it required no postrecording file capture or compression for later review, transfer, and/or archiving. The handheld digital recording system was assembled from commercially available components. The portability and the telemedicine compatibility of the handheld digital video recorder offer a viable solution for the documentation of nasoendoscopy and otoscopy procedures in remote, rural, or other locations where reduced medical access precludes the use of larger component audio/video systems.
A Subband Coding Method for HDTV
NASA Technical Reports Server (NTRS)
Chung, Wilson; Kossentini, Faouzi; Smith, Mark J. T.
1995-01-01
This paper introduces a new HDTV coder based on motion compensation, subband coding, and high order conditional entropy coding. The proposed coder exploits the temporal and spatial statistical dependencies inherent in the HDTV signal by using intra- and inter-subband conditioning for coding both the motion coordinates and the residual signal. The new framework provides an easy way to control the system complexity and performance, and inherently supports multiresolution transmission. Experimental results show that the coder outperforms MPEG-2, while still maintaining relatively low complexity.
MPEG-4-based 2D facial animation for mobile devices
NASA Astrophysics Data System (ADS)
Riegel, Thomas B.
2005-03-01
The enormous spread of mobile computing devices (e.g. PDA, cellular phone, palmtop, etc.) emphasizes scalable applications, since users like to run their favorite programs on whatever terminal they are operating at that moment. Therefore, appliances that can be adapted to the hardware realities without losing much of their functionality are of interest. A good example of this is "Facial Animation," which offers an interesting way to achieve such "scalability." By employing MPEG-4, which provides its own profile for facial animation, a solution for low-power terminals including mobile phones is demonstrated. From the generic 3D MPEG-4 face a specific 2D head model is derived, which consists primarily of a portrait image superposed by a suitable warping mesh and adapted 2D animation rules. Thus the animation process of MPEG-4 need not be changed, and standard-compliant facial animation parameters can be used to displace the vertices of the mesh and warp the underlying image accordingly.
Introducing a Public Stereoscopic 3D High Dynamic Range (SHDR) Video Database
NASA Astrophysics Data System (ADS)
Banitalebi-Dehkordi, Amin
2017-03-01
High dynamic range (HDR) displays and cameras are paving their way through the consumer market at a rapid growth rate. Thanks to TV and camera manufacturers, HDR systems are now becoming available commercially to end users. This is taking place only a few years after the blooming of 3D video technologies. MPEG/ITU are also actively working towards the standardization of these technologies. However, preliminary research efforts in these video technologies are hampered by the lack of sufficient experimental data. In this paper, we introduce a Stereoscopic 3D HDR database of videos that is made publicly available to the research community. We explain the procedure taken to capture, calibrate, and post-process the videos. In addition, we provide insights into potential use-cases, challenges, and research opportunities implied by the combination of the higher dynamic range of HDR and the depth impression of 3D.
Overview of FTV (free-viewpoint television)
NASA Astrophysics Data System (ADS)
Tanimoto, Masayuki
2010-07-01
We have developed a new type of television named FTV (Free-viewpoint TV). FTV is the ultimate 3DTV that enables us to view a 3D scene by freely changing our viewpoints. We proposed the concept of FTV and constructed the world's first real-time system including the complete chain of operation from image capture to display. FTV is based on the ray-space method that represents one ray in real space with one point in the ray-space. We have developed ray capture, processing and display technologies for FTV. FTV can be carried out today in real time on a single PC or on a mobile player. We also realized FTV with free listening-point audio. The international standardization of FTV has been conducted in MPEG. The first phase of FTV was MVC (Multi-view Video Coding) and the second phase is 3DV (3D Video). MVC was completed in May 2009. The Blu-ray 3D specification has adopted MVC for compression. 3DV is a standard that targets serving a variety of 3D displays. The view generation function of FTV is used to decouple capture and display in 3DV. FDU (FTV Data Unit) is proposed as a data format for 3DV. The FDU can compensate for errors in the synthesized views caused by depth errors.
Complete regression of xenograft tumors using biodegradable mPEG-PLA-SN38 block copolymer micelles.
Lu, Lu; Zheng, Yan; Weng, Shuqiang; Zhu, Wenwei; Chen, Jinhong; Zhang, Xiaomin; Lee, Robert J; Yu, Bo; Jia, Huliang; Qin, Lunxiu
2016-06-01
7-Ethyl-10-hydroxycamptothecin (SN38) is an active metabolite of irinotecan (CPT-11) and the clinical application of SN38 is limited by its hydrophobicity and instability. To address these issues, a series of novel amphiphilic mPEG-PLA-SN38 conjugates were synthesized by linking SN38 to mPEG-PLA-SA, and they could form micelles by self-assembly. The effects of mPEG-PLA composition were studied in vitro and in vivo. The mean diameters of mPEG2K-PLA-SN38 micelles and mPEG4K-PLA-SN38 micelles were 10-20 nm and 120 nm, respectively, and mPEG2K-PLA-SN38 micelles showed greater antitumor efficacy than mPEG4K-PLA-SN38 micelles both in vitro and in vivo. These data suggest that the lengths of mPEG and PLA chains had a major impact on the physicochemical characteristics and antitumor activity of SN38-conjugate micelles. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lee, Seungwon; Park, Ilkwon; Kim, Manbae; Byun, Hyeran
2006-10-01
As digital broadcasting technologies have progressed rapidly, users' expectations for realistic and interactive broadcasting services have also increased. As one such service, 3D multi-view broadcasting has received much attention recently. In general, all the view sequences acquired at the server are transmitted to the client, and the user can then select some or all of the views according to the display capabilities. However, this kind of system requires high processing power at the server as well as at the client, which makes practical applications difficult. To overcome this problem, a relatively simple method is to transmit only the two view-sequences requested by the client in order to deliver a stereoscopic video. In such a system, effective communication between the server and the client is one of the important aspects. In this paper, we propose an efficient multi-view system that transmits two view-sequences and their depth maps according to the user's request. The view selection process is integrated into MPEG-21 DIA (Digital Item Adaptation) so that our system is compatible with the MPEG-21 multimedia framework. DIA is generally composed of resource adaptation and descriptor adaptation. One merit is that the SVA (stereoscopic video adaptation) descriptors defined in the DIA standard are used to deliver users' preferences and device capabilities. Furthermore, multi-view descriptions related to the multi-view camera and system are newly introduced. The syntax of the descriptions and their elements is represented in an XML (eXtensible Markup Language) schema. If the client sends an adapted descriptor (e.g., view numbers) to the server, the server returns the associated view sequences. Finally, we present a method that can reduce the visual discomfort a user might experience while viewing stereoscopic video. This discomfort occurs when the view changes, as well as when a stereoscopic image produces excessive disparity caused by a large baseline between two cameras. To solve the former, IVR (intermediate view reconstruction) is employed for a smooth transition between two stereoscopic view sequences; for the latter, a disparity adjustment scheme is used. Finally, from the implementation of a testbed and the experiments, we show the value and possibilities of our system.
Autosophy information theory provides lossless data and video compression based on the data content
NASA Astrophysics Data System (ADS)
Holtz, Klaus E.; Holtz, Eric S.; Holtz, Diana
1996-09-01
A new autosophy information theory provides an alternative to the classical Shannon information theory. Using the new theory in communication networks provides both a high degree of lossless compression and virtually unbreakable encryption codes for network security. The bandwidth in a conventional Shannon communication is determined only by the data volume and the hardware parameters, such as image size, resolution, or frame rates in television. The data content, or what is shown on the screen, is irrelevant. In contrast, the bandwidth in autosophy communication is determined only by data content, such as novelty and movement in television images. It is the data volume and hardware parameters that become irrelevant. Basically, the new communication methods use prior 'knowledge' of the data, stored in a library, to encode subsequent transmissions. The more 'knowledge' stored in the libraries, the higher the potential compression ratio. 'Information' is redefined as that which is not already known by the receiver. Everything already known is redundant and need not be re-transmitted. In a perfect communication each transmission code, called a 'tip,' creates a new 'engram' of knowledge in the library in which each tip transmission can represent any amount of data. Autosophy theories provide six separate learning modes, or omni dimensional networks, all of which can be used for data compression. The new information theory reveals the theoretical flaws of other data compression methods, including the Huffman, Ziv-Lempel, and LZW codes, and commercial compression codes such as V.42bis and MPEG-2.
Recovering DC coefficients in block-based DCT.
Uehara, Takeyuki; Safavi-Naini, Reihaneh; Ogunbona, Philip
2006-11-01
It is a common approach for JPEG and MPEG encryption systems to provide higher protection for dc coefficients and less protection for ac coefficients. Some authors have employed a cryptographic encryption algorithm for the dc coefficients and left the ac coefficients to techniques based on random permutation lists which are known to be weak against known-plaintext and chosen-ciphertext attacks. In this paper we show that in block-based DCT, it is possible to recover dc coefficients from ac coefficients with reasonable image quality and show the insecurity of image encryption methods which rely on the encryption of dc values using a cryptoalgorithm. The method proposed in this paper combines dc recovery from ac coefficients and the fact that ac coefficients can be recovered using a chosen ciphertext attack. We demonstrate that a method proposed by Tang to encrypt and decrypt MPEG video can be completely broken.
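As an illustration of the general idea behind DC recovery (a simplified sketch, not the authors' exact algorithm): once one block's DC value is known, a neighbouring block's DC can be estimated by choosing the value that minimises the intensity jump across the shared block boundary of the AC-only reconstructions. The function below assumes 8x8 pixel blocks that have already been inverse-transformed with their DC coefficient forced to zero.

```python
import numpy as np

def estimate_dc_from_left(left_block_full, right_block_ac_only):
    """Estimate the missing DC offset of the right block.

    left_block_full: 8x8 block with correct DC (fully reconstructed).
    right_block_ac_only: 8x8 block reconstructed with DC forced to zero.
    Smoothness assumption: pixel values change little across the vertical
    boundary, so the DC offset is roughly the mean boundary mismatch.
    """
    left_edge = left_block_full[:, -1]        # last column of the left block
    right_edge = right_block_ac_only[:, 0]    # first column of the right block
    return float(np.mean(left_edge - right_edge))

# Toy example: a smooth ramp split into two blocks; the right block loses its DC.
ramp = np.tile(np.arange(16, dtype=float), (8, 1))
left, right = ramp[:, :8], ramp[:, 8:]
right_ac_only = right - right.mean()          # simulate a missing DC term
print(estimate_dc_from_left(left, right_ac_only))
# prints 10.5; the true missing DC is 11.5 (the ramp's slope biases it by one step)
```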
Hierarchical video summarization
NASA Astrophysics Data System (ADS)
Ratakonda, Krishna; Sezan, M. Ibrahim; Crinon, Regis J.
1998-12-01
We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem that is encountered in home videos. We propose a hierarchical key-frame summarization algorithm where a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing where the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video with increasingly more detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest level key-frames are recursively clustered using a novel pairwise K-means clustering approach with temporal consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream. We also propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.
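As a rough illustration of clustering with a temporal-consecutiveness constraint (a sketch under simplified assumptions, not the authors' pairwise K-means algorithm): frames are represented by color histograms, only temporally adjacent clusters are allowed to merge, and the medoid of each final cluster is kept as a key-frame.

```python
import numpy as np

def summarize(histograms, n_keyframes):
    """Agglomerative clustering of frame histograms with temporal contiguity.

    histograms: (n_frames, n_bins) array, one color histogram per frame.
    Only neighbouring clusters (consecutive in time) may merge, so every
    cluster stays a contiguous segment of the video.
    """
    clusters = [[i] for i in range(len(histograms))]
    while len(clusters) > n_keyframes:
        centroids = [histograms[c].mean(axis=0) for c in clusters]
        # distance between each pair of temporally adjacent clusters
        gaps = [np.linalg.norm(centroids[i] - centroids[i + 1])
                for i in range(len(clusters) - 1)]
        j = int(np.argmin(gaps))                 # merge the most similar pair
        clusters[j:j + 2] = [clusters[j] + clusters[j + 1]]
    keyframes = []
    for c in clusters:                           # pick each cluster's medoid
        centroid = histograms[c].mean(axis=0)
        keyframes.append(c[int(np.argmin(
            [np.linalg.norm(histograms[f] - centroid) for f in c]))])
    return keyframes

hists = np.vstack([np.tile(v, (5, 1)) for v in np.eye(3)])  # 3 "shots", 5 frames each
print(summarize(hists, 3))                                  # one key-frame per shot
```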
Rich media streaming for just-in-time training of first responders
NASA Astrophysics Data System (ADS)
Bandera, Cesar; Marsico, Michael
2005-05-01
The diversity of first responders and of asymmetric threats precludes the effectiveness of any single training syllabus. Just-in-time training (JITT) addresses this variability, but requires training content to be quickly tailored to the subject (the threat), the learner (the responder), and the infrastructure (the C2 chain from DHS to the responder's equipment). We present a distributed system for personalized just-in-time training of first responders. The authoring and delivery of interactive rich media and simulations, and the integration of JITT with C2 centers, are demonstrated. Live and archived video, imagery, 2-D and 3-D models, and simulations are autonomously (1) aggregated from object-oriented databases into SCORM-compliant objects, (2) tailored to the individual learner's training history, preferences, connectivity and computing platform (from workstations to wireless PDAs), (3) conveyed as secure and reliable MPEG-4 compliant streams with data rights management, and (4) rendered as interactive high-definition rich media that promotes knowledge retention and the refinement of learner skills without the need of special hardware. We review the object-oriented implications of SCORM and the higher level profiles of the MPEG-4 standard, and show how JITT can be integrated into - and improve the ROI of - existing training infrastructures, including COTS content authoring tools, LMS/CMS, man-in-the-loop simulators, and legacy content. Lastly, we compare the audiovisual quality of different streaming platforms under varying connectivity conditions.
NASA Astrophysics Data System (ADS)
2001-01-01
Last year saw very good progress at ESO's Paranal Observatory, the site of the Very Large Telescope (VLT). The third and fourth 8.2-m Unit Telescopes, MELIPAL and YEPUN, had "First Light" (cf. PR 01/00 and PR 18/00), while the first two, ANTU and KUEYEN, were busy collecting first-class data for hundreds of astronomers. Meanwhile, work continued towards the next phase of the VLT project, the combination of the telescopes into the VLT Interferometer. The test instrument, VINCI (cf. PR 22/00), is now being installed in the VLTI Laboratory at the centre of the observing platform on the top of Paranal. Below is a new collection of video sequences and photos that illustrate the latest developments at the Paranal Observatory. They were obtained by the EPR Video Team in December 2000. The photos are available in different formats, including "high-resolution" versions suitable for reproduction purposes. A related ESO Video News Reel for professional broadcasters will soon become available and will be announced via the usual channels. ESO PR Video Clip 02a/01, "Paranal Observatory (December 2000)" (4875 frames/3:15 min; available in MPEG and RealMedia streaming formats), shows some of the construction activities at the Paranal Observatory in December 2000, beginning with a general view of the site. Then follow views of the Residencia, a building designed by Architects Auer and Weber in Munich - it integrates very well into the desert, creating a welcome recreational site for staff and visitors in this harsh environment. The next scenes focus on the "stations" for the auxiliary telescopes for the VLTI and the installation of two delay lines in the 140-m long underground tunnel. The following part of the video clip shows the start-up of the excavation work for the 2.6-m VLT Survey Telescope (VST) as well as the location known as the "NTT Peak", now under consideration for the installation of the 4-m VISTA telescope. The last images are from the second 8.2-m Unit Telescope, KUEYEN, which has been in full use by the astronomers with the UVES and FORS2 instruments since April 2000. PR Photo 04a/01 shows an afternoon view from the Paranal summit towards East, with the Base Camp and the new Residencia on the slope to the right, above the valley in the shadow of the mountain. PR Photo 04b/01 shows the ramp leading to the main entrance to the partly subterranean Residencia, with the steel skeleton for the dome over the central area in place. PR Photo 04c/01 is an indoor view of the reception hall under the dome, looking towards the main entrance. PR Photo 04d/01 shows the ramps from the reception area towards the rooms.
The VLT Interferometer: The Delay Lines constitute a most important element of the VLT Interferometer, cf. PR Photos 26a-e/00. At this moment, two Delay Lines are operational on site. A third system will be integrated early this year. The VLTI Delay Line is located in an underground tunnel that is 168 metres long and 8 metres wide. This configuration has been designed to accommodate up to eight Delay Lines, including their transfer optics, in an ideal environment: stable temperature, high degree of cleanliness, low levels of straylight, low air turbulence. The positions of the Delay Line carriages are computed to adjust the Optical Path Lengths required for the fringe pattern observation. The positions are controlled in real time by a laser metrology system, specially developed for this purpose. The position precision is about 20 nm (1 nm = 10^-9 m, or 1 millionth of a millimetre) over a distance of 120 metres. The maximum velocity is 0.50 m/s in positioning mode and at most 0.05 m/s during operation. The system is designed for 25 years of operation and to survive earthquakes up to magnitude 8.6 on the Richter scale. The VLTI Delay Line is a three-year project, carried out by ESO in collaboration with Dutch Space Holdings (formerly Fokker Space) and TPD-TNO. ESO PR Video Clip 02b/01, "VLTI Delay Lines (December 2000)" (2000 frames/1:20 min; available in MPEG and RealMedia streaming formats), shows the Delay Lines of the VLT Interferometer facility at Paranal during tests. One of the carriages is moving on 66-metre long rectified rails, driven by a linear motor. The carriage is equipped with three wheels in order to preserve high guidance accuracy. Another important element is the Cat's Eye that reflects the light from the telescope to the VLT instrumentation. This optical system is made of aluminium (including the mirrors) to avoid thermo-mechanical problems. PR Photo 04e/01 shows one of the 30 "stations" for the movable 1.8-m Auxiliary Telescopes. When one of these telescopes is positioned ("parked") on top of it, the light will be guided through the hole towards the Interferometric Tunnel and the Delay Lines. PR Photo 04f/01 shows a general view of the Interferometric Tunnel and the Delay Lines. PR Photo 04g/01 shows one of the Delay Line carriages in parking position. The "NTT Peak": The "NTT Peak" is a mountain top located about 2 km to the north of Paranal. It received this name when ESO considered moving the 3.58-m New Technology Telescope from La Silla to this peak. The possibility of installing the 4-m VISTA telescope (cf. PR 03/00) on this peak is now being discussed.
PR Photo 04h/01 shows the view from the "NTT Peak" towards the south, with the Paranal mountain and the VLT enclosures in the background. PR Photo 04i/01 is a view towards the "NTT Peak" from the top of the Paranal mountain; the access road and the concrete pillar that was used to support a site-testing telescope at the top of this peak are visible. This is the caption to ESO PR Photos 04a-i/01 and PR Video Clips 02a-b/01. They may be reproduced if credit is given to the European Southern Observatory. The ESO PR Video Clips service provides visitors to the ESO website with "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was ESO PR Video Clip 01/01 about the Physics On Stage Festival (11 January 2001). Information is also available on the web about other ESO videos.
AVC/H.264 patent portfolio license
NASA Astrophysics Data System (ADS)
Skandalis, Dean A.
2006-08-01
MPEG LA, LLC offers a joint patent license for the AVC (a/k/a H.264) Standard (ISO/IEC IS 14496-10:2004). Like MPEG LA's other licenses, the AVC Patent Portfolio License is offered for the convenience of the marketplace as an alternative enabling users to access essential intellectual property owned by many patent holders under a single license rather than negotiating licenses with each of them individually. The AVC Patent Portfolio License includes essential patents owned by DAEWOO Electronics Corporation; Electronics and Telecommunications Research Institute (ETRI); France Telecom, societe anonyme; Fujitsu Limited; Hitachi, Ltd.; Koninklijke Philips Electronics N.V.; LG Electronics Inc.; Matsushita Electric Industrial Co., Ltd.; Microsoft Corporation; Mitsubishi Electric Corporation; Robert Bosch GmbH; Samsung Electronics Co., Ltd.; Sedna Patent Services, LLC; Sharp Kabushiki Kaisha; Siemens AG; Sony Corporation; The Trustees of Columbia University in the City of New York; Toshiba Corporation; UB Video Inc.; and Victor Company of Japan, Limited. Another patent holder is also expected to join as of August 1, 2006. MPEG LA's objective is to provide worldwide access to as much AVC essential intellectual property as possible for the benefit of AVC users. Therefore, any party that believes it has essential patents is welcome to submit them for evaluation of their essentiality and inclusion in the License if found essential.
Macrophage-expressed perforins mpeg1 and mpeg1.2 have an anti-bacterial function in zebrafish.
Benard, Erica L; Racz, Peter I; Rougeot, Julien; Nezhinsky, Alexander E; Verbeek, Fons J; Spaink, Herman P; Meijer, Annemarie H
2015-01-01
Macrophage-expressed gene 1 (MPEG1) encodes an evolutionarily conserved protein with a predicted membrane attack complex/perforin domain associated with host defence against invading pathogens. In vertebrates, MPEG1/perforin-2 is an integral membrane protein of macrophages, suspected to be involved in the killing of intracellular bacteria by pore-forming activity. Zebrafish have 3 copies of MPEG1; 2 are expressed in macrophages, whereas the third could be a pseudogene. The mpeg1 and mpeg1.2 genes show differential regulation during infection of zebrafish embryos with the bacterial pathogens Mycobacterium marinum and Salmonella typhimurium. While mpeg1 is downregulated during infection with both pathogens, mpeg1.2 is infection inducible. Upregulation of mpeg1.2 is partially dependent on the presence of functional Mpeg1 and requires the Toll-like receptor adaptor molecule MyD88 and the transcription factor NFκB. Knockdown of mpeg1 alters the immune response to M. marinum infection and results in an increased bacterial burden. In Salmonella typhimurium infection, both mpeg1 and mpeg1.2 knockdown increase the bacterial burdens, but mpeg1 morphants show increased survival times. The combined results of these two in vivo infection models support the anti-bacterial function of the MPEG1/perforin-2 family and indicate that the intricate cross-regulation of the two mpeg1 copies aids the zebrafish host in combatting infection of various pathogens. © 2014 S. Karger AG, Basel.
A Brazilian educational experiment: teleradiology on web TV.
Silva, Angélica Baptista; de Amorim, Annibal Coelho
2009-01-01
Since 2004, educational videoconferences have been held in Brazil for paediatric radiologists in training. The RUTE network has been used, a high-speed national research and education network. Twelve videoconferences were recorded by the Health Channel and transformed into TV programmes, both for conventional broadcast and for access via the Internet. Between October 2007 and December 2009 the Health Channel website registered 2378 hits. Our experience suggests that for successful recording of multipoint videoconferences, four areas are important: (1) a pre-planned script is required, for both physicians and film-makers; (2) particular care is necessary when editing the audiovisual material; (3) the audio and video equipment requires careful adjustment to preserve clinical discussions and the quality of radiology images; (4) to produce a product suitable for both TV sets and computer devices, the master tape needs to be encoded in low resolution digital video formats for Internet media (wmv and rm format for streaming, and compressed zip files for downloading) and MPEG format for DVDs.
Fundamental study of compression for movie files of coronary angiography
NASA Astrophysics Data System (ADS)
Ando, Takekazu; Tsuchiya, Yuichiro; Kodera, Yoshie
2005-04-01
When network distribution of movie files is considered, lossy-compressed movie files with small file sizes can be useful. We chose three kinds of coronary stricture movies with different motion speeds as examination objects: movies with slow, normal and fast heart rates. MPEG-1, DivX5.11, WMV9 (Windows Media Video 9), and WMV9-VCM (Windows Media Video 9-Video Compression Manager) movies were made from the three kinds of AVI-format movies with different motion speeds. Five kinds of movies, the four kinds of compressed movies plus the uncompressed AVI used in place of the DICOM format, were evaluated by Thurstone's method. The evaluation factors were sharpness, granularity, contrast, and comprehensive evaluation. For the virtual bradycardia movie, AVI received the best evaluation on all factors except granularity. For the virtual normal-rate movie, the best compression technique differed between the evaluation factors. For the virtual tachycardia movie, MPEG-1 received the best evaluation on all factors except contrast. Thus the most suitable compression format depends on the speed of the movie, because of differences between the compression algorithms; this is thought to reflect the influence of inter-frame compression. Movie compression algorithms combine inter-frame and intra-frame compression, and since each method affects the images differently, it is necessary to examine the relation between the compression algorithm and our results.
NASA Astrophysics Data System (ADS)
Riera-Palou, Felip; den Brinker, Albertus C.
2007-12-01
This paper introduces a new audio and speech broadband coding technique based on the combination of a pulse excitation coder and a standardized parametric coder, namely, MPEG-4 high-quality parametric coder. After presenting a series of enhancements to regular pulse excitation (RPE) to make it suitable for the modeling of broadband signals, it is shown how pulse and parametric codings complement each other and how they can be merged to yield a layered bit stream scalable coder able to operate at different points in the quality bit rate plane. The performance of the proposed coder is evaluated in a listening test. The major result is that the extra functionality of the bit stream scalability does not come at the price of a reduced performance since the coder is competitive with standardized coders (MP3, AAC, SSC).
NASA Astrophysics Data System (ADS)
Tinker, Michael
1998-12-01
We are on the brink of transforming the movie theatre with electronic cinema. Technologies are converging to make true electronic cinema, with a 'film look,' possible for the first time. In order to realize the possibilities, we must leverage current technologies in video compression, electronic projection, digital storage, and digital networks. All these technologies have only recently improved sufficiently to make their use in the electronic cinema worthwhile. Video compression, such as MPEG-2, is designed to overcome the limitations of video, primarily limited bandwidth. As a result, although HDTV offers a serious challenge to film-based cinema, it falls short in a number of areas, such as color depth. Freed from the constraints of video transmission, and using the recently improved technologies available, electronic cinema can move beyond video. Although movies will have to be compressed for some time, what is needed is a concept of 'cinema compression' rather than video compression. Electronic cinema will open up vast new possibilities for viewing experiences at the theater, while at the same time offering up the potential for new economies in the movie industry.
Robust 3D DFT video watermarking
NASA Astrophysics Data System (ADS)
Deguillaume, Frederic; Csurka, Gabriela; O'Ruanaidh, Joseph J.; Pun, Thierry
1999-04-01
This paper proposes a new approach for digital watermarking and secure copyright protection of videos, the principal aim being to discourage illicit copying and distribution of copyrighted material. The method presented here is based on the discrete Fourier transform (DFT) of three dimensional chunks of video scene, in contrast with previous works on video watermarking where each video frame was marked separately, or where only intra-frame or motion compensation parameters were marked in MPEG compressed videos. Two kinds of information are hidden in the video: a watermark and a template. Both are encoded using an owner key to ensure the system security and are embedded in the 3D DFT magnitude of video chunks. The watermark is a copyright information encoded in the form of a spread spectrum signal. The template is a key based grid and is used to detect and invert the effect of frame-rate changes, aspect-ratio modification and rescaling of frames. The template search and matching is performed in the log-log-log map of the 3D DFT magnitude. The performance of the presented technique is evaluated experimentally and compared with a frame-by-frame 2D DFT watermarking approach.
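A minimal sketch of spread-spectrum embedding in the 3D DFT magnitude, purely illustrative and not the authors' exact scheme (the key-based template grid and the log-log-log map search are omitted): a pseudo-random ±1 sequence derived from an owner key is added, scaled by a strength factor, to selected mid-energy magnitude coefficients of a video chunk, and detection correlates the magnitudes at those positions with the same key sequence.

```python
import numpy as np

def embed_watermark(chunk, key, strength=5.0, n_coeffs=2000):
    """Embed a spread-spectrum watermark in the 3D DFT magnitude of a video chunk.

    chunk: (frames, height, width) float array (a few seconds of video).
    Returns the watermarked chunk and the coefficient indices used.
    """
    rng = np.random.default_rng(key)
    spectrum = np.fft.fftn(chunk)
    magnitude, phase = np.abs(spectrum), np.angle(spectrum)
    order = np.argsort(magnitude, axis=None)         # pick mid-energy coefficients
    idx = order[order.size // 2: order.size // 2 + n_coeffs]
    wm = rng.choice([-1.0, 1.0], size=n_coeffs)      # key-dependent +/-1 sequence
    mag_flat = magnitude.reshape(-1)
    mag_flat[idx] += strength * wm
    marked = np.fft.ifftn(mag_flat.reshape(chunk.shape) * np.exp(1j * phase)).real
    return marked, idx

def detect_watermark(chunk, key, idx):
    """Correlate the magnitudes at the embedding positions with the key sequence."""
    rng = np.random.default_rng(key)
    wm = rng.choice([-1.0, 1.0], size=idx.size)
    mag = np.abs(np.fft.fftn(chunk)).reshape(-1)[idx]
    return float(np.corrcoef(mag - mag.mean(), wm)[0, 1])

video = np.random.default_rng(0).random((16, 64, 64))
marked, idx = embed_watermark(video, key=1234)
# high correlation for the marked chunk, near zero for the unmarked one
print(detect_watermark(marked, 1234, idx), detect_watermark(video, 1234, idx))
```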
Annotation of UAV surveillance video
NASA Astrophysics Data System (ADS)
Howlett, Todd; Robertson, Mark A.; Manthey, Dan; Krol, John
2004-08-01
Significant progress toward the development of a video annotation capability is presented in this paper. Research and development of an object tracking algorithm applicable for UAV video is described. Object tracking is necessary for attaching the annotations to the objects of interest. A methodology and format is defined for encoding video annotations using the SMPTE Key-Length-Value encoding standard. This provides the following benefits: a non-destructive annotation, compliance with existing standards, video playback in systems that are not annotation enabled and support for a real-time implementation. A model real-time video annotation system is also presented, at a high level, using the MPEG-2 Transport Stream as the transmission medium. This work was accomplished to meet the Department of Defense's (DoD's) need for a video annotation capability. Current practices for creating annotated products are to capture a still image frame, annotate it using an Electric Light Table application, and then pass the annotated image on as a product. That is not adequate for reporting or downstream cueing. It is too slow and there is a severe loss of information. This paper describes a capability for annotating directly on the video.
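The annotation encoding above follows the SMPTE Key-Length-Value (KLV) convention: a 16-byte universal label key, a BER-encoded length, and the value bytes. The helper below is a minimal illustrative encoder for that triplet structure; the 16-byte key shown is a placeholder, not a registered SMPTE label.

```python
def ber_length(n: int) -> bytes:
    """BER length field: short form below 128, long form otherwise."""
    if n < 0x80:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def klv_encode(key: bytes, value: bytes) -> bytes:
    """Pack one SMPTE-style Key-Length-Value triplet."""
    if len(key) != 16:
        raise ValueError("SMPTE universal label keys are 16 bytes")
    return key + ber_length(len(value)) + value

# Placeholder 16-byte key and a text annotation tied to a tracked object.
key = bytes(range(16))
annotation = b'{"object_id": 7, "label": "vehicle of interest"}'
packet = klv_encode(key, annotation)
print(len(packet), packet[:20])
```

Because the triplet is self-describing, such packets can be multiplexed alongside the video in an MPEG-2 Transport Stream and ignored by players that are not annotation-aware, which is the non-destructive property the paper emphasises.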
Dong, Kai; Yan, Yan; Wang, Pengchong; Shi, Xianpeng; Zhang, Lu; Wang, Ke; Xing, Jianfeng; Dong, Yalin
2016-01-01
In this study, a type of multifunctional mixed micelles were prepared by a novel biodegradable amphiphilic polymer (MPEG-SS-2SA) and a multidrug resistance (MDR) reversal agent (d-α-tocopheryl polyethylene glycol succinate, TPGS). The mixed micelles could achieve rapid intracellular drug release and reversal of MDR. First, the amphiphilic polymer, MPEG-SS-2SA, was synthesized through disulfide bonds between poly (ethylene glycol) monomethyl ether (MPEG) and stearic acid (SA). The structure of the obtained polymer was similar to poly (ethylene glycol)-phosphatidylethanolamine (PEG-PE). Then the mixed micelles, MPEG-SS-2SA/TPGS, were prepared by MPEG-SS-2SA and TPGS through the thin film hydration method and loaded paclitaxel (PTX) as the model drug. The in vitro release study revealed that the mixed micelles could rapidly release PTX within 24 h under a reductive environment because of the breaking of disulfide bonds. In cell experiments, the mixed micelles significantly inhibited the activity of mitochondrial respiratory complex II, also reduced the mitochondrial membrane potential, and the content of adenosine triphosphate, thus effectively inhibiting the efflux of PTX from cells. Moreover, in the confocal laser scanning microscopy, cellular uptake and 3-(4,5-dimethyl-thiazol-2-yl)-2,5-diphenyl-tetrazolium bromide assays, the MPEG-SS-2SA/TPGS micelles achieved faster release and more uptake of PTX in Michigan Cancer Foundation-7/PTX cells and showed better antitumor effects as compared with the insensitive control. In conclusion, the biodegradable mixed micelles, MPEG-SS-2SA/TPGS, could be potential vehicles for delivering hydrophobic chemotherapeutic drugs in MDR cancer therapy. PMID:27785018
Zhang, Quan; Yuan, Yi; Li, Su-Bo; Dou, Na; Ma, Fu-Ling; Ji, Shou-Ping
2004-05-01
The aim was to find out why mPEG modification of donor lymphocytes can attenuate the occurrence of graft-versus-host disease (GVHD) without affecting the hemopoietic reconstitution of stem/progenitor cells after transplanting mPEG-modified mononuclear cells from human cord blood into SCID mice. The following were observed: (1) Changes of CD4(+) and CD8(+) T cells and the ratio of CD4(+)/CD8(+) T cells were examined by flow cytometry before and after mononuclear cells from human cord blood were modified with mPEG. (2) The difference in forming CFU-GM in vitro between the mPEG-modified stem/progenitor cell group and the non-modified cell group was observed. (3) The time of appearance of GVHD and the survival of the SCID mice were observed after the pre- and post-modification mononuclear cells were transplanted. (4) The number of humanized CD45(+) cells in the mouse bone marrow was detected about 7 weeks after transplantation. The results were as follows: (1) mPEG nearly completely covered up the CD4 and CD8 antigens on T cells, while the number of CFU-GM did not show any obvious change between the modified and non-modified cell groups. (2) GVHD appeared later in the modified mononuclear cell group than in the non-modified group, and the survival rate was higher in the modified group than in the non-modified group. (3) Humanized CD45 cells were found in mouse bone marrow on the 47th day after transplantation of both mPEG-modified and non-modified mononuclear cells. In conclusion, after the CD4 and CD8 antigens were covered up with mPEG, the graft's immune response against the host was weakened, but the proliferation and differentiation of transplanted hemopoietic stem/progenitor cells were not affected.
Zheng, Jia N; Xie, Hong G; Yu, Wei T; Liu, Xiu D; Xie, Wei Y; Zhu, Jing; Ma, Xiao J
2010-11-16
The chemical modification of the alginate/chitosan/alginate (ACA) hydrogel microcapsule with methoxy poly(ethylene glycol) (MPEG) was investigated to reduce nonspecific protein adsorption and improve biocompatibility in vivo. The graft copolymer chitosan-g-MPEG (CS-g-MPEG) was synthesized, and then alginate/chitosan/alginate/CS-g-MPEG (ACAC(PEG)) multilayer hydrogel microcapsules were fabricated by the layer-by-layer (LBL) polyelectrolyte self-assembly method. A quantitative study of the modification was carried out by the gel permeation chromatography (GPC) technique, and protein adsorption on the modified microcapsules was also investigated. The results showed that the apparent graft density of the MPEG side chain on the microcapsules decreased with increases in the degree of substitution (DS) and the MPEG chain length. During the binding process, the apparent graft density of CS-g-MPEG showed rapid growth-plateau-rapid growth behavior. CS-g-MPEG was not only bound to the surface but also penetrated a certain depth into the microcapsule membranes. The copolymers that penetrated the microcapsules made a smaller contribution to protein repulsion than did the copolymers on the surfaces of the microcapsules. The protein repulsion ability decreased with the increase in DS from 7 to 29% with the same chain length of MPEG 2K. CS-g-MPEG with MPEG 2K was more effective at protein repulsion than CS-g-MPEG with MPEG 550, having a similar DS below 20%. In this study, the microcapsules modified with CS-g-MPEG2K-DS7% had the lowest IgG adsorption of 3.0 ± 0.6 μg/cm(2), a reduction of 61% compared to that on the chitosan surface.
Qiu, Jin-Feng; Gao, Xiang; Wang, Bi-Lan; Wei, Xia-Wei; Gou, Ma-Ling; Men, Ke; Liu, Xing-Yu; Guo, Gang; Qian, Zhi-Yong; Huang, Mei-Juan
2013-01-01
Luteolin (Lu) is one of the flavonoids with anticancer activity, but its poor water solubility limits its use clinically. In this work, we used monomethoxy poly(ethylene glycol)-poly(ε-caprolactone) (MPEG-PCL) micelles to encapsulate Lu by a self-assembly method, creating a water-soluble Lu/MPEG-PCL micelle. These micelles had a mean particle size of 38.6 ± 0.6 nm (polydispersity index = 0.16 ± 0.02), encapsulation efficiency of 98.32% ± 1.12%, and drug loading of 3.93% ± 0.25%. Lu/MPEG-PCL micelles could slowly release Lu in vitro. Encapsulation of Lu in MPEG-PCL micelles improved the half-life (t½; 152.25 ± 49.92 versus [vs] 7.16 ± 1.23 minutes, P = 0.007), area under the curve (0-t) (2914.05 ± 445.17 vs 502.65 ± 140.12 mg/L/minute, P = 0.001), area under the curve (0–∞) (2989.03 ± 433.22 vs 503.81 ± 141.41 mg/L/minute, P = 0.001), and peak concentration (92.70 ± 11.61 vs 38.98 ± 7.73 mg/L, P = 0.003) of Lu when the drug was intravenously administered at a dose of 30 mg/kg in rats. Also, Lu/MPEG-PCL micelles maintained the cytotoxicity of Lu on 4T1 breast cancer cells (IC50 = 6.4 ± 2.30 μg/mL) and C-26 colon carcinoma cells (IC50 = 12.62 ± 2.17 μg/mL) in vitro. These data suggested that encapsulation of Lu into MPEG-PCL micelles created an aqueous formulation of Lu with potential anticancer effect. PMID:23990719
Droplet Combustion Experiment movie
NASA Technical Reports Server (NTRS)
2003-01-01
The Droplet Combustion Experiment (DCE) was designed to investigate the fundamental combustion aspects of single, isolated droplets under different pressures and ambient oxygen concentrations for a range of droplet sizes varying between 2 and 5 mm. The DCE principal investigator was Forman Williams, University of California, San Diego. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1 mission (STS-83, April 4-8, 1997; the shortened mission was reflown as MSL-1R on STS-94). Advanced combustion experiments will be a part of investigations planned for the International Space Station. (1.1 MB, 12-second MPEG, screen 320 x 240 pixels; downlinked video, higher quality not available) A still JPG composite of this movie is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300164.html.
Video Monitoring a Simulation-Based Quality Improvement Program in Bihar, India.
Dyer, Jessica; Spindler, Hilary; Christmas, Amelia; Shah, Malay Bharat; Morgan, Melissa; Cohen, Susanna R; Sterne, Jason; Mahapatra, Tanmay; Walker, Dilys
2018-04-01
Simulation-based training has become an accepted clinical training andragogy in high-resource settings with its use increasing in low-resource settings. Video recordings of simulated scenarios are commonly used by facilitators. Beyond using the videos during debrief sessions, researchers can also analyze the simulation videos to quantify technical and nontechnical skills during simulated scenarios over time. Little is known about the feasibility and use of large-scale systems to video record and analyze simulation and debriefing data for monitoring and evaluation in low-resource settings. This manuscript describes the process of designing and implementing a large-scale video monitoring system. Mentees and Mentors were consented and all simulations and debriefs conducted at 320 Primary Health Centers (PHCs) were video recorded. The system design, number of video recordings, and inter-rater reliability of the coded videos were assessed. The final dataset included a total of 11,278 videos. Overall, a total of 2,124 simulation videos were coded and 183 (12%) were blindly double-coded. For the double-coded sample, the average inter-rater reliability (IRR) scores were 80% for nontechnical skills, and 94% for clinical technical skills. Among 4,450 long debrief videos received, 216 were selected for coding and all were double-coded. Data quality of simulation videos was found to be very good in terms of recorded instances of "unable to see" and "unable to hear" in Phases 1 and 2. This study demonstrates that video monitoring systems can be effectively implemented at scale in resource limited settings. Further, video monitoring systems can play several vital roles within program implementation, including monitoring and evaluation, provision of actionable feedback to program implementers, and assurance of program fidelity.
Detection of goal events in soccer videos
NASA Astrophysics Data System (ADS)
Kim, Hyoung-Gook; Roeber, Steffen; Samour, Amjad; Sikora, Thomas
2005-01-01
In this paper, we present an automatic extraction of goal events in soccer videos by using audio track features alone, without relying on expensive-to-compute video track features. The extracted goal events can be used for high-level indexing and selective browsing of soccer videos. The detection of soccer video highlights using audio content comprises three steps: 1) extraction of audio features from a video sequence, 2) candidate detection of highlight events based on the information provided by the feature extraction methods and the Hidden Markov Model (HMM), 3) goal event selection to finally determine the video intervals to be included in the summary. For this purpose we compared the performance of the well-known Mel-scale Frequency Cepstral Coefficients (MFCC) feature extraction method vs. the MPEG-7 Audio Spectrum Projection (ASP) feature extraction method based on three different decomposition methods, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-Negative Matrix Factorization (NMF). To evaluate our system we collected five soccer game videos from various sources. In total we have seven hours of soccer games consisting of eight gigabytes of data. One of the five soccer games is used as the training data (e.g., announcers' excited speech, audience ambient speech noise, audience clapping, environmental sounds). Our goal event detection results are encouraging.
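A compact sketch of steps 1 and 2 under simplified assumptions: it is not the authors' pipeline, it covers only the MFCC branch, and it relies on the third-party librosa and hmmlearn packages being installed. MFCC features are extracted from the audio track, a Gaussian HMM is trained on clips of excited commentary, and sliding windows of a full match are scored against that model to flag candidate highlight intervals. The file names and the threshold are hypothetical.

```python
import numpy as np
import librosa                    # audio feature extraction
from hmmlearn import hmm          # Gaussian hidden Markov models

def mfcc_features(path, sr=16000, n_mfcc=13):
    """Return an (n_frames, n_mfcc) MFCC matrix for one audio file."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_highlight_model(excited_clips, n_states=4):
    """Fit a Gaussian HMM on MFCCs from clips of excited commentary."""
    feats = [mfcc_features(p) for p in excited_clips]
    lengths = [len(f) for f in feats]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(np.vstack(feats), lengths)
    return model

def candidate_windows(model, path, win=100, threshold=-60.0):
    """Slide over a match recording and keep windows the model scores highly."""
    feats = mfcc_features(path)
    scores = [(i, model.score(feats[i:i + win]) / win)
              for i in range(0, len(feats) - win, win // 2)]
    return [i for i, s in scores if s > threshold]

# Hypothetical file names; the threshold would be tuned on held-out data.
# model = train_highlight_model(["excited_1.wav", "excited_2.wav"])
# print(candidate_windows(model, "full_match.wav"))
```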
Content-based retrieval using MPEG-7 visual descriptor and hippocampal neural network
NASA Astrophysics Data System (ADS)
Kim, Young Ho; Joung, Lyang-Jae; Kang, Dae-Seong
2005-12-01
With the development of digital technology, many kinds of multimedia data are used in diverse ways, and requirements for their effective use are increasing. Transferring quickly and precisely the information a user wants requires an effective retrieval method. MPEG-1, MPEG-2 and MPEG-4, which are aimed at compression, storage and transmission, cannot be applied to this problem, so MPEG-7 was introduced as a new technology for the effective management and retrieval of multimedia data. In this paper, we extract content-based features using a color descriptor from among the MPEG-7 standard visual descriptors, and reduce the feature data by applying the PCA (Principal Components Analysis) technique. We model the cerebral cortex and hippocampal neural networks on the principles of the human brain: the model labels the features of the input image data, following the structure of hippocampal neurons, into reaction patterns tuned in the dentate gyrus region, and removes noise through the auto-associative memory step in the CA3 region. The CA1 region, receiving the information from CA3, forms long-term or short-term memories learned by the neurons. The hippocampal neural network lets neurons split and combine dynamically, expands neurons by attaching additional information through synapses, and adds new features according to the situation on the user's demand. When a user issues a query, the system first compares feature values stored in long-term memory, learns the feature vector quickly and constructs an optimized feature, so indexing and retrieval are fast. Also, because it uses MPEG-7 standard visual descriptors as content-based feature values, retrieval efficiency is improved.
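The feature-reduction step mentioned above is ordinary PCA over descriptor vectors. A minimal sketch using scikit-learn (assumed to be available); the 256-bin color descriptor shape and the number of retained components are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
color_descriptors = rng.random((500, 256))   # 500 images, 256-bin color features

pca = PCA(n_components=32)                   # keep 32 principal components
reduced = pca.fit_transform(color_descriptors)

print(reduced.shape)                                        # (500, 32)
print(round(float(pca.explained_variance_ratio_.sum()), 3)) # variance retained

# A query image is projected with the same basis before nearest-neighbour search.
query = pca.transform(rng.random((1, 256)))
nearest = int(np.argmin(np.linalg.norm(reduced - query, axis=1)))
```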
Characterizing region of interest in image using MPEG-7 visual descriptors
NASA Astrophysics Data System (ADS)
Ryu, Min-Sung; Park, Soo-Jun; Won, Chee Sun
2005-08-01
In this paper, we propose a region-based image retrieval system using EHD (Edge Histogram Descriptor) and CLD (Color Layout Descriptor) of MPEG-7 descriptors. The combined descriptor can efficiently describe edge and color features in terms of sub-image regions. That is, the basic unit for the selection of the region-of-interest (ROI) in the image is the sub-image block of the EHD, which corresponds to 16 (i.e., 4x4) non-overlapping image blocks in the image space. This implies that, to have a one-to-one region correspondence between EHD and CLD, we need to take an 8x8 inverse DCT (IDCT) for the CLD. Experimental results show that the proposed retrieval scheme can be used for image retrieval with the ROI based image retrieval for MPEG-7 indexed images.
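To make the sub-image granularity concrete, here is a simplified sketch, not the normative MPEG-7 EHD filters or quantisation: the image is split into a 4x4 grid of sub-images and, for each, gradient orientations are binned into vertical, horizontal, two diagonal and non-directional edge counts, giving a 16x5 histogram whose rows can be matched per region of interest.

```python
import numpy as np

def edge_histogram(gray):
    """Simplified 4x4 sub-image edge histogram (stand-in for MPEG-7 EHD).

    For each of the 16 sub-images, count strong-gradient pixels in four
    orientation bins (horizontal, 45-degree, vertical, 135-degree) plus a
    fifth bin of weaker, 'non-directional' edge pixels. Returns (4, 4, 5).
    """
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = (np.degrees(np.arctan2(gy, gx)) + 90) % 180   # edge orientation
    strong, weak = mag > 2 * mag.mean(), mag > mag.mean()
    h, w = gray.shape
    hist = np.zeros((4, 4, 5))
    for i in range(4):
        for j in range(4):
            sl = np.s_[i * h // 4:(i + 1) * h // 4, j * w // 4:(j + 1) * w // 4]
            a = ang[sl][strong[sl]]
            # bins centred on 0 (horizontal), 45, 90 (vertical), 135 degrees
            bins = np.digitize(a, [22.5, 67.5, 112.5, 157.5]) % 4
            hist[i, j, :4] = np.bincount(bins, minlength=4)
            hist[i, j, 4] = np.sum(weak[sl] & ~strong[sl])
    return hist / max(hist.sum(), 1)          # normalise for matching

img = np.zeros((128, 128))
img[:, 64:] = 1.0                             # one vertical edge down the middle
print(edge_histogram(img)[0])                 # edge lands in bin 2 of columns j = 1, 2
```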
Yadav, Khushwant S; Jacob, Sheeba; Sachdeva, Geetanjali; Sawant, Krutika K
2011-08-01
The preferred delivery systems for anticancer drugs would be the one which would have selective and effective destruction of cancer cells. In the present study etoposide (ETO) loaded nanoparticles (NP) were prepared using PLGA (ETO-PLGA NP), PLGA-MPEG block copolymer (ETO-PLGA-MPEG NP) and PLGA-Pluronic copolymer (ETO-PLGA-PLU NP) and they were evaluated for cytotoxicity and cellular uptake studies using two cancer cell lines, L1210 and DU145. The IC50 values for L1210 cells were 18.0, 6.2, 4.8 and 5.4 microM and for DU145 cells the IC50 values were 98.4, 75.1, 60.1 and 71.3 microM for ETO, ETO-PLGA NP, ETO-PLGA-MPEG NP and ETO-PLGA-PLU NP respectively. The increased cytotoxicities were attributed to increased uptake of the NPs by the cells. Moreover the ETO loaded PLGA-MPEG NP and PLGA-Pluronic NP showed a sustained cytotoxic effect till 5 days on both the cell lines. Results of the long term cytotoxicity study concluded that the drug loaded PLGA nanoparticulate formulations were efficient in decreasing the viability of the L1210 cells over a period of three days, whereas the pure drug exerted its maximum efficiency on the day one itself. Z-stack confocal images of NPs showed fluorescence activity in each section of DU 145 and L1210 cells indicating that the nanoparticles were internalized by the cells. The study concluded that ETO loaded PLGA NPs had higher cytotoxicity compared with that of the free drug and ETO-PLGA-MPEG NP and ETO-PLGA-PLU NP had higher cell uptake efficiency compared with that of ETO-PLGA NP. The developed PLGA based NPs shows promise to be used for cancer therapy.
Scalable L-infinite coding of meshes.
Munteanu, Adrian; Cernea, Dan C; Alecu, Alin; Cornelis, Jan; Schelkens, Peter
2010-01-01
The paper investigates the novel concept of local-error control in mesh geometry encoding. In contrast to traditional mesh-coding systems that use the mean-square error as target distortion metric, this paper proposes a new L-infinite mesh-coding approach, for which the target distortion metric is the L-infinite distortion. In this context, a novel wavelet-based L-infinite-constrained coding approach for meshes is proposed, which ensures that the maximum error between the vertex positions in the original and decoded meshes is lower than a given upper bound. Furthermore, the proposed system achieves scalability in L-infinite sense, that is, any decoding of the input stream will correspond to a perfectly predictable L-infinite distortion upper bound. An instantiation of the proposed L-infinite-coding approach is demonstrated for MESHGRID, which is a scalable 3D object encoding system, part of MPEG-4 AFX. In this context, the advantages of scalable L-infinite coding over L-2-oriented coding are experimentally demonstrated. One concludes that the proposed L-infinite mesh-coding approach guarantees an upper bound on the local error in the decoded mesh, it enables a fast real-time implementation of the rate allocation, and it preserves all the scalability features and animation capabilities of the employed scalable mesh codec.
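The defining property above, a guaranteed bound on the maximum vertex error, can be illustrated with plain uniform quantisation (a toy sketch, not the wavelet-based codec of the paper): quantising every coordinate with step 2*bound keeps the L-infinite distortion at or below the bound by construction.

```python
import numpy as np

def quantize_vertices(vertices, max_error):
    """Uniformly quantise vertex coordinates with a guaranteed L-infinite bound.

    A step size of 2 * max_error means every reconstructed coordinate differs
    from the original by at most max_error, so the maximum vertex error is
    bounded regardless of the mesh.
    """
    step = 2.0 * max_error
    indices = np.round(vertices / step).astype(np.int64)  # what a codec would entropy-code
    return indices, indices * step

def l_infinite_distortion(original, decoded):
    """Largest absolute coordinate error over all vertices (L-infinite metric)."""
    return float(np.max(np.abs(original - decoded)))

verts = np.random.default_rng(1).random((10000, 3)) * 100.0
indices, decoded = quantize_vertices(verts, max_error=0.05)
print(l_infinite_distortion(verts, decoded))   # always <= 0.05
```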
Purification and Characterization of Methyl Phthalyl Ethyl Glycolate (MPEG)
2014-11-21
From the report's list of figures: Figure 1, Monsanto Method of MPEG Synthesis; Figure 2, Incon Method of MPEG Synthesis; Figure 3, Possible... ...least 1942 (Van Antwerpen, 1942), known then as Santicizer M-17 by the Monsanto Chemical Company. MPEG is used in HES 5808, a high solids-loading... ...characteristics vary from lot to lot. This situation has emerged since Monsanto no longer produces MPEG, and alternative vendors are currently being used.
Dual-Layer Video Encryption using RSA Algorithm
NASA Astrophysics Data System (ADS)
Chadha, Aman; Mallik, Sushmit; Chadha, Ankit; Johar, Ravdeep; Mani Roja, M.
2015-04-01
This paper proposes a video encryption algorithm using RSA and Pseudo Noise (PN) sequence, aimed at applications requiring sensitive video information transfers. The system is primarily designed to work with files encoded using the Audio Video Interleaved (AVI) codec, although it can be easily ported for use with Moving Picture Experts Group (MPEG) encoded files. The audio and video components of the source separately undergo two layers of encryption to ensure a reasonable level of security. Encryption of the video component involves applying the RSA algorithm followed by the PN-based encryption. Similarly, the audio component is first encrypted using PN and further subjected to encryption using the Discrete Cosine Transform. Combining these techniques, an efficient system, invulnerable to security breaches and attacks, with favorable values of parameters such as encryption/decryption speed, encryption/decryption ratio and visual degradation, has been put forth. For applications requiring encryption of sensitive data wherein stringent security requirements are of prime concern, the system is found to yield negligible similarities in visual perception between the original and the encrypted video sequence. For applications wherein visual similarity is not of major concern, we limit the encryption task to a single level of encryption which is accomplished by using RSA, thereby quickening the encryption process. Although some similarity between the original and encrypted video is observed in this case, it is not enough to comprehend the happenings in the video.
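A toy sketch of the video-component pipeline described above (RSA followed by PN masking), under simplifying assumptions that are not taken from the paper: the PN stream here comes from a seeded pseudo-random generator rather than a hardware sequence, and the RSA step is textbook, unpadded, per-byte encryption with a tiny demo key. A real deployment would use a proper key size, padding, and block handling.

```python
import numpy as np

def pn_stream(length, seed, bits=12):
    """Pseudo-noise stream (a seeded PRNG here; a real system might use an LFSR)."""
    return np.random.default_rng(seed).integers(0, 2 ** bits, size=length)

def encrypt_frame(frame_bytes, e, n, seed):
    """Video-layer sketch: textbook RSA per byte, then XOR with a PN stream."""
    rsa = [pow(int(b), e, n) for b in frame_bytes]
    return [c ^ int(p) for c, p in zip(rsa, pn_stream(len(rsa), seed))]

def decrypt_frame(cipher, d, n, seed):
    """Undo the PN mask, then RSA-decrypt each block back to a byte."""
    unmasked = [c ^ int(p) for c, p in zip(cipher, pn_stream(len(cipher), seed))]
    return bytes(pow(c, d, n) for c in unmasked)

# Tiny demo key: n = 61 * 53 = 3233, e = 17, d = 2753 (17 * 2753 = 1 mod 3120).
e, d, n = 17, 2753, 3233
frame = bytes(range(16))                                 # stand-in for one frame's bytes
cipher = encrypt_frame(frame, e, n, seed=42)
print(decrypt_frame(cipher, d, n, seed=42) == frame)     # True
```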
37 CFR 202.20 - Deposit of copies and phonorecords for copyright registration.
Code of Federal Regulations, 2013 CFR
2013-07-01
...; HTML; WAV; and MPEG family of formats, including MP3. This list of file formats is non-exhaustive and... distributed and used in such a manner that ownership and control of copies remain with the test sponsor or...” shall mean one of the following: (1) The first and last 25 pages or equivalent units of the source code...
The interactive contents authoring system for terrestrial digital multimedia broadcasting
NASA Astrophysics Data System (ADS)
Cheong, Won-Sik; Ahn, Sangwoo; Cha, Jihun; Moon, Kyung Ae
2007-02-01
This paper introduces an interactive contents authoring system which can easily and conveniently produce interactive contents for Terrestrial Digital Multimedia Broadcasting (T-DMB). For interactive broadcasting service, T-DMB adopted MPEG-4 Systems technology. For the interactive service to flourish in the market, various types of interactive content should be readily available before the service launches. In the MPEG-4 Systems specification, broadcasting contents are described by the combination of a large number of nodes, routes and descriptors. In order to provide interactive data services through the T-DMB network, it is essential to have an interactive contents authoring system which allows content authors to compose interactive contents easily and conveniently even if they lack any background in MPEG-4 Systems technology. The introduced authoring system provides a powerful graphical user interface and produces interactive broadcasting contents in both binary and textual formats. Therefore, the interactive contents authoring system presented in this paper should contribute greatly to a flourishing interactive service.
NASA Technical Reports Server (NTRS)
Garcia, M. J.; Thomas, J. D.; Greenberg, N.; Sandelski, J.; Herrera, C.; Mudd, C.; Wicks, J.; Spencer, K.; Neumann, A.; Sankpal, B.;
2001-01-01
Digital format is rapidly emerging as a preferred method for displaying and retrieving echocardiographic studies. The qualitative diagnostic accuracy of Moving Pictures Experts Group (MPEG-1) compressed digital echocardiographic studies has been previously reported. The goals of the present study were to compare quantitative measurements derived from MPEG-1 recordings with the super-VHS (sVHS) videotape clinical standard. Six reviewers performed blinded measurements from still-frame images selected from 20 echocardiographic studies that were simultaneously acquired in sVHS and MPEG-1 formats. Measurements were obtainable in 1401 (95%) of 1486 MPEG-1 variables compared with 1356 (91%) of 1486 sVHS variables (P <.001). Excellent agreement existed between MPEG-1 and sVHS 2-dimensional linear measurements (r = 0.97; MPEG-1 = 0.95[sVHS] + 1.1 mm; P <.001; Delta = 9% +/- 10%), 2-dimensional area measurements (r = 0.89), color jet areas (r = 0.87, p <.001), and Doppler velocities (r = 0.92, p <.001). Interobserver variability was similar for both sVHS and MPEG-1 readings. Our results indicate that quantitative off-line measurements from MPEG-1 digitized echocardiographic studies are feasible and comparable to those obtained from sVHS.
Quality metric for spherical panoramic video
NASA Astrophysics Data System (ADS)
Zakharchenko, Vladyslav; Choi, Kwang Pyo; Park, Jeong Hoon
2016-09-01
Virtual reality (VR) and augmented reality (AR) applications allow users to view artificial content of a surrounding space, simulating a presence effect with the help of special applications or devices. Synthetic content production is a well-known process in the computer graphics domain, and its pipeline is already established in the industry. However, emerging multimedia formats for immersive entertainment applications, such as free-viewpoint television (FTV) or spherical panoramic video, require different approaches to content management and quality assessment. The international standardization of FTV has been promoted by MPEG. This paper is dedicated to a discussion of the immersive media distribution format and the quality estimation process. The accuracy and reliability of the proposed objective quality estimation method were verified with spherical panoramic images, demonstrating good correlation with subjective quality estimation carried out by a group of experts.
Performance of the JPEG Estimated Spectrum Adaptive Postfilter (JPEG-ESAP) for Low Bit Rates
NASA Technical Reports Server (NTRS)
Linares, Irving (Inventor)
2016-01-01
Frequency-based, pixel-adaptive filtering using the JPEG-ESAP algorithm for low-bit-rate JPEG formatted color images may allow images to be compressed further while maintaining equivalent quality at a smaller file size or bit rate. For RGB, an image is decomposed into three color bands: red, green, and blue. The JPEG-ESAP algorithm is then applied to each band (e.g., once for red, once for green, and once for blue) and the output of each application of the algorithm is rebuilt into a single color image. The ESAP algorithm may be repeatedly applied to MPEG-2 video frames to reduce their bit rate by a factor of 2 or 3, while maintaining equivalent video quality, both perceptually and objectively, as recorded in the computed PSNR values.
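As a sketch of the per-band processing described above (not the actual ESAP filter, which is not reproduced in the abstract), the following Python fragment splits an RGB frame into its three bands, applies a placeholder post-filter to each band once, and recombines the results into a single color image.

```python
import numpy as np

def postfilter_band(band):
    """Placeholder for the pixel-adaptive post-filter applied to one band."""
    # A 3x3 box blur stands in for the real frequency-based ESAP filter.
    padded = np.pad(band.astype(np.float32), 1, mode="edge")
    h, w = band.shape
    return sum(padded[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0

def postfilter_rgb(image):
    """Apply the (placeholder) filter to R, G and B separately, then stack."""
    bands = [postfilter_band(image[..., c]) for c in range(3)]
    return np.clip(np.stack(bands, axis=-1), 0, 255).astype(np.uint8)

decoded = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # toy frame
restored = postfilter_rgb(decoded)
```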
Gallium arsenide processing elements for motion estimation full-search algorithm
NASA Astrophysics Data System (ADS)
Lopez, Jose F.; Cortes, P.; Lopez, S.; Sarmiento, Roberto
2001-11-01
The block-matching motion estimation algorithm (BMA) is the most popular method for motion-compensated coding of image sequences. Among the possible search strategies for this algorithm, the full-search BMA (FBMA) has attracted great interest from the scientific community due to its regularity, optimal solution and low control overhead, which simplify its VLSI realization. On the other hand, its main drawback is that it demands an enormous amount of computation. There are different ways of addressing this, and the approach adopted in this article is the use of an advanced technology, Gallium Arsenide (GaAs), together with different techniques to reduce area overhead. By exploiting the properties of GaAs, improvements can be obtained in the implementation of feasible systems for real-time video compression architectures. Different primitives used in the implementation of processing elements (PEs) for an FBMA scheme are presented. As a result, PEs running at 270 MHz have been developed in order to study their functionality and performance. From these results, an implementation for MPEG applications is proposed, leading to an architecture running at 145 MHz with a power dissipation of 3.48 W and an area of 11.5 mm2.
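For reference, the computation that these processing elements accelerate can be written compactly in software. The Python sketch below implements a plain full-search block-matching loop with the sum of absolute differences (SAD) criterion; the block size, search range, and SAD choice are typical values rather than parameters taken from the paper.

```python
import numpy as np

def full_search(cur, ref, block=16, search=7):
    """Return the motion vector minimising SAD for each block of `cur`."""
    h, w = cur.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block].astype(np.int32)
            best, best_sad = (0, 0), None
            for dy in range(-search, search + 1):       # exhaustive window
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = ref[y:y + block, x:x + block].astype(np.int32)
                    sad = int(np.abs(target - cand).sum())
                    if best_sad is None or sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            vectors[(by, bx)] = best
    return vectors

ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, (2, -1), axis=(0, 1))   # toy "motion": a global shift
motion = full_search(cur, ref)
```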
Overview: DVD-video disc set of seafloor transects during USGS research cruises in the Pacific Ocean
Chezar, Henry; Newman, Ivy
2006-01-01
Many USGS research programs involve the gathering of underwater seafloor video footage. This footage was captured on a variety of media, including Beta III and VHS tapes. Much of this media is now deteriorating, prompting the migration of this video footage onto DVD-Video discs. Advantages of using DVD-Video discs are: less storage space, ease of transport, wider distribution, and non-degradational viewing of the media. The videos in this particular collection (328 of them) were made on the ocean floor under President Reagan's Exclusive Economic Zone proclamation of 1983. There are now five copies of these 328 discs in existence: at the USGS libraries in Menlo Park, Calif., Denver, Colo., and Reston, Va.; at the USGS Publications Warehouse (masters from which to make copies for customers); and Hank Chezar's USGS Western Coastal and Marine Geology team archives. The purpose of Open-File Report 2004-1101 is to provide users with a listing of the available DVD-Video discs (with their Open-File Report numbers) along with a brief description of their associated USGS research activities. Each disc was created by first encoding the source video and audio into MPEG-2 streams using the MediaPress Pro hardware encoder. A menu for the disc was then made using Adobe Photoshop 6.0. The disc was then authored using DVD Studio Pro and subsequently written onto a DVD-R recordable disc.
Issues and solutions for storage, retrieval, and searching of MPEG-7 documents
NASA Astrophysics Data System (ADS)
Chang, Yuan-Chi; Lo, Ming-Ling; Smith, John R.
2000-10-01
The ongoing MPEG-7 standardization activity aims at creating a standard for describing multimedia content in order to facilitate the interpretation of the associated information content. Attempting to address a broad range of applications, MPEG-7 has defined a flexible framework consisting of Descriptors, Description Schemes, and the Description Definition Language. Descriptors and Description Schemes describe the features, structure and semantics of multimedia objects. They are written in the Description Definition Language (DDL). In the most recent revision, the DDL applies XML (Extensible Markup Language) Schema with MPEG-7 extensions. The DDL has constructs that support inclusion, inheritance, reference, enumeration, choice, sequence, and abstract types of Description Schemes and Descriptors. In order to enable multimedia systems to use MPEG-7, a number of important problems in storing, retrieving and searching MPEG-7 documents need to be solved. This paper reports initial findings on issues and solutions for storing and accessing MPEG-7 documents. In particular, we discuss the benefits of using a virtual document management framework based on an XML Access Server (XAS) in order to bridge MPEG-7 multimedia applications and database systems. The need arises partly because MPEG-7 descriptions need customized storage schemas, indexing and search engines. We also discuss issues arising in managing dependence and cross-description-scheme search.
Curcumin-loaded biodegradable polymeric micelles for colon cancer therapy in vitro and in vivo
NASA Astrophysics Data System (ADS)
Gou, Maling; Men, Ke; Shi, Huashan; Xiang, Mingli; Zhang, Juan; Song, Jia; Long, Jianlin; Wan, Yang; Luo, Feng; Zhao, Xia; Qian, Zhiyong
2011-04-01
Curcumin is an effective and safe anticancer agent, but its hydrophobicity inhibits its clinical application. Nanotechnology provides an effective method to improve the water solubility of hydrophobic drugs. In this work, curcumin was encapsulated into monomethoxy poly(ethylene glycol)-poly(ε-caprolactone) (MPEG-PCL) micelles through a single-step nano-precipitation method, creating curcumin-loaded MPEG-PCL (Cur/MPEG-PCL) micelles. These Cur/MPEG-PCL micelles were monodisperse (PDI = 0.097 +/- 0.011) with a mean particle size of 27.3 +/- 1.3 nm, good re-solubility after freeze-drying, an encapsulation efficiency of 99.16 +/- 1.02%, and drug loading of 12.95 +/- 0.15%. Moreover, these micelles were prepared by a simple and reproducible procedure, making them potentially suitable for scale-up. Curcumin was molecularly dispersed in the PCL core of MPEG-PCL micelles, and could be slowly released in vitro. Encapsulation of curcumin in MPEG-PCL micelles improved the t1/2 and AUC of curcumin in vivo. Like free curcumin, Cur/MPEG-PCL micelles efficiently inhibited angiogenesis in a transgenic zebrafish model. In an alginate-encapsulated cancer cell assay, intravenous application of Cur/MPEG-PCL micelles inhibited tumor cell-induced angiogenesis in vivo more efficiently than free curcumin. MPEG-PCL micelle-encapsulated curcumin maintained the cytotoxicity of curcumin on C-26 colon carcinoma cells in vitro. Intravenous application of Cur/MPEG-PCL micelles (25 mg kg-1 curcumin) inhibited the growth of subcutaneous C-26 colon carcinoma in vivo (p < 0.01), and induced a stronger anticancer effect than free curcumin (p < 0.05). In conclusion, Cur/MPEG-PCL micelles are an excellent intravenously injectable aqueous formulation of curcumin; this formulation can inhibit the growth of colon carcinoma by inhibiting angiogenesis and directly killing cancer cells.
High efficiency video coding for ultrasound video communication in m-health systems.
Panayides, A; Antoniou, Z; Pattichis, M S; Pattichis, C S; Constantinides, A G
2012-01-01
Emerging high-efficiency video compression methods and the wider availability of wireless network infrastructure will significantly advance existing m-health applications. For medical video communications, the emerging video compression and network standards support low-delay and high-resolution video transmission, at the clinically acquired resolution and frame rates. Such advances are expected to further promote the adoption of m-health systems for remote diagnosis and emergency incidents in daily clinical practice. This paper compares the performance of the emerging high efficiency video coding (HEVC) standard to the current state-of-the-art H.264/AVC standard. The experimental evaluation, based on five atherosclerotic plaque ultrasound videos encoded at QCIF, CIF, and 4CIF resolutions, demonstrates that bitrate reductions of approximately 50% are possible for equivalent clinical quality.
NASA Astrophysics Data System (ADS)
Chen, Kuizhi; Pan, Sujuan; Zhuang, Xuemei; Lv, Hafei; Que, Shoulin; Xie, Shusen; Yang, Hongqin; Peng, Yiru
2016-07-01
Generation 1-2 poly(benzyl aryl ether) dendrimer silicon phthalocyanines with axially disubstituted cyano terminal functionalities (Gn-DSiPc(CN)4n, where Gn denotes the n-generation dendrimer, n = 1-2) were synthesized. Their structures were characterized by elemental analysis, IR, 1H NMR, and ESI-MS. Polymeric nanoparticles (Gn-DSiPc(CN)4n/m) were formed by encapsulating Gn-DSiPc(CN)4n into three monomethoxy poly(ethylene glycol)-poly(ɛ-caprolactone) diblock copolymers (MPEG-PCL) with different hydrophilic/hydrophobic proportions. The effect of dendritic generation and of the hydrophilic/hydrophobic proportion of the diblock copolymers on the UV/Vis and fluorescence spectra of Gn-DSiPc(CN)4n and Gn-DSiPc(CN)4n/m was studied. The photophysical properties of the polymeric nanoparticles exhibited a dependence on dendritic generation and hydrophilic/hydrophobic proportion. The fluorescence intensities and lifetimes of Gn-DSiPc(CN)4n/m were lower than those of the corresponding free dendrimer phthalocyanines. Gn-DSiPc(CN)4n encapsulated into MPEG-PCL with a hydrophilic/hydrophobic molecular weight ratio of 2000:4000 exhibited excellent photophysical properties. The mean diameter of MPEG2000-PCL2000 micelles was about 70 nm, which decreased when loaded with Gn-DSiPc(CN)4n.
Treating acute cystitis with biodegradable micelle-encapsulated quercetin
Wang, Bi Lan; Gao, Xiang; Men, Ke; Qiu, Jinfeng; Yang, Bowen; Gou, Ma Ling; Huang, Mei Juan; Huang, Ning; Qian, Zhi Yong; Zhao, Xia; Wei, Yu Quan
2012-01-01
Intravesical application of an anti-inflammatory drug is an efficient strategy for acute cystitis therapy. Quercetin (QU) is a potent anti-inflammatory agent; however, its poor water solubility restricts its clinical application. In an attempt to improve the water solubility of QU, biodegradable monomethoxy poly(ethylene glycol)-poly(ɛ-caprolactone) (MPEG-PCL) micelles were used to encapsulate QU by self-assembly methods, creating QU/MPEG-PCL micelles. These QU/MPEG-PCL micelles, with a drug loading (DL) of 7%, had a mean particle size of <34 nm and could release QU over an extended period in vitro. The in vivo study indicated that intravesical application of MPEG-PCL micelles did not induce any toxicity to the bladder, and could efficiently deliver cargo to the bladder. Moreover, the therapeutic efficiency of intravesical administration of QU/MPEG-PCL micelles on acute cystitis was evaluated in vivo. Results indicated that QU/MPEG-PCL micelle treatment efficiently reduced the edema and inflammatory cell infiltration of the bladder in an Escherichia coli-induced acute cystitis model. These data suggest that the MPEG-PCL micelle is a candidate intravesical drug carrier, and that QU/MPEG-PCL micelles may have potential application in acute cystitis therapy. PMID:22661886
Layer-based buffer aware rate adaptation design for SHVC video streaming
NASA Astrophysics Data System (ADS)
Gudumasu, Srinivas; Hamza, Ahmed; Asbun, Eduardo; He, Yong; Ye, Yan
2016-09-01
This paper proposes a layer-based, buffer-aware rate adaptation design which is able to avoid abrupt video quality fluctuation, reduce re-buffering latency and improve bandwidth utilization when compared to a conventional simulcast-based adaptive streaming system. The proposed adaptation design schedules DASH segment requests based on the estimated bandwidth, the dependencies among video layers and the layer buffer fullness. Scalable HEVC (SHVC) video coding is the latest state-of-the-art video coding technique that can alleviate various issues caused by simulcast-based adaptive video streaming. With scalable coded video streams, the video is encoded once into a number of layers representing different qualities and/or resolutions: a base layer (BL) and one or more enhancement layers (EL), each incrementally enhancing the quality of the lower layers. Such a layer-based coding structure allows fine-granularity rate adaptation for video streaming applications. Two video streaming use cases are presented in this paper. The first use case is to stream HD SHVC video over a wireless network where the available bandwidth varies; the performance comparison between the proposed layer-based streaming approach and the conventional simulcast streaming approach is provided. The second use case is to stream 4K/UHD SHVC video over a hybrid access network that consists of a 5G millimeter-wave high-speed wireless link and a conventional wired or WiFi network. The simulation results verify that the proposed layer-based rate adaptation approach is able to utilize the bandwidth more efficiently. As a result, a more consistent viewing experience with higher-quality video content and minimal video quality fluctuations can be presented to the user.
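As an illustration only, the toy Python sketch below makes a layer-level request decision from an estimated bandwidth and per-layer buffer levels: it requests the highest layer whose lower layers are safely buffered and whose aggregate rate fits the bandwidth estimate. The layer bitrates, buffer threshold, and policy details are assumptions, not the scheduler specified in the paper.

```python
LAYERS = ["BL", "EL1", "EL2"]                        # dependency order
LAYER_BITRATE = {"BL": 1.5, "EL1": 3.0, "EL2": 6.0}  # Mbps per layer (toy)

def next_request(est_bw_mbps, buffer_s, safe_buffer=6.0):
    """Pick which layer's next DASH segment to request."""
    cumulative, choice = 0.0, "BL"
    for i, layer in enumerate(LAYERS):
        cumulative += LAYER_BITRATE[layer]
        lower_ok = all(buffer_s[l] >= safe_buffer for l in LAYERS[:i])
        # Move up to this layer only if every lower layer is safely buffered
        # and the aggregate rate still fits the bandwidth estimate.
        if (i == 0 or lower_ok) and cumulative <= est_bw_mbps:
            choice = layer
    return choice

# With the base layer well buffered and ~8 Mbps available, EL1 is requested.
print(next_request(8.0, {"BL": 12.0, "EL1": 3.0, "EL2": 0.0}))
```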
Protection and governance of MPEG-21 music player MAF contents using MPEG-21 IPMP tools
NASA Astrophysics Data System (ADS)
Hendry; Kim, Munchurl
2006-02-01
MPEG (Moving Picture Experts Group) is currently standardizing the Multimedia Application Format (MAF), which aims to provide simple but practical multimedia applications to the industry. One of the interesting, ongoing work items of the MAF activity is the so-called Music Player MAF, which combines MPEG-1/2 Layer III (MP3) audio, JPEG images, and metadata into a standard format. In this paper, we propose a protection and governance mechanism for the Music Player MAF by incorporating another MPEG technology, MPEG-21 IPMP (Intellectual Property Management and Protection). We present a use case for the distribution and consumption of Music Player content, the requirements, and how this protection and governance can be implemented in conjunction with the current Music Player MAF architecture and file system. With the use of MPEG-21 IPMP, the protection and governance of Music Player MAF content fulfils the requirements of flexibility, extensibility, and granularity of protection.
NASA Astrophysics Data System (ADS)
Manjanaik, N.; Parameshachari, B. D.; Hanumanthappa, S. N.; Banu, Reshma
2017-08-01
The intra prediction process of the H.264 video coding standard is used to code the first frame of a video, i.e. the intra frame, and achieves good coding efficiency compared to earlier video coding standards. A further benefit of intra-frame coding is that it reduces spatial pixel redundancy within the current frame, reduces computational complexity and provides better rate-distortion performance. Intra frames are conventionally coded using Rate Distortion Optimization (RDO). This method increases computational complexity, increases the bit rate and reduces picture quality, so it is difficult to implement in real-time applications; many researchers have therefore developed fast mode decision algorithms for intra-frame coding. Previous work on intra-frame coding in the H.264 standard using fast mode decision intra prediction algorithms based on various techniques suffered from increased bit rate and degraded picture quality (PSNR) at different quantization parameters. Many of these earlier fast mode decision approaches achieved only a reduction in computational complexity (i.e. savings in encoding time), with the limitation of an increased bit rate and a loss of picture quality. To avoid the increase in bit rate and the loss of picture quality, a better approach was developed. This paper develops such an approach, based on a Gaussian pulse, for intra-frame coding using the diagonal down-left intra prediction mode to achieve higher coding efficiency in terms of PSNR and bit rate. In the proposed method, a Gaussian pulse is multiplied with each 4x4 block of frequency-domain coefficients of the 4x4 sub-macroblocks of each macroblock of the current frame before the quantization process. Multiplying each 4x4 integer-transformed coefficient block by the Gaussian pulse at the macroblock level scales the information in the coefficients in a reversible manner. The resulting signal becomes abstracted: the frequency samples are altered in a known and controllable manner without intermixing of coefficients, which prevents the picture from being badly degraded at higher values of the quantization parameter. The proposed work was implemented using MATLAB and the JM 18.6 reference software. It measures the performance parameters PSNR, bit rate and compression of intra frames of YUV video sequences at QCIF resolution under different values of the quantization parameter, with the Gaussian value applied to the diagonal down-left intra prediction mode. The simulation results of the proposed algorithm are tabulated and compared with the previous algorithm of Tian et al. The proposed algorithm reduces the bit rate by 30.98% on average and maintains consistent picture quality for QCIF sequences compared to the method of Tian et al.
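The scaling step described above can be sketched in a few lines. In the Python fragment below, each 4x4 block of transform coefficients is multiplied element-wise by a Gaussian pulse before quantization and divided by the same pulse after inverse quantization; the pulse width and centring on the DC coefficient are assumptions for illustration, since the abstract does not fix them.

```python
import numpy as np

def gaussian_pulse_4x4(sigma=2.0):
    """A 4x4 Gaussian window centred on the DC (top-left) coefficient."""
    yy, xx = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
    return np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))

def scale_block(coeffs, pulse):
    """Reversible element-wise scaling applied before quantization."""
    return coeffs * pulse

def unscale_block(scaled, pulse):
    """Inverse scaling applied after inverse quantization."""
    return scaled / pulse

pulse = gaussian_pulse_4x4()
block = np.arange(16, dtype=np.float64).reshape(4, 4)   # toy transform block
assert np.allclose(unscale_block(scale_block(block, pulse), pulse), block)
```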
Umeda, Akira; Iwata, Yasushi; Okada, Yasumasa; Shimada, Megumi; Baba, Akiyasu; Minatogawa, Yasuyuki; Yamada, Takayasu; Chino, Masao; Watanabe, Takafumi; Akaishi, Makoto
2004-12-01
The high cost of digital echocardiographs and the large size of data files hinder the adoption of remote diagnosis of digitized echocardiography data. We have developed a low-cost digital filing system for echocardiography data. In this system, data from a conventional analog echocardiograph are captured using a personal computer (PC) equipped with an analog-to-digital converter board. Motion picture data are promptly compressed using a Moving Picture Experts Group (MPEG)-4 codec. The digitized data with preliminary reports obtained in a rural hospital are then sent to cardiologists at distant urban general hospitals via the internet. The cardiologists can evaluate the data using widely available movie-viewing software (Windows Media Player). The diagnostic accuracy of this double-check system was confirmed by comparison with ordinary super-VHS videotapes. We have demonstrated that digitization of echocardiography data from a conventional analog echocardiograph and MPEG-4 compression can be performed using an ordinary PC-based system, and that this system enables highly efficient digital storage and remote diagnosis at low cost.
Li, Zhen; Chen, Qixian; Qi, Yan; Liu, Zhihao; Hao, Tangna; Sun, Xiaoxin; Qiao, Mingxi; Ma, Xiaodong; Xu, Ting; Zhao, Xiuli; Yang, Chunrong; Chen, Dawei
2018-04-11
A multifunctional nanoparticulate system composed of methoxy poly(ethylene glycol)-poly(l-histidine)-d-α-vitamin E succinate (MPEG-PLH-VES) copolymers for the encapsulation of doxorubicin (DOX) was elaborated with the aim of circumventing multidrug resistance (MDR) in breast cancer treatment. The MPEG-PLH-VES nanoparticles (NPs) were subsequently functionalized with a biotin motif for targeted drug delivery. The MPEG-PLH-VES copolymer exerts no obvious effect on the P-gp expression level of MCF-7/ADR cells but exhibited a significant influence on the loss of mitochondrial membrane potential, the reduction of intracellular ATP level, and the inhibition of P-gp ATPase activity of MCF-7/ADR cells. The constructed MPEG-PLH-VES NPs exhibited an acidic pH-induced increase in particle size in aqueous solution. The DOX-encapsulated MPEG-PLH-VES/biotin-PEG-VES (MPEG-PLH-VES/B) NPs were characterized to possess a high drug encapsulation efficiency of approximately 90%, an average particle size of approximately 130 nm, and a pH-responsive drug release profile in acidic milieu. Confocal laser scanning microscopy (CLSM) investigations revealed that the DOX-loaded NPs resulted in an effective delivery of DOX into MCF-7/ADR cells and a notable carrier-facilitated escape from endolysosomal entrapment. Regarding the in vitro cytotoxicity evaluation, the DOX-loaded MPEG-PLH-VES/B NPs resulted in more pronounced cytotoxicity to MCF-7/ADR cells compared with DOX-loaded MPEG-PLH-VES NPs and free DOX solution. An in vivo imaging study in MCF-7/ADR tumor-engrafted mice showed that the MPEG-PLH-VES/B NPs accumulated at the tumor site more effectively than MPEG-PLH-VES NPs due to the biotin-mediated active targeting effect. In accordance with the in vitro results, DOX-loaded MPEG-PLH-VES/B NPs showed the strongest inhibitory effect against the MCF-7/ADR xenografted tumors with negligible systemic toxicity, as evidenced by the histological analysis and change of body weight. The multifunctional MPEG-PLH-VES/B nanoparticulate system has been demonstrated to provide a promising strategy for efficient delivery of DOX into MCF-7/ADR cancerous cells and reversing MDR.
Exploring system interconnection architectures with VIPACES: from direct connections to NOCs
NASA Astrophysics Data System (ADS)
Sánchez-Peña, Armando; Carballo, Pedro P.; Núñez, Antonio
2007-05-01
This paper presents a simple environment for the verification of AMBA 3 AXI systems in Verification IP (VIP) production, called VIPACES (Verification Interface Primitives for the development of AXI Compliant Elements and Systems). These primitives are presented as an uncompiled library written in SystemC, in which interfaces are the core of the library. Defining interfaces instead of generic modules lets the user construct custom modules, improving the use of resources during the verification phase and making it easy to adapt the user's modules to the AMBA 3 AXI protocol. This topic is the main focus of the VIPACES library. The paper concentrates on comparing and contrasting the main interconnection schemes for AMBA 3 AXI as modeled by VIPACES. For assessing these results we propose a validation scenario with a particular architecture belonging to the domain of MPEG-4 video decoding, which is composed of an AXI bus connecting an IDCT and other processing resources.
Chu, BingYang; Zhang, Lan; Qu, Ying; Chen, XiaoXin; Peng, JinRong; Huang, YiXing; Qian, ZhiYong
2016-01-01
Amphiphilic block copolymers have attracted a great deal of attention in drug delivery systems. In this work, a series of monomethoxy-poly(ethylene glycol)-poly(ε-caprolactone-co-D,L-lactide) (MPEG-PCLA) copolymers with variable compositions of poly(ε-caprolactone) (PCL) and poly(D,L-lactide) (PDLLA) were prepared via ring-opening copolymerization of ε-CL and D,L-LA in the presence of MPEG and stannous octoate. The structure and molecular weight were characterized by nuclear magnetic resonance (NMR) and gel permeation chromatography (GPC). The crystallinity, hydrophilicity, thermal stability and hydrolytic degradation behavior were investigated in detail. The results showed that the prepared amphiphilic MPEG-PCLA copolymers have properties that can be adjusted by altering the composition of PCLA, which makes them convenient for clinical applications. The drug loading properties were also studied. Docetaxel (DTX) could be entrapped in MPEG-PCLA micelles with high loading capacity and encapsulation efficiency, and all lyophilized DTX-loaded MPEG-PCLA micelles except MPEG-PCL micelles were readily re-dissolved in normal saline at 25 °C. In addition, DTX-loaded MPEG-PCLA micelles showed slightly enhanced antitumor activity compared with free DTX. Furthermore, the DTX micelles exhibited a slower, sustained release behavior in vitro, and a higher DTX concentration and longer retention time in vivo. The results suggest that MPEG-PCLA copolymers with an adjustable ratio of PCL to PDLLA may be a promising drug delivery carrier for DTX. PMID:27677842
Selective encryption for H.264/AVC video coding
NASA Astrophysics Data System (ADS)
Shi, Tuo; King, Brian; Salama, Paul
2006-02-01
Due to the ease with which digital data can be manipulated and due to the ongoing advancements that have brought us closer to pervasive computing, the secure delivery of video and images has become a challenging problem. Despite the advantages and opportunities that digital video provides, illegal copying and distribution as well as plagiarism of digital audio, images, and video is still ongoing. In this paper we describe two techniques for securing H.264 coded video streams. The first technique, SEH264Algorithm1, groups the data into the following blocks: (1) a block that contains the sequence parameter set and the picture parameter set, (2) a block containing a compressed intra coded frame, (3) a block containing the slice header of a P slice, all the headers of the macroblocks within the same P slice, and all the luma and chroma DC coefficients belonging to all the macroblocks within the same slice, (4) a block containing all the AC coefficients, and (5) a block containing all the motion vectors. The first three are encrypted whereas the last two are not. The second method, SEH264Algorithm2, relies on the use of multiple slices per coded frame. The algorithm searches the compressed video sequence for start codes (0x000001) and then encrypts the next N bits of data.
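The start-code scan of SEH264Algorithm2 can be sketched directly. In the Python fragment below, the stream is scanned for 0x000001 and the bytes that follow each start code are XOR-encrypted; the XOR keystream is a stand-in for whatever cipher is actually used, and N is treated as a whole number of bytes for brevity, both assumptions not fixed by the paper.

```python
import os

def selective_encrypt(bitstream: bytes, key: bytes, n_bytes: int = 16) -> bytes:
    data = bytearray(bitstream)
    i = 0
    while i < len(data) - 2:
        if data[i] == 0x00 and data[i + 1] == 0x00 and data[i + 2] == 0x01:
            start = i + 3                       # first byte after the start code
            for j in range(start, min(start + n_bytes, len(data))):
                data[j] ^= key[(j - start) % len(key)]
            i = start + n_bytes                 # skip past the processed span
        else:
            i += 1
    return bytes(data)

stream = (b"\x00\x00\x01\x67" + os.urandom(32) +
          b"\x00\x00\x01\x65" + os.urandom(32))
cipher = selective_encrypt(stream, key=b"\x5a\xa5")
assert selective_encrypt(cipher, key=b"\x5a\xa5") == stream  # XOR is symmetric
```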
Performance evaluation of the intra compression in the video coding standards
NASA Astrophysics Data System (ADS)
Abramowski, Andrzej
2015-09-01
The article presents a comparison of the Intra prediction algorithms in the current state-of-the-art video coding standards, including MJPEG 2000, VP8, VP9, H.264/AVC and H.265/HEVC. The effectiveness of techniques employed by each standard is evaluated in terms of compression efficiency and average encoding time. The compression efficiency is measured using BD-PSNR and BD-RATE metrics with H.265/HEVC results as an anchor. Tests are performed on a set of video sequences, composed of sequences gathered by Joint Collaborative Team on Video Coding during the development of the H.265/HEVC standard and 4K sequences provided by Ultra Video Group. According to results, H.265/HEVC provides significant bit-rate savings at the expense of computational complexity, while VP9 may be regarded as a compromise between the efficiency and required encoding time.
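The BD-RATE figures referred to above are commonly computed by fitting a cubic through (PSNR, log-bitrate) points for each codec and comparing the integrals over the overlapping PSNR range. The Python sketch below follows that common procedure; the sample rate-distortion points are made up purely for illustration.

```python
import numpy as np

def bd_rate(rates_anchor, psnr_anchor, rates_test, psnr_test):
    """Average bitrate difference (%) of the test codec vs. the anchor."""
    pa = np.polyfit(psnr_anchor, np.log(rates_anchor), 3)
    pt = np.polyfit(psnr_test, np.log(rates_test), 3)
    lo = max(min(psnr_anchor), min(psnr_test))   # overlapping PSNR interval
    hi = min(max(psnr_anchor), max(psnr_test))
    ia, it = np.polyint(pa), np.polyint(pt)
    avg_a = (np.polyval(ia, hi) - np.polyval(ia, lo)) / (hi - lo)
    avg_t = (np.polyval(it, hi) - np.polyval(it, lo)) / (hi - lo)
    return (np.exp(avg_t - avg_a) - 1.0) * 100.0

anchor = ([800, 1500, 3000, 6000], [33.0, 35.5, 38.0, 40.5])  # e.g. the HEVC anchor
test = ([1000, 1900, 3800, 7600], [33.0, 35.5, 38.0, 40.5])   # a less efficient codec
print(f"BD-RATE: {bd_rate(*anchor, *test):+.1f}%")            # positive = more bits
```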
Efficient FFT Algorithm for Psychoacoustic Model of the MPEG-4 AAC
NASA Astrophysics Data System (ADS)
Lee, Jae-Seong; Lee, Chang-Joon; Park, Young-Cheol; Youn, Dae-Hee
This paper proposes an efficient FFT algorithm for the Psychoacoustic Model (PAM) of MPEG-4 AAC. The proposed algorithm synthesizes FFT coefficients from MDCT and MDST coefficients through circular convolution. The complexity of synthesizing the FFT coefficients from the MDCT and MDST coefficients is approximately half that of computing the original FFT. We also design a new PAM based on the proposed FFT algorithm, which has 15% lower computational complexity than the original PAM without degradation of sound quality. Subjective as well as objective test results are presented to confirm the efficiency of the proposed FFT computation algorithm and the PAM.
MPEG-4 solutions for virtualizing RDP-based applications
NASA Astrophysics Data System (ADS)
Joveski, Bojan; Mitrea, Mihai; Ganji, Rama-Rao
2014-02-01
The present paper provides a proof of concept for the use of MPEG-4 multimedia scene representations (BiFS and LASeR) as a virtualization tool for RDP-based applications (e.g. MS Windows applications). Two main applicative benefits are thus granted. First, any legacy application can be virtualized without additional programming effort. Second, heterogeneous mobile devices (different manufacturers, OS) can collaboratively enjoy full multimedia experiences. From the methodological point of view, the main novelty consists in (1) designing an architecture allowing the conversion of the RDP content into a semantic multimedia scene graph and its subsequent rendering on the client and (2) providing the underlying scene-graph management and interactivity tools. Experiments consider 5 users and two RDP applications (MS Word and Internet Explorer), and benchmark our solution against two state-of-the-art technologies (VNC and FreeRDP). The visual quality is evaluated by six objective measures (e.g. PSNR<37dB, SSIM<0.99). The network traffic evaluation shows that: (1) for text editing, the MPEG-based solutions outperform VNC by a factor of 1.8 while being 2 times heavier than FreeRDP; (2) for Internet browsing, the MPEG solutions outperform both VNC and FreeRDP by factors of 1.9 and 1.5, respectively. The average round-trip times (less than 40 ms) cope with real-time application constraints.
Guo, Qingfa; Kuang, Lei; Cao, Hui; Li, Weizhong; Wei, Jing
2015-12-01
In this paper, a novel bifunctional nanoprobe based on polyethylene glycol (MPEG)-poly(ϵ-caprolactone) (PCL)-polyethylenimine (PEI) labeled with FITC (MPEG-PCL-PEI-FITC, PCIF) was prepared to provide tumor therapy and simultaneous diagnostic information via magnetic resonance imaging (MRI) and optical imaging. Superparamagnetic iron oxide (SPIO) and doxorubicin (DOX) loaded PCIF (PCIF/SPIO/DOX) nanoprobes were prepared by self-assembly into micelles, which had a uniformly distributed particle size of 130 ± 5 nm and a zeta potential of +35 ± 2 mV. Transmission electron microscopy (TEM) showed that SPIO NPs were loaded into the PCIF micelles. The PCIF/SPIO/DOX nanoprobes were superparamagnetic at 300 K with a saturation magnetization of 20.5 emu/g Fe as measured by vibrating-sample magnetometry (VSM). Studies on the cellular uptake of PCIF/SPIO/DOX nanoprobes demonstrated that SPIO NPs, DOX and FITC-labeled MPEG-PCL-PEI were simultaneously taken up by breast cancer (4T1) cells. After intravenous injection of PCIF/SPIO/DOX nanoprobes in 4T1 tumor-bearing mice, SPIO NPs, DOX and FITC-labeled MPEG-PCL-PEI micelles were shown by histochemistry to be simultaneously delivered into tumor tissue. This work is important for applications in multimodal diagnostics and theragnosis as nanomedicine. Copyright © 2015 Elsevier B.V. All rights reserved.
Robust audio-visual speech recognition under noisy audio-video conditions.
Stewart, Darryl; Seymour, Rowan; Pass, Adrian; Ming, Ji
2014-02-01
This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments where the audio or video streams may be subjected to unknown and time-varying corruption. A significant advantage of MWSP is that it does not require any specific measurements of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements and can be used alongside many of the other approaches that have been proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances, with corruption added in either or both of the video and audio streams using a variety of noise types (e.g., MPEG-4 video compression) and levels. The experiments show that this approach gives excellent performance in comparison to another well-known dynamic stream weighting approach and also compared to any fixed-weight integration approach, both in clean conditions and when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams and also according to the naturally fluctuating relative reliability of the modalities even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise.
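As an illustrative sketch only, and in the spirit of the frame-level weighting described above, the Python fragment below picks the audio/video weight from a small grid that maximises the confidence of the fused posterior for each frame. The grid, the log-linear fusion rule, and the toy posteriors are assumptions, not the exact MWSP formulation.

```python
import numpy as np

def fuse(p_audio, p_video, lam):
    """Log-linear fusion of two class-posterior vectors; `lam` weights audio."""
    fused = (p_audio ** lam) * (p_video ** (1.0 - lam))
    return fused / fused.sum()

def frame_decision(p_audio, p_video, grid=np.linspace(0.0, 1.0, 11)):
    best_lam, best_score, best_post = 0.5, -1.0, None
    for lam in grid:
        post = fuse(p_audio, p_video, lam)
        score = post.max()                   # confidence of the winning class
        if score > best_score:
            best_lam, best_score, best_post = lam, score, post
    return best_lam, int(best_post.argmax())

# Toy posteriors over 3 classes: audio is noisy (flat), video is confident,
# so a low audio weight is selected and class 1 wins.
lam, cls = frame_decision(np.array([0.40, 0.35, 0.25]),
                          np.array([0.05, 0.90, 0.05]))
print(lam, cls)
```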
Home media server content management
NASA Astrophysics Data System (ADS)
Tokmakoff, Andrew A.; van Vliet, Harry
2001-07-01
With the advent of set-top boxes, the convergence of TV (broadcasting) and PC (Internet) is set to enter the home environment. Currently, a great deal of activity is occurring in developing standards (TV-Anytime Forum) and devices (TiVo) for local storage on Home Media Servers (HMS). These devices lie at the heart of convergence of the triad: communications/networks - content/media - computing/software. Besides massive storage capacity and being a communications 'gateway', the home media server is characterised by the ability to handle metadata and software that provides an easy to use on-screen interface and intelligent search/content handling facilities. In this paper, we describe a research prototype HMS that is being developed within the GigaCE project at the Telematica Instituut. Our prototype demonstrates advanced search and retrieval (video browsing), adaptive user profiling and an innovative 3D component of the Electronic Program Guide (EPG) which represents online presence. We discuss the use of MPEG-7 for representing metadata, the use of MPEG-21 working draft standards for content identification, description and rights expression, and the use of HMS peer-to-peer content distribution approaches. Finally, we outline explorative user behaviour experiments that aim to investigate the effectiveness of the prototype HMS during development.
NASA Astrophysics Data System (ADS)
Ji, Fang; Xu, Min; Wang, Baorui; Wang, Chao; Li, Xiaoyuan; Zhang, Yunfei; Zhou, Ming; Huang, Wen; Wei, Qilong; Tang, Guangping; He, Jianguo
2015-10-01
KDP is a common type of optic that is extremely difficult to polish by conventional routes. MRF is a local polishing technology based on material removal via shearing, with minimal normal load and sub-surface damage. In contrast to the traditional modification of the abrasive, the MPEG soft coating is designed and prepared to modify the CIP surface to achieve a hardness matched with that of KDP, because the CIP inevitably takes part in material removal during finishing. Morphology and infrared spectra are examined to prove the existence of a homogeneous coating, and the improvement in polishing quality due to MPEG is validated by analysis of roughness, turning grooves, and stress. The synthesized MPEG-coated CIP (MPEG-CIP) is chemically and physically compatible with KDP and can be removed after cleaning. Our research shows the promising prospects of MPEG-CIP in KDP MRF.
Generating and Describing Affective Eye Behaviors
NASA Astrophysics Data System (ADS)
Mao, Xia; Li, Zheng
The manner of a person's eye movement conveys much nonverbal information and emotional intent beyond speech. This paper describes work on expressing emotion through eye behaviors in virtual agents based on parameters selected from the AU-coded facial expression database and real-time eye movement data (pupil size, blink rate and saccade). A rule-based approach to generating primary emotions (joyful, sad, angry, afraid, disgusted and surprised) and intermediate emotions (emotions that can be represented as the mixture of two primary emotions) utilizing MPEG-4 FAPs (facial animation parameters) is introduced. In addition, based on our research, a scripting tool named EEMML (Emotional Eye Movement Markup Language) that enables authors to describe and generate emotional eye movement of virtual agents is proposed.
Biodegradable micelles enhance the antiglioma activity of curcumin in vitro and in vivo
Zheng, Songping; Gao, Xiang; Liu, Xiaoxiao; Yu, Ting; Zheng, Tianying; Wang, Yi; You, Chao
2016-01-01
Curcumin (Cur), a natural polyphenol of Curcuma longa, has been recently reported to possess antitumor activities. However, due to its poor aqueous solubility and low biological availability, the clinical application of Cur is quite limited. The encapsulation of hydrophobic drugs into nanoparticles is an effective way to improve their pharmaceutical activities. In this research, nanomicelles loaded with Cur were formulated by a self-assembly method with biodegradable monomethoxy poly(ethylene glycol)-poly(lactide) copolymers (MPEG-PLAs). After encapsulation, the cellular uptake was increased and Cur could be released from MPEG-PLA micelles in a sustained manner. The Cur-loaded MPEG-PLA micelles (Cur/MPEG-PLA micelles) exhibited an enhanced toxicity on C6 and U251 glioma cells and induced more apoptosis on C6 glioma cells compared with free Cur. Moreover, the therapy efficiency of Cur/MPEG-PLA micelles was evaluated at length on a nude mouse model bearing glioma. The Cur/MPEG-PLA micelles were more effective on suppressing tumor growth compared with free Cur, which indicated that Cur/MPEG-PLA micelles improved the antiglioma activity of Cur in vivo. The results of immunohistochemical and immunofluorescent analysis indicated that the induction of apoptosis, antiangiogenesis, and inhibition of cell proliferation may contribute to the improvement in antiglioma effects. Our data suggested that Cur/MPEG-PLA may have potential clinic applications in glioma therapy. PMID:27354801
Bathige, S D N K; Umasuthan, Navaneethaiyer; Whang, Ilson; Lim, Bong-Soo; Won, Seung Hwan; Lee, Jehee
2014-08-01
The membrane-attack complex/perforin (MACPF) domain-containing proteins play an important role in the innate immune response against invading microbial pathogens. In the current study, a member of the MACPF domain-containing proteins, macrophage expressed gene-1 (MPEG1), encoding 730 amino acids with a theoretical molecular mass of 79.6 kDa and an isoelectric point (pI) of 6.49, was characterized from the disk abalone Haliotis discus discus (AbMPEG1). We found that the characteristic MACPF domain (Val(131)-Tyr(348)) and transmembrane segment (Ala(669)-Ile(691)) of AbMPEG1 are located in the N- and C-terminal ends of the protein, respectively. Ortholog comparison revealed that AbMPEG1 has the highest sequence identity with its pink abalone counterpart, while sequence identities of greater than 90% were observed with MPEG1 members from other abalone species. Likewise, the furin cleavage site KRRRK was highly conserved in all abalone species, but not in other species investigated. We identified an intron-less genomic sequence within disk abalone AbMPEG1, which was similar to other mammalian, avian, and reptilian counterparts. Transcription factor binding sites, which are important for immune responses, were identified in the 5'-flanking region of AbMPEG1. qPCR revealed AbMPEG1 transcripts are present in every tissue examined, with the highest expression level occurring in mantle tissue. Significant up-regulation of AbMPEG1 transcript levels was observed in hemocytes and gill tissues following challenges with pathogens (Vibrio parahemolyticus, Listeria monocytogenes and viral hemorrhagic septicemia virus) as well as pathogen-associated molecular patterns (PAMPs: lipopolysaccharides and poly I:C immunostimulant). Finally, the antibacterial activity of the MACPF domain was characterized against Gram-negative and -positive bacteria using a recombinant peptide. Taken together, these results indicate that the biological significance of the AbMPEG1 gene includes a role in protecting disk abalone through the ability of AbMPEG1 to initiate an innate immune response upon pathogen invasion. Copyright © 2014 Elsevier Ltd. All rights reserved.
Efficient stereoscopic contents file format on the basis of ISO base media file format
NASA Astrophysics Data System (ADS)
Kim, Kyuheon; Lee, Jangwon; Suh, Doug Young; Park, Gwang Hoon
2009-02-01
A lot of 3D content has been widely used for multimedia services; however, real 3D video content has been adopted only for limited applications such as specially designed 3D cinemas. This is because of the difficulty of capturing real 3D video content and the limitations of the display devices available on the market. Recently, however, diverse types of display devices for stereoscopic video content have been released. In particular, a mobile phone with a stereoscopic camera has been released, which allows a user, as a consumer, to have more realistic experiences without glasses, and also, as a content creator, to take stereoscopic images or record stereoscopic video content. However, a user can only store and display this acquired stereoscopic content on his/her own devices due to the absence of a common file format for such content. This limitation prevents users from sharing their content with other users, which hinders the expansion of the market for stereoscopic content. Therefore, this paper proposes a common file format, on the basis of the ISO base media file format, for stereoscopic content, which enables users to store and exchange pure stereoscopic content. This technology is also currently under development as an international standard of MPEG, called the stereoscopic video application format.
Research on compression performance of ultrahigh-definition videos
NASA Astrophysics Data System (ADS)
Li, Xiangqun; He, Xiaohai; Qing, Linbo; Tao, Qingchuan; Wu, Di
2017-11-01
With the popularization of high-definition (HD) images and videos (1920×1080 pixels and above), there are now even 4K (3840×2160) television signals and 8K (8192×4320) ultrahigh-definition videos. The demand for HD images and videos is increasing continuously, along with the increasing data volume. The storage and transmission problems cannot be properly solved only by expanding hard disk capacity and upgrading transmission devices. Based on full use of the high-efficiency video coding (HEVC) standard, super-resolution reconstruction technology, and the correlation between intra- and inter-prediction, we first put forward a "division-compensation"-based strategy to further improve the compression performance of a single image and of frame I. Then, making use of the above idea together with the HEVC encoder and decoder, a video compression coding framework is designed, with HEVC used inside the framework. Finally, with super-resolution reconstruction technology, the reconstructed video quality is further improved. The experiments show that the performance of the proposed compression method for a single image (frame I) and for video sequences is superior to that of HEVC in a low bit rate environment.
Remote Visualization and Remote Collaboration On Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Watson, Val; Lasinski, T. A. (Technical Monitor)
1995-01-01
A new technology has been developed for remote visualization that provides remote, 3D, high resolution, dynamic, interactive viewing of scientific data (such as fluid dynamics simulations or measurements). Based on this technology, some World Wide Web sites on the Internet are providing fluid dynamics data for educational or testing purposes. This technology is also being used for remote collaboration in joint university, industry, and NASA projects in computational fluid dynamics and wind tunnel testing. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit).
Bubbles Responding to Ultrasound Pressure
NASA Technical Reports Server (NTRS)
2003-01-01
The Bubble and Drop Nonlinear Dynamics (BDND) experiment was designed to improve understanding of how the shape and behavior of bubbles respond to ultrasound pressure. By understanding this behavior, it may be possible to counteract complications bubbles cause during materials processing on the ground. This 12-second sequence came from video downlinked from STS-94, July 5 1997, MET:3/19:15 (approximate). The BDND guest investigator was Gary Leal of the University of California, Santa Barbara. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17 1997). Advanced fluid dynamics experiments will be a part of investigations planned for the International Space Station. (435KB, 13-second MPEG, screen 160 x 120 pixels; downlinked video, higher quality not available) A still JPG composite of this movie is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300162.html.
Trimarchi, Matteo; Lund, Valerie J; Nicolai, Piero; Pini, Massimiliano; Senna, Massimo; Howard, David J
2004-04-01
The Neoplasms of the Sinonasal Tract software package (NSNT v 1.0) implements a complete visual database for patients with sinonasal neoplasia, facilitating standardization of data and statistical analysis. The software, which is compatible with the Macintosh and Windows platforms, provides a multiuser application with a dedicated server (on Windows NT or 2000, or Macintosh OS 9 or X, and a network of clients) together with web access, if required. The system hardware consists of an Apple Power Macintosh G4 500 MHz computer with PCI bus, 256 MB of RAM plus a 60 GB hard disk, or any IBM-compatible computer with a Pentium 2 processor. Image acquisition may be performed with different frame-grabber cards for analog or digital video input of different standards (PAL, SECAM, or NTSC) and levels of quality (VHS, S-VHS, Betacam, Mini DV, DV). The visual database is based on 4th Dimension by 4D Inc, and video compression is performed in real time in MPEG format. Six sections have been developed: demographics, symptoms, extent of disease, radiology, treatment, and follow-up. Acquisition of data includes computed tomography and magnetic resonance imaging, histology, and endoscopy images, allowing sequential comparison. Statistical analysis integral to the program provides Kaplan-Meier survival curves. The development of a dedicated, user-friendly database for sinonasal neoplasia facilitates a multicenter network and has obvious clinical and research benefits.
Perceptually-Based Adaptive JPEG Coding
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)
1996-01-01
An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
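A toy version of the multiplier search described above can be written as a fixed-point iteration: raise the multiplier where the (modelled) perceptual error falls below the target and lower it where the error exceeds the target, until the error profile is roughly flat. The error model in the Python sketch below (error proportional to the multiplier times a per-block masking factor) is a stand-in assumption; the actual contrast-sensitivity and masking computation is not reproduced here.

```python
import numpy as np

def flatten_multipliers(masking, target_error, iters=100, step=0.3):
    """Return per-block multipliers yielding roughly uniform perceptual error."""
    mult = np.ones_like(masking, dtype=float)
    for _ in range(iters):
        err = masking * mult                  # stand-in perceptual error model
        mult *= 1.0 + step * (target_error - err) / target_error
        mult = np.clip(mult, 0.25, 8.0)       # keep multipliers in a sane range
    return mult

masking = np.array([0.5, 1.0, 2.0, 4.0])      # toy per-block masking factors
mult = flatten_multipliers(masking, target_error=2.0)
print(np.round(mult, 2), np.round(masking * mult, 2))   # errors ~ target
```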
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teuton, Jeremy R.; Griswold, Richard L.; Mehdi, Beata L.
Precise analysis of both (S)TEM images and video are time- and labor-intensive processes. As an example, determining when crystal growth and shrinkage occur during the dynamic process of Li dendrite deposition and stripping involves manually scanning through each frame in the video to extract a specific set of frames/images. For large numbers of images, this process can be very time consuming, so a fast and accurate automated method is desirable. Given this need, we developed software that uses analysis of video compression statistics for detecting and characterizing events in large data sets. This software works by converting the data into a series of images which it compresses into an MPEG-2 video using the open source "avconv" utility [1]. The software does not use the video itself, but rather analyzes the video statistics from the first pass of the video encoding that avconv records in the log file. This file contains statistics for each frame of the video including the frame quality, intra-texture and predicted texture bits, forward and backward motion vector resolution, among others. In all, avconv records 15 statistics for each frame. By combining different statistics, we have been able to detect events in various types of data. We have developed an interactive tool for exploring the data and the statistics that aids the analyst in selecting useful statistics for each analysis. Going forward, an algorithm for detecting and possibly describing events automatically can be written based on statistic(s) for each data type.
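The workflow described above can be sketched as follows: run a first encoding pass that writes the two-pass statistics log, then parse the per-frame key:value fields from that log and flag frames where a chosen statistic jumps. The command-line flags, the log-file naming, and the field names (e.g. itex) in the Python sketch below are assumptions based on the common two-pass log layout, not details taken from the software described.

```python
import re
import subprocess

def first_pass_stats(input_path, logfile="stats"):
    """Run a first encoding pass with avconv and parse its per-frame log."""
    cmd = ["avconv", "-i", input_path, "-c:v", "mpeg2video",
           "-pass", "1", "-passlogfile", logfile, "-f", "null", "-"]
    subprocess.run(cmd, check=True)
    frames = []
    with open(logfile + "-0.log") as fh:          # assumed log-file suffix
        for line in fh:
            stats = {k: float(v)
                     for k, v in re.findall(r"([\w-]+):([-\d.]+)", line)}
            if stats:
                frames.append(stats)
    return frames

def detect_events(frames, key="itex", factor=3.0):
    """Flag frames whose statistic jumps well above a running average."""
    events, mean = [], None
    for i, f in enumerate(frames):
        value = f.get(key, 0.0)
        if mean is not None and value > factor * mean:
            events.append(i)
        mean = value if mean is None else 0.9 * mean + 0.1 * value
    return events
```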
2015-01-01
As the number of diagnostic and therapeutic applications utilizing gold nanoparticles (AuNPs) increases, so does the need for AuNPs that are stable in vivo, biocompatible, and suitable for bioconjugation. We investigated a strategy for AuNP stabilization that uses methoxypolyethylene glycol-graft-poly(l-lysine) copolymer (MPEG-gPLL) bearing free amino groups as a stabilizing molecule. MPEG-gPLL injected into water solutions of HAuCl4 with or without trisodium citrate resulted in spherical (Zav = 36 nm), monodisperse (PDI = 0.27), weakly positively charged nanoparticles (AuNP3) with electron-dense cores (diameter: 10.4 ± 2.5 nm) and surface amino groups that were amenable to covalent modification. The AuNP3 were stable against aggregation in the presence of phosphate and serum proteins and remained dispersed after their uptake into endosomes. MPEG-gPLL-stabilized AuNP3 exhibited high uptake and very low toxicity in human endothelial cells, but showed a high dose-dependent toxicity in epithelioid cancer cells. Highly stable radioactive labeling of AuNP3 with 99mTc allowed imaging of AuNP3 biodistribution and revealed dose-dependent long circulation in the blood. A minor fraction of AuNP3 was found in major organs and at sites of experimentally induced inflammation. Gold analysis showed evidence of a partial degradation of the MPEG-gPLL layer in AuNP3 particles accumulated in major organs. Radiofrequency-mediated heating of AuNP3 solutions showed that AuNP3 exhibited heating behavior consistent with 10 nm core nanoparticles. We conclude that MPEG-gPLL coating of AuNPs confers “stealth” properties that enable these particles to exist in vivo in a nonaggregating, biocompatible state, making them suitable for potential use in biomedical applications such as noninvasive radiofrequency cancer therapy. PMID:25496453
Wen, Ran; Zhang, Qing; Xu, Pan; Bai, Jie; Li, Pengyue; Du, Shouying; Lu, Yang
2016-01-01
Xingnaojing microemulsion (XNJ-M) administered intranasally is used for stroke treatment. In order to decrease the XNJ-M-induced mucosal irritation, XNJ-M modified by mPEG2000-PLA (XNJ-MM) were prepared in a previous work. The present work aimed to assess the impact of mPEG2000-PLA on pharmacokinetic features and brain-targeting ability of XNJ-M. The bioavailability and brain-target effects of borneol and geniposide in XNJ-M and XNJ-MM were compared in mice after intravenous (i.v.) and intranasal (i.n.) administrations. Gas chromatography, high-performance liquid chromatography, and ultra-performance liquid chromatography/tandem mass spectrometry methods were developed for the quantification of borneol and geniposide. Blood and brain samples were collected from mice at different time points after i.v. and i.n. treatments with borneol at 8.0 mg/kg, geniposide at 4.12 mg/kg. In addition, near-infrared fluorescence dye, 1,1'-dioctadecyl-3,3,3',3'-tetramethyl indotricarbocyanine iodide was loaded into microemulsions to evaluate the brain-targeting ability of XNJ-M and XNJ-MM by near-infrared fluorescence imaging in vivo and ex vivo. For XNJ-M and XNJ-MM, the relative brain targeted coefficients (Re) were 134.59% and 198.09% (borneol), 89.70% and 188.33% (geniposide), respectively. Besides, significant near-infrared fluorescent signal was detected in the brain after i.n. administration of microemulsions, compared with that of groups for i.v. administration. These findings indicated that mPEG2000-PLA modified microemulsion improved drug entry into blood and brain compared with normal microemulsion: the introduction of mPEG2000-PLA in microemulsion resulted in brain-targeting enhancement of both fat-soluble and water-soluble drugs. These findings provide a basis for the significance of mPEG2000-PLA addition in microemulsion, defining its effects on the drugs in microemulsion.
Zhang, Chun; Fan, Kai; Luo, Hua; Ma, Xuefeng; Liu, Riyong; Yang, Li; Hu, Chunlan; Chen, Zhenmin; Min, Zhiqiang; Wei, Dongzhi
2012-07-01
PEGylated uricase is a promising anti-gout drug, but the only commercially marketed 10 kDa mPEG-modified porcine-like uricase (Pegloticase) can only be used for intravenous infusion. In this study, a tetrameric canine uricase variant was modified by covalent conjugation of all accessible ɛ-amino sites of lysine residues with a smaller 5 kDa mPEG (mPEG-UHC). The average modification degree and PEGylation homogeneity were evaluated. Approximately 9.4 chains of 5 kDa mPEG were coupled to each monomeric uricase, and the main conjugates contained 7-11 mPEG chains per subunit. mPEG-UHC showed a significant therapeutic or preventive effect on uric acid nephropathy and acute urate arthritis based on three different animal models. The clearance rate after an intravenous injection of mPEG-UHC varied significantly between species, at 2.61 mL/h/kg for rats and 0.21 mL/h/kg for monkeys. The long elimination half-life of mPEG-UHC in non-human primates (191.48 h, intravenous injection) indicated long-lasting effects in humans. Moreover, the acceptable bioavailability of mPEG-UHC after subcutaneous administration in monkeys (94.21%) suggested that subcutaneous injection may be regarded as a candidate administration route in clinical trials. Non-specific tissue distribution was observed after administration of (125)I-labeled mPEG-UHC in rats, and elimination by the kidneys into the urine is the primary excretion route. Copyright © 2012 Elsevier B.V. All rights reserved.
MPEG-21 in broadcasting: the novel digital broadcast item model
NASA Astrophysics Data System (ADS)
Lugmayr, Artur R.; Touimi, Abdellatif B.; Kaneko, Itaru; Kim, Jong-Nam; Alberti, Claudio; Yona, Sadigurschi; Kim, Jaejoon; Andrade, Maria Teresa; Kalli, Seppo
2004-05-01
The MPEG experts are currently developing the MPEG-21 set of standards and this includes a framework and specifications for digital rights management (DRM), delivery of quality of services (QoS) over heterogeneous networks and terminals, packaging of multimedia content and other things essential for the infrastructural aspects of multimedia content distribution. Considerable research effort is being applied to these new developments and the capabilities of MPEG-21 technologies to address specific application areas are being investigated. One such application area is broadcasting, in particular the development of digital TV and its services. In more practical terms, digital TV addresses networking, events, channels, services, programs, signaling, encoding, bandwidth, conditional access, subscription, advertisements and interactivity. MPEG-21 provides an excellent framework of standards to be applied in digital TV applications. Within the scope of this research work we describe a new model based on MPEG-21 and its relevance to digital TV: the digital broadcast item model (DBIM). The goal of the DBIM is to elaborate the potential of MPEG-21 for digital TV applications. Within this paper we focus on a general description of the DBIM, quality of service (QoS) management and metadata filtering, digital rights management and also present use-cases and scenarios where the DBIM's role is explored in detail.
Role of the Methoxy Group in Immune Responses to mPEG-Protein Conjugates
2012-01-01
Anti-PEG antibodies have been reported to mediate the accelerated clearance of PEG-conjugated proteins and liposomes, all of which contain methoxyPEG (mPEG). The goal of this research was to assess the role of the methoxy group in the immune responses to mPEG conjugates and the potential advantages of replacing mPEG with hydroxyPEG (HO-PEG). Rabbits were immunized with mPEG, HO-PEG, or t-butoxyPEG (t-BuO-PEG) conjugates of human serum albumin, human interferon-α, or porcine uricase as adjuvant emulsions. Assay plates for enzyme-linked immunosorbent assays (ELISAs) were coated with mPEG, HO-PEG, or t-BuO-PEG conjugates of the non-cross-reacting protein, porcine superoxide dismutase (SOD). In sera from rabbits immunized with HO-PEG conjugates of interferon-α or uricase, the ratio of titers of anti-PEG antibodies detected on mPEG-SOD over HO-PEG-SOD (“relative titer”) had a median of 1.1 (range 0.9–1.5). In contrast, sera from rabbits immunized with mPEG conjugates of three proteins had relative titers with a median of 3.0 (range 1.1–20). Analyses of sera from rabbits immunized with t-BuO-PEG-albumin showed that t-butoxy groups are more immunogenic than methoxy groups. Adding Tween 20 or Tween 80 to buffers used to wash the assay plates, as is often done in ELISAs, greatly reduced the sensitivity of detection of anti-PEG antibodies. Competitive ELISAs revealed that the affinities of antibodies raised against mPEG-uricase were c. 70 times higher for 10 kDa mPEG than for 10 kDa PEG diol and that anti-PEG antibodies raised against mPEG conjugates of three proteins had >1000 times higher affinities for albumin conjugates with c. 20 mPEGs than for analogous HO-PEG-albumin conjugates. Overall, these results are consistent with the hypothesis that antibodies with high affinity for methoxy groups contribute to the loss of efficacy of mPEG conjugates, especially if multiply-PEGylated. Using monofunctionally activated HO-PEG instead of mPEG in preparing conjugates for clinical use might decrease this undesirable effect. PMID:22332808
Novel polyethylene glycol derivative suitable for the preparation of mono-PEGylated protein.
Yun, Qiang; Chen, Ting; Zhang, Guifeng; Bi, Jingxiu; Ma, Guanghui; Su, Zhiguo
2005-02-01
A novel methoxypolyethylene glycol (mPEG) derivative, containing a reactive group of 1-methyl pyridinium toluene-4-sulfonate, was synthesized and characterized. The mPEG derivative was successfully conjugated with two proteins: recombinant human granulocyte-colony stimulating factor (rhG-CSF) and consensus interferon (C-IFN). Homogeneous mono-PEGylated proteins were obtained which were identified by high performance size-exclusion chromatography and MALDI-TOF mass spectrometry. The biological activities of the mono-PEGylated rhG-CSF and the mono-PEGylated C-IFN were maintained at 90% and 88%, respectively.
DCT based interpolation filter for motion compensation in HEVC
NASA Astrophysics Data System (ADS)
Alshin, Alexander; Alshina, Elena; Park, Jeong Hoon; Han, Woo-Jin
2012-10-01
The High Efficiency Video Coding (HEVC) draft standard has the challenging goal of doubling coding efficiency compared to H.264/AVC. Many aspects of the traditional hybrid coding framework were improved during development of the new standard. Motion-compensated prediction, in particular the interpolation filter, is one area that was improved significantly over H.264/AVC. This paper presents the details of the interpolation filter design of the draft HEVC standard. The coding efficiency improvement over the H.264/AVC interpolation filter is studied and experimental results are presented, which show a 4.0% average bitrate reduction for the luma component and an 11.3% average bitrate reduction for the chroma components. The coding efficiency gains are significant for some video sequences and can reach up to 21.7%.
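As a rough illustration of the separable fractional-sample interpolation discussed in the abstract above, the sketch below applies the published HEVC 8-tap half-sample luma filter in a row-column fashion (a simplified sketch only: rounding is applied after each 1-D pass, whereas the standard keeps higher intermediate precision, and the function name is hypothetical):

    import numpy as np

    # HEVC 8-tap luma filter taps for the half-sample position (gain 64).
    HALF_PEL_TAPS = np.array([-1, 4, -11, 40, 40, -11, 4, -1])

    def interpolate_half_pel(block):
        """Half-sample interpolation of a 2-D luma block that carries a
        3-sample border on every side so the 8-tap filter has support."""
        def filt_rows(a):
            out = np.zeros((a.shape[0], a.shape[1] - 7), dtype=np.int64)
            for k, c in enumerate(HALF_PEL_TAPS):
                out += c * a[:, k:k + out.shape[1]]
            return (out + 32) >> 6          # simplified per-pass rounding
        horizontal = filt_rows(block.astype(np.int64))
        return filt_rows(horizontal.T).T    # vertical pass on the transpose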
Zheng, Lan; Gou, Maling; Zhou, Shengtao; Yi, Tao; Zhong, Qian; Li, Zhengyu; He, Xiang; Chen, Xiancheng; Zhou, Lina; Wei, Yuquan; Qian, Zhiyong; Zhao, Xia
2011-06-01
Doxorubicin (Dox) is one of the most commonly used and highly effective antineoplastic agents, but the clinical application of this broad-spectrum drug is largely hampered by its poor stability and serious toxicity to normal tissues. Hence, it is essential to improve the therapeutic effect and decrease the systemic toxicity of doxorubicin administration. In our study, doxorubicin was incorporated into monomethoxy poly(ethylene glycol)-poly(ε-caprolactone) (MPEG-PCL) micelles by a self-assembly method. The cytotoxicity and cellular uptake efficiency of Dox-loaded MPEG-PCL (Dox/MPEG-PCL) micelles against B16-F10 murine melanoma cells were examined by the methylthiazoltetrazolium (MTT) test and flow cytometry. The antitumor activity of Dox/MPEG-PCL was evaluated in C57BL/6 mice injected subcutaneously with B16-F10 cells. Toxicity was evaluated in tumor-free mice. Meanwhile, tumor proliferation, intratumoral angiogenesis and apoptotic cells were evaluated by PCNA staining, CD31 staining and TUNEL assay, respectively. Encapsulation of doxorubicin in MPEG-PCL micelles improved the cytotoxicity of doxorubicin and enhanced its cellular uptake by B16-F10 cells in vitro. Administration of Dox/MPEG-PCL micelles resulted in significant inhibition (75% maximum inhibition relative to controls) of the growth of B16-F10 tumor xenografts and prolonged the survival of the treated mice (P<0.05). These anti-tumor responses were associated with a marked increase in tumor apoptosis and a notable reduction in cell proliferation and intratumoral microvessel density (P<0.05). Systemic toxicity was also lower in the Dox/MPEG-PCL group than in the free doxorubicin group. Our data indicate that encapsulation of doxorubicin in MPEG-PCL micelles improved its anti-tumor activity in vivo without conspicuous systemic toxic effects.
Switalla, S; Lauenstein, L; Prenzler, F; Knothe, S; Förster, C; Fieguth, H-G; Pfennig, O; Schaumann, F; Martin, C; Guzman, C A; Ebensen, T; Müller, M; Hohlfeld, J M; Krug, N; Braun, A; Sewald, K
2010-08-01
Prediction of lung innate immune responses is critical for developing new drugs. Well-established immune modulators like lipopolysaccharides (LPS) can elicit a wide range of immunological effects. They are involved in acute lung diseases such as infections or chronic airway diseases such as COPD. LPS has a strong adjuvant activity, but its pyrogenicity has precluded therapeutic use. The bacterial lipopeptide MALP-2 and its synthetic derivative BPPcysMPEG are better tolerated. We have compared the effects of LPS and BPPcysMPEG on the innate immune response in human precision-cut lung slices. Cytokine responses were quantified by ELISA, Luminex, and Meso Scale Discovery technology. The initial response to LPS and BPPcysMPEG was marked by coordinated and significant release of the mediators IL-1β, MIP-1β, and IL-10 in viable PCLS. Stimulation of lung tissue with BPPcysMPEG, however, induced a differential response. While LPS upregulated IFN-γ, BPPcysMPEG did not. This traces back to their signaling pathways via TLR4 and TLR2/6. The calculated exposure doses selected for LPS covered ranges occurring in clinical studies with human beings. Correlation of obtained data with data from human BAL fluid after segmental provocation with endotoxin showed highly comparable effects, resulting in a coefficient of correlation >0.9. Furthermore, we were interested in modulating the response to LPS. Using dexamethasone as an immunosuppressive drug for anti-inflammatory therapy, we found a significant reduction of GM-CSF, IL-1β, and IFN-γ. The PCLS-model offers the unique opportunity to test the efficacy and toxicity of biological agents intended for use by inhalation in a complex setting in humans. Copyright © 2010 Elsevier Inc. All rights reserved.
An efficient interpolation filter VLSI architecture for HEVC standard
NASA Astrophysics Data System (ADS)
Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang
2015-12-01
The next-generation video coding standard of High-Efficiency Video Coding (HEVC) is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40% of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. First, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed; it saves 19.7% of processing time on average with acceptable coding-quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the hardware implementation area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture supports real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.
NASA Astrophysics Data System (ADS)
Li, Hao; Niu, Dong-Hui; Zhou, Hui; Chao, Chun-Ying; Wu, Li-Jun; Han, Pei-Lin
2018-05-01
Hydroxyl-terminated polybutadiene grafted with methoxyl polyethylene glycol (HTPB-g-MPEG) copolymers with different arm lengths were synthesized by grafting methoxyl poly(ethylene glycol)s (MPEGs, Mn = 350, 750, 1900 and 5000, respectively) onto the hydroxyl-terminated polybutadiene (HTPB) molecule using isophorone diisocyanate (IPDI) as the coupling agent, and blended with PVDF to fabricate porous separators via a phase inversion process. By measuring composition, morphology, ion conductivity and other properties, the influence of HTPB-g-MPEG on the structure and properties of the blend separators was discussed. Compared with a pure PVDF separator with a comparable porous structure, the adoption of HTPB-g-MPEG not only decreased the crystallinity but also enhanced the stability of the entrapped liquid electrolyte and the corresponding ion conductivity. Cells assembled with such separators showed good initial discharge capacity and cyclic stability.
Simulation and Real-Time Verification of Video Algorithms on the TI C6400 Using Simulink
2004-08-20
Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei
2009-03-01
Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both the motion vectors and the motion-compensated residual frames of the right sequence is generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.
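A minimal sketch of the basic operation underlying the decoder-side processing described above: warping the decoded left view with a pixel-level disparity map to form a prediction of the right view. This is illustrative only; the actual scheme derives side information for motion vectors and residual frames, handles occlusions and sub-pixel disparities, and the function name and disparity sign convention here are assumptions:

    import numpy as np

    def warp_left_to_right(left_decoded, disparity):
        """Predict the right view: each right-view pixel (y, x) is taken
        from the left view at column x + d, with integer d = disparity[y, x];
        out-of-range columns are clamped to the frame border."""
        h, w = left_decoded.shape
        ys, xs = np.indices((h, w))
        src_x = np.clip(xs + disparity.astype(np.intp), 0, w - 1)
        return left_decoded[ys, src_x]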
Unified transform architecture for AVC, AVS, VC-1 and HEVC high-performance codecs
NASA Astrophysics Data System (ADS)
Dias, Tiago; Roma, Nuno; Sousa, Leonel
2014-12-01
A unified architecture for fast and efficient computation of the set of two-dimensional (2-D) transforms adopted by the most recent state-of-the-art digital video standards is presented in this paper. Contrasting with other designs with similar functionality, the presented architecture is supported on a scalable, modular and completely configurable processing structure. This flexible structure not only allows the architecture to be easily reconfigured to support different transform kernels, but also permits resizing it to efficiently support transforms of different orders (e.g. order-4, order-8, order-16 and order-32). Consequently, not only is it highly suitable for realizing high-performance multi-standard transform cores, but it also offers highly efficient implementations of specialized processing structures addressing only a reduced subset of transforms that are used by a specific video standard. The experimental results that were obtained by prototyping several configurations of this processing structure in a Xilinx Virtex-7 FPGA show the superior performance and hardware efficiency levels provided by the proposed unified architecture for the implementation of transform cores for the Advanced Video Coding (AVC), Audio Video coding Standard (AVS), VC-1 and High Efficiency Video Coding (HEVC) standards. In addition, such results also demonstrate the ability of this processing structure to realize multi-standard transform cores supporting all the standards mentioned above that are capable of processing the 8k Ultra High Definition Television (UHDTV) video format (7,680 × 4,320 at 30 fps) in real time.
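The 2-D transforms such cores compute are separable and can be expressed as two matrix products, Y = C·X·Cᵀ. A minimal numerical sketch is shown below; the order-4 kernel used here is the well-known H.264/AVC core transform, and other standards' kernels of the appropriate order could be substituted in the same way (names are illustrative):

    import numpy as np

    # Order-4 integer kernel of the H.264/AVC core transform.
    C4 = np.array([[1,  1,  1,  1],
                   [2,  1, -1, -2],
                   [1, -1, -1,  1],
                   [1, -2,  2, -1]])

    def forward_2d_transform(block, kernel):
        """Row-column (separable) 2-D transform: Y = C @ X @ C.T."""
        return kernel @ block @ kernel.T

    residual = np.arange(16).reshape(4, 4)          # toy 4x4 residual block
    coefficients = forward_2d_transform(residual, C4)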
With the VLT Interferometer towards Sharper Vision
NASA Astrophysics Data System (ADS)
2000-05-01
The Nova-ESO VLTI Expertise Centre Opens in Leiden (The Netherlands) European science and technology will gain further strength when the new, front-line Nova-ESO VLTI Expertise Centre (NEVEC) opens in Leiden (The Netherlands) this week. It is a joint venture of the Netherlands Research School for Astronomy (NOVA) (itself a collaboration between the Universities of Amsterdam, Groningen, Leiden, and Utrecht) and the European Southern Observatory (ESO). It is concerned with the Very Large Telescope Interferometer (VLTI). The Inauguration of the new Centre will take place on Friday, May 26, 2000, at the Gorlaeus Laboratory (Lecture Hall no. 1), Einsteinweg 55 2333 CC Leiden; the programme is available on the web. Media representatives who would like to participate in this event and who want further details should contact the Nova Information Centre (e-mail: jacques@astro.uva.nl; Tel: +31-20-5257480 or +31-6-246 525 46). The inaugural ceremony is preceded by a scientific workshop on ground and space-based optical interferometry. NEVEC: A Technology Centre of Excellence As a joint project of NOVA and ESO, NEVEC will develop in the coming years the expertise to exploit the unique interferometric possibilities of the Very Large Telescope (VLT) - now being built on Paranal mountain in Chile. Its primary goals are the development of instrument modeling, data reduction and calibration techniques for the VLTI; the accumulation of expertise relevant for second-generation VLTI instruments; and education in the use of the VLTI and related matters. NEVEC will develop optical equipment, simulations and software to enable interferometry with the VLT [1]. The new Centre provides a strong impulse to Dutch participation in the VLTI. With direct involvement in this R&D work, the scientists at NOVA will be in the front row to do observations with this unique research facility, bound to produce top-level research and many exciting new discoveries. The ESO VLTI at Paranal ESO PR Photo 14a/00. Caption: A view of the Paranal platform with the four 8.2-m VLT Unit Telescopes (UTs) and the foundations for the 1.8-m VLT Auxiliary Telescopes (ATs) that together will be used as the VLT Interferometer (VLTI). The three ATs will move on rails (yet to be installed) between the thirty observing stations above the holes that provide access to the underlying tunnel system. The light beams from the individual telescopes will be guided towards the centrally located, partly underground Interferometry Laboratory in which the VLTI instruments will be set up. This photo was obtained in December 1999, at which time some construction materials were still present on the platform; they were electronically removed in this reproduction. The ESO VLT facility at Paranal (Chile) consists of four Unit Telescopes with 8.2-m mirrors and several 1.8-m auxiliary telescopes that move on rails, cf. PR Photo 14a/00. While each of the large telescopes can be used individually for astronomical observations, a prime feature of the VLT is the possibility to combine all of these telescopes into the Very Large Telescope Interferometer (VLTI). In the interferometric mode, the light beams from the VLT telescopes are brought together at a common focal point in the Interferometry Laboratory that is placed at the centre of the observing platform on top of Paranal.
In principle, this can be done in such a way that the resulting (reconstructed) image appears to come from a virtual telescope with a diameter that is equal to the largest distance between two of the individual telescopes, i.e., up to about 200 metres. The theoretically achievable image sharpness of an astronomical telescope is proportional to its diameter (or, for an interferometer, the largest distance between two of its component telescopes). The interferometric observing technique will thus allow the VLTI to produce images as sharp as 0.001 arcsec (at wavelength 1 µm) - this corresponds to viewing the shape of a golfball at more than 8,000 km distance. The VLTI will do even better when this technique is later extended to shorter wavelengths in the visible part of the spectrum - it may ultimately distinguish human-size objects on the surface of the Moon (a 2-metre object at this distance, about 400,000 km, subtends an angle of about 0.001 arcsec). However, interferometry with the VLT demands that the wavefronts of light from the individual telescopes, which are up to 200 metres apart, be matched exactly, with less than 1 wavelength of difference. This demands continuous mechanical stability to a fraction of 1 µm (0.001 mm) for the heavy components over such large distances, and is a technically formidable challenge. It is achieved by electronic feed-back loops that measure and adjust the distances during the observations. In addition, continuous and automatic correction of image distortions from air turbulence in the telescopes' field of view is performed by means of adaptive optics [2]. VLTI technology at ESO, industry and institutes The VLT Interferometer is based on front-line technologies introduced and advanced by ESO, and its many parts are now being constructed at various sites in Europe. ESO PR Photo 14b/00. Caption: Schematic lay-out of the VLT Interferometer. The light from a distant celestial object enters two of the VLT telescopes and is reflected by the various mirrors into the Interferometric Tunnel, below the observing platform on the top of Paranal. Two Delay Lines with moveable carriages continuously adjust the length of the paths so that the two beams interfere constructively and produce fringes at the interferometric focus in the laboratory. In 1998, Fokker Space (also in Leiden, The Netherlands) was awarded a contract for the delivery of the three Delay Lines of the VLTI. This mechanical-optical system will compensate the optical path differences of the light beams from the individual telescopes. It is necessary to ensure that the light from all telescopes arrives in the same phase at the focal point of the interferometer; otherwise, the very sharp interferometric images cannot be obtained. More details are available in the corresponding ESO PR 04/98 and recent video sequences, included in ESO Video News Reel No. 9 and Video Clip 04a/00, cf. below. Also in 1998, the company AMOS (Liège, Belgium) was awarded an ESO contract for the delivery of the three 1.8-m Auxiliary Telescopes (ATs) and of the full set of on-site equipment for the 30 AT observing stations, cf. ESO PR Photos 25a-b/98. This work is now in progress at the factory - various scenes are incorporated into ESO Video News Reel No. 9 and Video Clip 04b/00. Several instruments for imaging and spectroscopy are currently being developed for the VLTI.
The first will be the VLT Interferometer Commissioning Instrument (VINCI), the test and first-light instrument for the VLT Interferometer. It is being built by a consortium of French and German institutes under ESO contract. The VLTI Near-Infrared/Red Focal Instrument (AMBER) is a collaborative project between five institutes in France, Germany and Italy, under ESO contract. It will operate with two 8.2-m UTs in the wavelength range between 1 and 2.5 µm during a first phase (2001-2003). The wavelength coverage will be extended in a second phase down to 0.6 µm (600 nm) by the time the ATs become operational. Its main scientific objectives are the investigation at very high angular resolution of disks and jets around young stellar objects and of dust tori in active galactic nuclei, with spectroscopic observations. The Phase-Referenced Imaging and Microarcsecond Astrometry (PRIMA) device is managed by ESO and will allow simultaneous interferometric observations of two objects - each with a maximum size of 2 arcsec - and provide exceedingly accurate positional measurements. This will be of importance for many different kinds of astronomical investigations, for instance the search for planetary companions by means of accurate astrometry. The MID-Infrared interferometric instrument (MIDI) is a collaborative project between eight institutes in France, Germany and the Netherlands [1], under ESO contract. The actual design of MIDI is optimized for operation at 10 µm, and a possible extension to 20 µm is being considered. Notes [1] The NEVEC Centre is involved in the MIDI project for the VLTI. Another joint project between ESO and NOVA is the Wide-Field Imager OMEGACAM for the VLT Survey Telescope (VST) that will be placed at Paranal. [2] Adaptive Optics systems allow an astronomical telescope to be continuously "re-focused" in order to compensate for the atmospheric turbulence and thus to obtain the sharpest possible images. The work at ESO is described on the Adaptive Optics Team Homepage. VLTI-related videos now available In conjunction with the Inauguration of the NEVEC Centre (Leiden, The Netherlands) on May 26, 2000, ESO has issued ESO Video News Reel No. 9 (May 2000) ("The Sharpest Vision - Interferometry with the VLT"). Tapes with this VNR, suitable for transmission and in full professional quality (Betacam, etc.), are now available for broadcasters upon request; please contact the ESO EPR Department for more details. Extracts from this VNR are available as ESO Video Clips 04a/00 and 04b/00. ESO PR Video Clip 04a/00 (2600 frames/1:44 min) shows some recent tests with the prototype VLT Delay Line carriage at Fokker Space (Leiden, The Netherlands). This device is crucial for the proper functioning of the VLTI and will be mounted in the main interferometric tunnel at Paranal. Contents: Outside view of the Fokker site. The carriage on rails. The protecting cover is removed. View towards the cat's eye. The carriage moves on the rails. ESO PR Video Clip 04b/00 (3425 frames/2:17 min) shows the construction of the 1.8-m VLT Auxiliary Telescopes at AMOS (Liège, Belgium).
Contents: External view of the facility. Computer drawing of the mechanics. The 1.8-m mirror (graphics). Construction of the centerpiece of the telescope tube. Mechanical parts. Checking the optical shape of a 1.8-m mirror. Mirror cell with supports for the 1.8-m mirror. Test ramp with rails on which the telescope moves and an "observing station" (the hole). The telescope yoke that will support the telescope tube. Both clips are available in four versions: two MPEG files and two streamer versions of different sizes; the latter require RealPlayer software. They may be freely reproduced if ESO is mentioned as the source. Most of the ESO PR Video Clips at the ESO website provide "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was ESO PR Video Clip 03/00, with a trailer for "Physics on Stage" (2 May 2000). Information about other ESO videos is also available on the web.
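As a rough check of the angular-resolution figures quoted in the press release above (a worked example only, using the small-angle approximation and 1 rad ≈ 2.06 × 10^5 arcsec):

    \theta \approx \frac{\lambda}{B} = \frac{1\times10^{-6}\,\mathrm{m}}{200\,\mathrm{m}} = 5\times10^{-9}\,\mathrm{rad} \approx 0.001''
    \frac{2\,\mathrm{m}}{4\times10^{8}\,\mathrm{m}} = 5\times10^{-9}\,\mathrm{rad} \approx 0.001'' \quad \text{(2-m object on the Moon)}
    \frac{0.043\,\mathrm{m}}{8\times10^{6}\,\mathrm{m}} \approx 5.4\times10^{-9}\,\mathrm{rad} \approx 0.0011'' \quad \text{(golf ball at 8{,}000 km)}

so the quoted 0.001 arcsec figure, the Moon example and the golf-ball comparison are mutually consistent.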
NASA Astrophysics Data System (ADS)
Worqlul, Abeyou W.; Ayana, Essayas K.; Maathuis, Ben H. P.; MacAlister, Charlotte; Philpot, William D.; Osorio Leyton, Javier M.; Steenhuis, Tammo S.
2018-01-01
In many developing countries and remote areas of important ecosystems, good-quality precipitation data are neither available nor readily accessible. Satellite observations and processing algorithms are being extensively used to produce satellite rainfall estimates (SREs). Nevertheless, these products are prone to systematic errors and need extensive validation before they can be used for streamflow simulations. In this study, we investigated and corrected the bias of Multi-Sensor Precipitation Estimate-Geostationary (MPEG) data. The corrected MPEG dataset was used as input to the semi-distributed hydrological model Hydrologiska Byråns Vattenbalansavdelning (HBV) for simulation of the discharge of the Gilgel Abay and Gumara watersheds in the Upper Blue Nile basin, Ethiopia. The results indicated that the MPEG satellite rainfall captured 81% and 78% of the gauged rainfall variability, with a consistent bias of underestimating the gauged rainfall by 60%. A linear bias correction significantly reduced the bias while maintaining the coefficient of correlation. Streamflow simulated with the bias-corrected MPEG SRE was comparable to that simulated with gauged rainfall for both watersheds. The study indicated the potential of MPEG SREs in water budget studies after applying a linear bias correction.
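A minimal sketch of the kind of linear bias correction described above, fitted by least squares against the gauge record (illustrative only; the variable names are hypothetical and the study's actual regression details may differ):

    import numpy as np

    def linear_bias_correction(sre, gauge):
        """Fit corrected = a * sre + b to the gauge rainfall and return
        the corrected satellite series."""
        a, b = np.polyfit(sre, gauge, deg=1)
        return a * sre + b

    # toy example: satellite estimates that underestimate the gauge by ~60%
    gauge = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
    sre = 0.4 * gauge
    corrected = linear_bias_correction(sre, gauge)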
Zhang, Guolin; Ma, Jianbiao; Li, Yanhong; Wang, Yinong
2003-01-01
Di-block co-polymers of poly(L-alanine) with poly(ethylene glycol) monomethyl ether (MPEG) were synthesized as amphiphilic biodegradable co-polymers. The ring-opening polymerization of N-carboxy-L-alanine anhydride (NCA) in dichloromethane was initiated by amino-terminated poly(ethylene glycol) monomethyl ether (MPEG-NH2, M(n) = 2000) to afford poly(L-alanine)-block-MPEG. The weight ratio of the two blocks in the co-polymers could be altered by adjusting the feeding ratio of NCA to MPEG-NH2. Their chemical structures were characterized by infrared spectrometry and nuclear magnetic resonance. According to circular dichroism measurements, the poly(L-alanine) chain of the co-polymers in an aqueous medium adopted an alpha-helix conformation. Two melting points, from the MPEG block and the poly(L-alanine) block, respectively, could be observed in differential scanning calorimetry curves of the co-polymers, suggesting that micro-domain phase separation occurs in their bulk states. The co-polymers could take up some water, and the capacity was dependent on the ratio of the poly(L-alanine) block to MPEG. Such co-polymers might be useful in drug-delivery systems and other biomedical applications.
Mpeg2 codec HD improvements with medical and robotic imaging benefits
NASA Astrophysics Data System (ADS)
Picard, Wayne F. J.
2010-02-01
In this report, we propose an efficient scheme to use a High Definition Television (HDTV) set in a console or notebook format as a computer terminal in addition to its role as a TV display unit. In the proposed scheme, we assume that the main computer is situated at a remote location. The computer raster in the remote server is compressed using an HD E->Mpeg2 encoder and transmitted to the terminal at home. The built-in E->Mpeg2 decoder in the terminal decompresses the compressed bit stream and displays the raster. The terminal is fitted with a mouse and keyboard, through which interaction with the remote computer server can be performed via a communications back channel. The terminal in a notebook format can thus be used as a high-resolution computer and multimedia device. We will consider developments such as the required HD enhanced Mpeg2 resolution (E->Mpeg2) and its medical ramifications due to improvements in compressed image quality, 2D to 3D conversion (Mpeg3), and the use of the compressed Discrete Cosine Transform coefficients in the reality compression of vision and control for medical robotic surgeons.
Ulusoy, Mehriban; Jonczyk, Rebecca; Walter, Johanna-Gabriela; Springer, Sergej; Lavrentieva, Antonina; Stahl, Frank; Green, Mark; Scheper, Thomas
2016-02-17
Ligands used on the surface of colloidal nanoparticles (NPs) have a significant impact on physiochemical properties of NPs and their interaction in biological environments. In this study, we report a one-pot aqueous synthesis of 3-mercaptopropionic acid (MPA)-functionalized CdTe/CdS/ZnS quantum dots (Qdots) in the presence of thiol-terminated methoxy polyethylene glycol (mPEG) molecules as a surface coordinating ligand. The resulting mPEG-Qdots were characterized by using ζ potential, FTIR, thermogravimetric (TG) analysis, and microscale thermophoresis (MST) studies. We investigated the effect of mPEG molecules and their grafting density on the Qdots photophysical properties, colloidal stability, protein binding affinity, and in vitro cellular toxicity. Moreover, cellular binding features of the resulting Qdots were examined by using three-dimensional (3D) tumor-like spheroids, and the results were discussed in detail. Promisingly, mPEG ligands were found to increase colloidal stability of Qdots, reduce adsorption of proteins to the Qdot surface, and mitigate Qdot-induced side effects to a great extent. Flow cytometry and confocal microscopy studies revealed that PEGylated Qdots exhibited distinctive cellular interactions with respect to their mPEG grafting density. As a result, mPEG molecules demonstrated a minimal effect on the ZnS shell deposition and the Qdot fluorescence efficiency at a low mPEG density, whereas they showed pronounced effect on Qdot colloidal stability, protein binding affinity, cytotoxicity, and nonspecific binding at a higher mPEG grafting amount.
High-throughput sample adaptive offset hardware architecture for high-efficiency video coding
NASA Astrophysics Data System (ADS)
Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin
2018-03-01
A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the High Efficiency Video Coding (HEVC) standard is presented. First, an implementation-friendly and simplified bitrate estimation method for the rate-distortion cost calculation is proposed to reduce the computational complexity of the SAO mode decision. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for the in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filter architecture to improve the system throughput and filtering speed. Experimental results show that the proposed in-loop filter architecture can achieve up to 48% higher throughput in comparison with prior work. The proposed architecture can reach a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meets the real-time requirement of the in-loop filters for the 8K × 4K video format at 132 fps.
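For reference, a minimal software sketch of the SAO edge-offset classification that such hardware accelerates, shown here for the horizontal edge-offset class (the category rule follows the HEVC specification; the offset values would normally be chosen by the encoder, and 8-bit clipping is assumed):

    import numpy as np

    def sao_edge_offset(rec, offsets):
        """Apply SAO edge offsets along the horizontal direction.
        For each interior sample c with left/right neighbours a and b,
        sign(c-a) + sign(c-b) maps to: -2 -> category 1 (local valley),
        -1 -> category 2, +1 -> category 3, +2 -> category 4 (local peak),
        0 -> no offset. `offsets` maps categories 1..4 to signed values."""
        out = rec.astype(np.int32).copy()
        a = rec[:, :-2].astype(np.int32)
        c = rec[:, 1:-1].astype(np.int32)
        b = rec[:, 2:].astype(np.int32)
        cat = np.sign(c - a) + np.sign(c - b)
        for sign_sum, category in ((-2, 1), (-1, 2), (1, 3), (2, 4)):
            out[:, 1:-1][cat == sign_sum] += offsets[category]
        return np.clip(out, 0, 255).astype(rec.dtype)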
Zhao, Guangyao; Tong, Lemuel; Cao, Pengpeng; Nitz, Mark; Winnik, Mitchell A
2014-06-17
Developing surface coatings for NaLnF4 nanoparticles (NPs) that provide long-term stability in solutions containing competitive ions such as phosphate remains challenging. An amine-functional polyamidoamine tetraphosphonate (NH2-PAMAM-4P) as a multidentate ligand for these NPs has been synthesized and characterized as a ligand for the surface of NaGdF4 and NaTbF4 nanoparticles. A two-step ligand exchange protocol was developed for introduction of the NH2-PAMAM-4P ligand on oleate-capped NaLnF4 NPs. The NPs were first treated with methoxy-poly(ethylene glycol)-monophosphoric acid (M(n) = 750) in tetrahydrofuran. The mPEG750-OPO3-capped NPs were stable colloidal solutions in water, where they could be ligand-exchanged with NH2-PAMAM-4P. The surface amine groups on the NPs were available for derivatization to attach methoxy-PEG (M(n) = 2000) and biotin-terminated PEG (M(n) = 2000) chains. The surface coverage of ligands on the NPs was examined by thermal gravimetric analysis, and by a HABA analysis for biotin-containing NPs. Colloidal stability of the NPs was examined by dynamic light scattering. NaGdF4 and NaTbF4 NPs capped with mPEG2000-PAMAM-4P showed colloidal stability in DI water and in phosphate buffer (10 mM, pH 7.4). A direct comparison with NaTbF4 NPs capped with a mPEG2000-lysine-based tetradentate ligand that we reported previously (Langmuir 2012, 28, 12861-12870) showed that both ligands provided long-term stability in phosphate buffer, but that the lysine-based ligand provided better stability in phosphate-buffered saline.
Li, Zhenbao; Han, Xiaopeng; Zhai, Yinglei; Lian, He; Zhang, Dong; Zhang, Wenjuan; Wang, Yongjun; He, Zhonggui; Liu, Zheng; Sun, Jin
2015-06-01
PEGylation is widely used to prolong the blood circulation time of proteins and nanoparticles after intravenous administration, but the effect of surface poly(ethylene glycol) (PEG) chain length on the oral absorption of PEGylated nanoparticles is poorly documented. The aim of our study was to investigate the influence of PEG corona chain length on the membrane permeability and oral bioavailability of amphiphilic PEGylated prodrug-based nanomicelles, taking all-trans-retinoic acid (ATRA) as a model drug. The amphiphilic ATRA-PEG conjugates were synthesized by an esterification reaction between all-trans-retinoic acid and mPEGs (mPEG500, mPEG1000, mPEG2000, and mPEG5000). The conjugates could self-assemble in aqueous medium to form nanomicelles by an emulsion-solvent evaporation method. The resultant nanomicelles were spherical with an average diameter of 13-20 nm. The drug loading efficiency of ATRA-PEG500, ATRA-PEG1000, ATRA-PEG2000, and ATRA-PEG5000 was about 38.4, 26.6, 13.1, and 5.68 wt%, respectively. With PEG chain length ranging from 500 to 5000, ATRA-PEG nanomicelles exhibited a bell-shaped chemical stability profile in different pH buffers, intestinal homogenate, and plasma. More importantly, they were all rapidly hydrolyzed into the parent drug in hepatic homogenate, with half-life values of 0.3-0.4 h. Among ATRA solution and the ATRA prodrug-based nanomicelles, ATRA-PEG1000 showed the highest intestinal permeability. After oral administration, ATRA-PEG2000 and ATRA-PEG5000 nanomicelles were hardly absorbed, while the oral bioavailability of ATRA-PEG500 and ATRA-PEG1000 was about 1.2- and 2.0-fold higher, respectively, than that of ATRA solution. Our results indicated that the PEG1000 chain length gives ATRA-PEG prodrug nanomicelles the optimal oral bioavailability, probably owing to improved stability and a balance between mucus-penetration capability and cell binding, and that the PEG chain length on the nanoparticle surface should not exceed a key threshold if oral bioavailability is to be enhanced. Copyright © 2015. Published by Elsevier B.V.
Zhang, Can Yang; Xiong, Di; Sun, Yao; Zhao, Bin; Lin, Wen Jing; Zhang, Li Juan
2014-01-01
A novel amphiphilic triblock pH-sensitive poly(β-amino ester)-g-poly(ethylene glycol) methyl ether-cholesterol (PAE-g-MPEG-Chol) copolymer was designed and synthesized via Michael-type step polymerization and esterification condensation. The synthesized copolymer was characterized by proton nuclear magnetic resonance and gel permeation chromatography. The grafting percentages of MPEG and cholesterol were determined to be 10.93% and 62.02%, respectively, calculated from the areas of the characteristic peaks. The amphiphilic copolymer was confirmed to self-assemble into core/shell micelles in aqueous solution at low concentrations. The critical micelle concentrations were 6.92 and 15.14 mg/L at pH 7.4 and 6.0, respectively, and were clearly influenced by the change in pH. The solubility of the pH-responsive PAE segment could be switched at different pH values because of protonation-deprotonation of the amino groups, which accounts for the pH sensitivity of the copolymer. The average particle size of the micelles increased from 125 nm to 165 nm as the pH decreased, and the zeta potential also changed significantly. Doxorubicin (DOX) was entrapped into the polymeric micelles with a high drug loading level. The in vitro DOX release from the micelles was distinctly enhanced as the pH decreased from 7.4 to 6.0. Toxicity testing proved that the DOX-loaded micelles exhibited high cytotoxicity in HepG2 cells, whereas the copolymer itself showed low toxicity. The results demonstrated that pH-sensitive PAE-g-MPEG-Chol micelles are a potential vector for hydrophobic drug delivery in tumor therapy. PMID:25364250
Progressive video coding for noisy channels
NASA Astrophysics Data System (ADS)
Kim, Beong-Jo; Xiong, Zixiang; Pearlman, William A.
1998-10-01
We extend the work of Sherwood and Zeger to progressive video coding for noisy channels. By utilizing a 3D extension of the set partitioning in hierarchical trees (SPIHT) algorithm, we cascade the resulting 3D SPIHT video coder with a rate-compatible punctured convolutional channel coder for transmission of video over a binary symmetric channel. Progressive coding is achieved by increasing the target rate of the 3D embedded SPIHT video coder as the channel condition improves. The performance of the proposed coding system is acceptable at low transmission rates and under poor channel conditions. Its low complexity makes it suitable for emerging applications such as video over wireless channels.
Camouflaging endothelial cells: does it prolong graft survival?
Stuhlmeier, K M; Lin, Y
1999-08-05
Camouflaging antigens on the surface of cells seems an appealing way to prevent activation of the immune system. We explored the possibility of preventing hyperacute rejection by chemically camouflaging endothelial cells (EC). In vitro as well as in vivo experiments were performed. First, the ability of mPEG coating to prevent antibody-antigen interactions was evaluated. Second, we tested the degree to which mPEG coating prevents activation of EC by stimuli such as TNF-alpha and LPS. Third, in vivo experiments were performed to test the ability of mPEG coating to prolong xenograft survival. We demonstrate that binding of several antibodies to EC or serum proteins can be inhibited by mPEG. Furthermore, binding of TNF-alpha as well as LPS to EC is blocked since mPEG treatment of EC inhibits the subsequent up-regulation of E-selectin by these stimuli. However, in vivo experiments revealed that currently this method alone is not sufficient to prevent hyperacute rejection.
Beugin, S; Edwards, K; Karlsson, G; Ollivon, M; Lesieur, S
1998-01-01
Monomethoxypoly(ethylene glycol) cholesteryl carbonates (M-PEG-Chol) with polymer chain molecular weights of 1000 (M-PEG1000-Chol) and 2000 (M-PEG2000-Chol) have been newly synthesized and characterized. Their aggregation behavior in mixture with diglycerol hexadecyl ether (C16G2) and cholesterol has been examined by cryotransmission electron microscopy, high-performance gel exclusion chromatography, and quasielastic light scattering. Nonaggregated, stable, unilamellar vesicles were obtained at low polymer levels with optimal shape and size homogeneity at cholesteryl conjugate/ lipids ratios of 10 mol% M-PEG1000-Chol or 5 mol% M-PEG2000-Chol, corresponding to the theoretically predicted brush conformational state of the PEG chains. At 20 mol% M-PEG1000-Chol or 10 mol% M-PEG2000-Chol, the saturation threshold of the C16G2/cholesterol membrane in polymer is exceeded, and open disk-shaped aggregates are seen in coexistence with closed vesicles. Higher levels up to 30 mol% lead to the complete solubilization of the vesicles into disk-like structures of decreasing size with increasing PEG content. This study underlines the bivalent role of M-PEG-Chol derivatives: while behaving as solubilizing surfactants, they provide an efficient steric barrier, preventing the vesicles from aggregation and fusion over a period of at least 2 weeks. PMID:9635773
Jangö, Hanna; Gräs, Søren; Christensen, Lise; Lose, Gunnar
2017-02-01
Alternative approaches to reinforce the native tissue in patients with pelvic organ prolapse (POP) are needed to improve surgical outcome. Our aims were to develop a weakened abdominal wall in a rat model to mimic the weakened vaginal wall in women with POP and then evaluate the regenerative potential of a quickly biodegradable synthetic scaffold, methoxypolyethylene glycol polylactic-co-glycolic acid (MPEG-PLGA), seeded with autologous muscle fiber fragments (MFFs) using this model. In an initial pilot study with 15 animals, significant weakening of the abdominal wall and a feasible technique was established by creating a partial defect with removal of one abdominal muscle layer. Subsequently, 18 rats were evenly divided into three groups: (1) unrepaired partial defect; (2) partial defect repaired with MPEG-PLGA; (3) partial defect repaired with MPEG-PLGA and MFFs labeled with PKH26-fluorescence dye. After 8 weeks, we performed histopathological and immunohistochemical testing, fluorescence analysis, and uniaxial biomechanical testing. Both macroscopically and microscopically, the MPEG-PLGA scaffold was fully degraded, with no signs of an inflammatory or foreign-body response. PKH26-positive cells were found in all animals from the group with added MFFs. Analysis of variance (ANOVA) showed a significant difference between groups with respect to load at failure (p = 0.028), and post hoc testing revealed that the group with MPEG-PLGA and MFFs showed a significantly higher strength than the group with MPEG-PLGA alone (p = 0.034). Tissue-engineering with MFFs seeded on a scaffold of biodegradable MPEG-PLGA might be an interesting adjunct to future POP repair.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaunt, R.
1997-05-01
An international standard has emerged for the first true multimedia format. Digital Versatile Disk is its official name; you may know it as the Digital Video Disk. DVD has applications in movies, music, games, information CD-ROMs, and many other areas where massive amounts of digital information are needed. Did I say massive amounts of data? Would you believe over 17 gigabytes on a single piece of plastic the size of an audio CD? That's the promise, at least, of the group of nine electronics manufacturers who have agreed to the format specification, and who hope to make this goal a reality by 1998. In this major agreement, which didn't come easily, the manufacturers will combine Sony and Philips' single-sided, double-layer MMCD format with Toshiba and Matsushita's double-sided Super-Density disk. By Spring of this year, they plan to market the first 4.7 gigabyte units. The question is: Will DVD take off? Some believe that read-only disks recorded with movies will be about as popular as video laser disks. They say that until the erasable/writable DVD arrives, the consumer will most likely not buy it. Also, DVD has a good market as a replacement for CD-ROMs. Back in the early 80s, the international committee deciding the format of the audio compact disk decided its length would be 73 minutes. This, they declared, would allow Beethoven's 9th Symphony to be contained entirely on a single CD. Similarly, today it was agreed that the playback length of a single-sided, single-layer DVD would be 133 minutes, long enough to hold 94% of all feature-length movies. Further, audio can be in Dolby's AC-3 stereo or 5.1 tracks of surround sound, better than CD-quality audio (16 bits at 48 kHz). In addition, there are three to five language tracks, copy protection and parental "locks" for R-rated movies. DVD will be backwards compatible with current CD-ROM and audio CD formats. Added versatility comes by way of multiple aspect ratios: 4:3 pan-scan, 4:3 letterbox, and 16:9 widescreen. MPEG-2 is the selected image compression format, with full ITU Rec. 601 video resolution (720x480). MPEG-2 and AC-3 are also part of the U.S. high-definition Advanced Television standard (ATV). DVD has an average video bit rate of 3.5 Mbits/sec, or 4.69 Mbits/sec for image and sound. Unlike digital television transmission, which will use fixed-length packets for audio and video, DVD will use variable-length packets with a maximum throughput of more than 10 Mbits/sec. The higher bit rate allows for less compression of difficult-to-encode material. Even with all the compression, narrow-beam red-light lasers are required to significantly increase the physical data density of a platter by decreasing the size of the pits. This allows 4.7 gigabytes of data on a single-sided, single-layer DVD. The maximum 17 gigabyte capacity is achieved by employing two reflective layers on both sides of the disk. To read the embedded layer of data, the laser's focal length is altered so that the top-layer pits are not picked up by the reader. It will be a couple of years before we have dual-layer, double-sided DVDs, and it will be achieved in four stages. The first format to appear will be the single-sided, single-layer disk (4.7 gigabytes). That will allow Hollywood to begin releasing DVD movie titles. DVD-ROM will be the next phase, allowing 4.7 gigabytes of CD-ROM-like content. The third stage will be write-once disks, and stage four will be rewritable disks. These last stages present some issues which have yet to be resolved.
For one, copyrighted materials may have some form of payment system, and there is the issue that erasable disks reflect less light than today's DVDs. The problem here is that their data most likely will not be readable on earlier-built players.
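The quoted single-layer playing time can be checked directly from the capacity and the average combined bit rate given above (a worked example):

    4.7\ \mathrm{GB} \times 8 \approx 3.76\times10^{10}\ \mathrm{bits},\qquad
    \frac{3.76\times10^{10}\ \mathrm{bits}}{4.69\times10^{6}\ \mathrm{bits/s}} \approx 8.0\times10^{3}\ \mathrm{s} \approx 134\ \mathrm{min},

which matches the stated 133-minute figure to within rounding.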
Yin, Lei; Su, Chong; Ren, Tianming; Meng, Xiangjun; Shi, Meiyun; Paul Fawcett, J; Zhang, Mengliang; Hu, Wei; Gu, Jingkai
2017-11-06
The covalent attachment of polyethylene glycol (PEG) to therapeutic compounds (known as PEGylation) is one of the most promising techniques to improve the biological efficacy of small molecular weight drugs. After administration, PEGylated prodrugs can be metabolized into pharmacologically active compounds, so that PEGylated drug, free drug and released PEG are present simultaneously in the body. Understanding the pharmacokinetic behavior of these three compounds is needed to guide the development of PEGylated theranostic agents. However, PEGs are polydisperse molecules with a wide range of molecular weights, so the simultaneous quantitation of PEGs and PEGylated molecules in biological matrices is very challenging. This article reports the application of a data-independent acquisition method (MSAll) based on liquid chromatography electrospray ionization quadrupole time-of-flight mass spectrometry (LC-q-q-TOF-MS) in the positive ion mode to the simultaneous determination of methoxyPEG2000-doxorubicin (mPEG2K-Dox) and its breakdown products in rat blood. Using the MSAll technique, precursor ions of all molecules are generated in q1, fragmented to product ions in q2 (the collision cell), and subjected to TOF separation before precursor and product ions are recorded using low and high collision energies (CE), respectively, in different experiments for a single sample injection. In this study, dissociation in q2 generated a series of high-resolution PEG-related product ions at m/z 89.0611, 133.0869, 177.1102, 221.1366, 265.1622, 309.1878, and 353.2108, corresponding to fragments containing various numbers of ethylene oxide subunits, Dox-related product ions at m/z 321.0838 and 361.0785, and an mPEG2K-Dox-specific product ion at m/z 365.0735. Detection of mPEGs and mPEG2K-Dox was based on high-resolution extracted ions of mPEG and of the specific compound. The method was successfully applied to a pharmacokinetic study of doxorubicin, mPEG2K (methylated polyethylene glycol 2K), and mPEG2K-doxorubicin in rats after a single intravenous injection of mPEG2K-doxorubicin. To the best of our knowledge, this is the first assay that simultaneously determines mPEG, Dox, and mPEG2K-Dox in a biological matrix. We believe the MSAll technique as applied in this study can potentially be extended to the determination of other PEGylated small molecules or polymeric compounds.
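The PEG-related product-ion series quoted above is consistent with a spacing of one ethylene oxide repeat unit (monoisotopic mass about 44.026 Da); a small check, for illustration only:

    ions = [89.0611, 133.0869, 177.1102, 221.1366, 265.1622, 309.1878, 353.2108]
    spacings = [round(b - a, 4) for a, b in zip(ions, ions[1:])]
    # spacings ~ [44.0258, 44.0233, 44.0264, 44.0256, 44.0256, 44.0230]
    # cf. C2H4O monoisotopic mass = 2*12 + 4*1.00783 + 15.99491 = 44.0262 Da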
Simulator sickness analysis of 3D video viewing on passive 3D TV
NASA Astrophysics Data System (ADS)
Brunnström, K.; Wang, K.; Andrén, B.
2013-03-01
The MPEG 3DV project is working on the next generation video encoding standard, and in this process a call for proposals of encoding algorithms was issued. To evaluate these algorithms, a large-scale subjective test was performed involving laboratories all over the world. For the participating labs it was optional to administer a slightly modified Simulator Sickness Questionnaire (SSQ) from Kennedy et al. (1993) before and after the test. Here we report the results from one lab (Acreo) located in Sweden. The videos were shown on a 46-inch film-pattern-retarder 3D TV, where the viewers used polarized passive eyeglasses to view the stereoscopic 3D video content. There were 68 viewers participating in this investigation, with ages ranging from 16 to 72 and one third females. The questionnaire was filled in before and after the test, with a viewing time ranging from 30 minutes to about one and a half hours, which is comparable to a feature-length movie. The SSQ consists of 16 different symptoms that have been identified as important for indicating simulator sickness. When analyzing the individual symptoms it was found that Fatigue, Eye-strain, Difficulty Focusing and Difficulty Concentrating were significantly worse after than before. The SSQ was also analyzed according to the model suggested by Kennedy et al. (1993). All in all, this investigation shows a statistically significant increase in symptoms after viewing 3D video, especially those related to the visual or oculomotor system.
Cardiac ultrasonography over 4G wireless networks using a tele-operated robot
Panayides, Andreas S.; Jossif, Antonis P.; Christoforou, Eftychios G.; Vieyres, Pierre; Novales, Cyril; Voskarides, Sotos; Pattichis, Constantinos S.
2016-01-01
This Letter proposes an end-to-end mobile tele-echography platform using a portable robot for remote cardiac ultrasonography. Performance evaluation investigates the capacity of long-term evolution (LTE) wireless networks to facilitate responsive robot tele-manipulation and real-time ultrasound video streaming that qualifies for clinical practice. Within this context, a thorough video coding standards comparison for cardiac ultrasound applications is performed, using a data set of ten ultrasound videos. Both objective and subjective (clinical) video quality assessment demonstrate that H.264/AVC and high efficiency video coding standards can achieve diagnostically-lossless video quality at bitrates well within the LTE supported data rates. Most importantly, reduced latencies experienced throughout the live tele-echography sessions allow the medical expert to remotely operate the robot in a responsive manner, using the wirelessly communicated cardiac ultrasound video to reach a diagnosis. Based on preliminary results documented in this Letter, the proposed robotised tele-echography platform can provide for reliable, remote diagnosis, achieving comparable quality of experience levels with in-hospital ultrasound examinations. PMID:27733929
Adaptive rood pattern search for fast block-matching motion estimation.
Nie, Yao; Ma, Kai-Kuang
2002-01-01
In this paper, we propose a novel and simple fast block-matching algorithm (BMA), called adaptive rood pattern search (ARPS), which consists of two sequential search stages: 1) initial search and 2) refined local search. For each macroblock (MB), the initial search is performed only once at the beginning in order to find a good starting point for the follow-up refined local search. By doing so, unnecessary intermediate search and the risk of being trapped into local minimum matching error points could be greatly reduced in long search case. For the initial search stage, an adaptive rood pattern (ARP) is proposed, and the ARP's size is dynamically determined for each MB, based on the available motion vectors (MVs) of the neighboring MBs. In the refined local search stage, a unit-size rood pattern (URP) is exploited repeatedly, and unrestrictedly, until the final MV is found. To further speed up the search, zero-motion prejudgment (ZMP) is incorporated in our method, which is particularly beneficial to those video sequences containing small motion contents. Extensive experiments conducted based on the MPEG-4 Verification Model (VM) encoding platform show that the search speed of our proposed ARPS-ZMP is about two to three times faster than that of the diamond search (DS), and our method even achieves higher peak signal-to-noise ratio (PSNR) particularly for those video sequences containing large and/or complex motion contents.
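To make the two-stage search described above concrete, here is a minimal sketch of the rood-pattern idea: an adaptive rood whose arm length is set from a neighbouring macroblock's motion vector, followed by repeated unit-size rood refinement. The SAD cost, the use of the left neighbour as the predictor, and the omission of zero-motion prejudgment are simplifications, not the Verification Model implementation.

```python
import numpy as np

def sad(cur, ref, bx, by, mvx, mvy, bs=16):
    """Sum of absolute differences between a current block and a displaced reference block."""
    h, w = ref.shape
    x, y = bx + mvx, by + mvy
    if x < 0 or y < 0 or x + bs > w or y + bs > h:
        return np.inf
    return np.abs(cur[by:by+bs, bx:bx+bs].astype(int) -
                  ref[y:y+bs, x:x+bs].astype(int)).sum()

def arps(cur, ref, bx, by, pred_mv=(0, 0), bs=16):
    """Sketch of adaptive rood pattern search for one macroblock.
    pred_mv is the motion vector of a neighbouring macroblock (here assumed to be
    the left neighbour); the rood arm length is derived from its magnitude."""
    best_mv, best_cost = (0, 0), sad(cur, ref, bx, by, 0, 0, bs)
    arm = max(abs(pred_mv[0]), abs(pred_mv[1])) or 1
    # Stage 1: adaptive rood pattern (four arm points) plus the predicted MV itself.
    for mv in [(arm, 0), (-arm, 0), (0, arm), (0, -arm), pred_mv]:
        c = sad(cur, ref, bx, by, mv[0], mv[1], bs)
        if c < best_cost:
            best_cost, best_mv = c, mv
    # Stage 2: unit-size rood pattern, repeated until the centre point is the best.
    while True:
        cx, cy = best_mv
        improved = False
        for mv in [(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)]:
            c = sad(cur, ref, bx, by, mv[0], mv[1], bs)
            if c < best_cost:
                best_cost, best_mv, improved = c, mv, True
        if not improved:
            return best_mv, best_cost
```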
Design and implementation of H.264 based embedded video coding technology
NASA Astrophysics Data System (ADS)
Mao, Jian; Liu, Jinming; Zhang, Jiemin
2016-03-01
In this paper, an embedded system for remote online video monitoring was designed and developed to capture and record the real-time circumstances in an elevator. For the purpose of improving the efficiency of video acquisition and processing, the system selected the Samsung S5PV210 chip, which integrates a graphics processing unit, as the core processor. The video was encoded in H.264 format for efficient storage and transmission. Based on the S5PV210 chip, hardware video coding technology was researched, which is more efficient than software coding. Running tests proved that the hardware video coding technology can obviously reduce the cost of the system and obtain smoother video display. It can be widely applied to security supervision [1].
Automatic view synthesis by image-domain-warping.
Stefanoski, Nikolce; Wang, Oliver; Lang, Manuel; Greisen, Pierre; Heinzle, Simon; Smolic, Aljosa
2013-09-01
Today, stereoscopic 3D (S3D) cinema is already mainstream, and almost all new display devices for the home support S3D content. S3D distribution infrastructure to the home is already established partly in the form of 3D Blu-ray discs, video on demand services, or television channels. The necessity to wear glasses is, however, often considered an obstacle, which hinders broader acceptance of this technology in the home. Multiview autostereoscopic displays enable a glasses-free perception of S3D content for several observers simultaneously, and support head motion parallax in a limited range. To support multiview autostereoscopic displays in an already established S3D distribution infrastructure, a synthesis of new views from S3D video is needed. In this paper, a view synthesis method based on image-domain-warping (IDW) is presented that synthesizes new views directly from S3D video and functions completely automatically. IDW relies on an automatic and robust estimation of sparse disparities and image saliency information, and enforces target disparities in synthesized images using an image warping framework. Two configurations of the view synthesizer in the scope of a transmission and view synthesis framework are analyzed and evaluated. A transmission and view synthesis system that uses IDW was recently submitted to MPEG's call for proposals on 3D video technology, where it was ranked among the four best performing proposals.
NASA Astrophysics Data System (ADS)
da Silva, Thaísa Leal; Agostini, Luciano Volcan; da Silva Cruz, Luis A.
2014-05-01
Intra prediction is a very important tool in current video coding standards. High-efficiency video coding (HEVC) intra prediction presents relevant gains in encoding efficiency when compared to previous standards, but with a very important increase in the computational complexity since 33 directional angular modes must be evaluated. Motivated by this high complexity, this article presents a complexity reduction algorithm developed to reduce the HEVC intra mode decision complexity targeting multiview videos. The proposed algorithm presents an efficient fast intra prediction compliant with singleview and multiview video encoding. This fast solution defines a reduced subset of intra directions according to the video texture and it exploits the relationship between prediction units (PUs) of neighbor depth levels of the coding tree. This fast intra coding procedure is used to develop an inter-view prediction method, which exploits the relationship between the intra mode directions of adjacent views to further accelerate the intra prediction process in multiview video encoding applications. When compared to HEVC simulcast, our method achieves a complexity reduction of up to 47.77%, at the cost of an average BD-PSNR loss of 0.08 dB.
Imran, Noreen; Seet, Boon-Chong; Fong, A C M
2015-01-01
Distributed video coding (DVC) is a relatively new video coding architecture originated from two fundamental theorems namely, Slepian-Wolf and Wyner-Ziv. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews the state-of-the-art DVC architectures with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.
[Study on the stability of chicken egg yolk immunoglobulin (IgY) modified with mPEG].
Wang, Li-Ying; Ma, Mei-Hu; Huang, Qun; Shi, Xiao-Xia
2012-09-01
The objective of the present paper was to study the effect of monomethoxypolyethylene glycol (mPEG) modification on the stability of chicken IgY and to compare the stability of the modification products by Fourier transform infrared spectroscopy (FTIR), CD spectroscopy and fluorescence spectroscopy. NHS-mPEG was used to modify IgY after mPEG was activated with N-hydroxysuccinimide (NHS). The optimal reaction condition for modification was a 1:10 molar ratio of IgY to mPEG at pH 7 with reaction for 1 h, and the product was obtained with a modification rate of 20.56% and activity retention of 87.62%. In addition, the thermal and pH stability of IgY and mPEG-IgY was compared by spectroscopic methods. The results showed that the alpha-helix, beta-sheet, beta-turn, and random coil content of IgY changed from 14.5%, 42.1%, 6.2% and 37.2% to 1.6%, 55.25%, 5.8% and 37.5%, while those of mPEG-IgY changed from 12.9%, 42.7%, 6.3% and 38.1% to 3.1%, 50.5%, 7.2% and 39.2%, respectively, after incubating for 120 min at 70 degrees C. For the treatment with acid and base, similarly, the structural changes of mPEG-IgY were smaller than those of IgY. Thus, it is indicated that IgY modified by mPEG had greater stability.
Portrayal of Alcohol Brands Popular Among Underage Youth on YouTube: A Content Analysis.
Primack, Brian A; Colditz, Jason B; Rosen, Eva B; Giles, Leila M; Jackson, Kristina M; Kraemer, Kevin L
2017-09-01
We characterized leading YouTube videos featuring alcohol brand references and examined video characteristics associated with each brand and video category. We systematically captured the 137 most relevant and popular videos on YouTube portraying alcohol brands that are popular among underage youth. We used an iterative process for codebook development. We coded variables within domains of video type, character sociodemographics, production quality, and negative and positive associations with alcohol use. All variables were double coded, and Cohen's kappa was greater than .80 for all variables except age, which was eliminated. There were 96,860,936 combined views for all videos. The most common video type was "traditional advertisements," which comprised 40% of videos. Of the videos, 20% were "guides" and 10% focused on chugging a bottle of distilled spirits. While 95% of videos featured males, 40% featured females. Alcohol intoxication was present in 19% of videos. Aggression, addiction, and injuries were uncommonly identified (2%, 3%, and 4%, respectively), but 47% of videos contained humor. Traditional advertisements represented the majority of videos related to Bud Light (83%) but only 18% of Grey Goose and 8% of Hennessy videos. Intoxication was most present in chugging demonstrations (77%), whereas addiction was only portrayed in music videos (22%). Videos containing humor ranged from 11% for music-related videos to 77% for traditional advertisements. YouTube videos depicting the alcohol brands favored by underage youth are heavily viewed, and the majority are traditional or narrative advertisements. Understanding characteristics associated with different brands and video categories may aid in intervention development.
Adaptive format conversion for scalable video coding
NASA Astrophysics Data System (ADS)
Wan, Wade K.; Lim, Jae S.
2001-12-01
The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. In addition, AFC can also be used in addition to residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application that AFC is well-suited for is the migration path for digital television where AFC can provide immediate video scalability as well as assist future migrations.
NASA Astrophysics Data System (ADS)
Liu, Mei-Feng; Zhong, Guo-Yun; He, Xiao-Hai; Qing, Lin-Bo
2016-09-01
Currently, most video resources on line are encoded in the H.264/AVC format. More fluent video transmission can be obtained if these resources are encoded in the newest international video coding standard: high efficiency video coding (HEVC). In order to improve video transmission and storage on line, a transcoding method from H.264/AVC to HEVC is proposed. In this transcoding algorithm, the coding information of intraprediction, interprediction, and motion vectors (MVs) in the H.264/AVC video stream is used to accelerate the coding in HEVC. It is found through experiments that the region of interprediction in HEVC overlaps that in H.264/AVC. Therefore, the intraprediction for a region in HEVC that is interpredicted in H.264/AVC can be skipped to reduce coding complexity. Several macroblocks in H.264/AVC are combined into one PU in HEVC when the MV difference between two of the macroblocks in H.264/AVC is lower than a threshold. This method selects only one coding unit depth and one prediction unit (PU) mode to reduce the coding complexity. An MV interpolation method for the combined PU in HEVC is proposed according to the areas and distances between the center of one macroblock in H.264/AVC and that of the PU in HEVC. The predicted MV accelerates the motion estimation for HEVC coding. The simulation results show that our proposed algorithm achieves significant coding time reduction with a small loss in rate-distortion performance, compared to the existing transcoding algorithms and normal HEVC coding.
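As a rough illustration of the macroblock-merging idea described above, the sketch below (not the authors' algorithm; the threshold and the inverse-distance weighting are assumptions) merges the H.264 macroblocks covering one HEVC PU when their motion vectors are close, and otherwise forms a single PU-level vector weighted by each macroblock centre's distance to the PU centre.

```python
import math

def merge_or_interpolate_mv(mb_mvs, mb_centers, pu_center, mv_thresh=2.0):
    """mb_mvs: list of (mvx, mvy) for the H.264 macroblocks covering one HEVC PU.
    mb_centers: their centre coordinates; pu_center: centre of the PU.
    Returns a single predicted MV for the PU (all weights here are illustrative)."""
    # If every pair of MVs differs by less than the threshold, simply average them.
    close = all(abs(a[0] - b[0]) + abs(a[1] - b[1]) < mv_thresh
                for i, a in enumerate(mb_mvs) for b in mb_mvs[i + 1:])
    if close:
        n = len(mb_mvs)
        return (sum(v[0] for v in mb_mvs) / n, sum(v[1] for v in mb_mvs) / n)
    # Otherwise weight each macroblock's vector by the inverse of its distance to the PU centre.
    weights = []
    for (cx, cy) in mb_centers:
        d = math.hypot(cx - pu_center[0], cy - pu_center[1]) or 1.0
        weights.append(1.0 / d)
    wsum = sum(weights)
    mvx = sum(w * v[0] for w, v in zip(weights, mb_mvs)) / wsum
    mvy = sum(w * v[1] for w, v in zip(weights, mb_mvs)) / wsum
    return (mvx, mvy)
```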
Noise level and MPEG-2 encoder statistics
NASA Astrophysics Data System (ADS)
Lee, Jungwoo
1997-01-01
Most software in the movie and broadcasting industries is still in analog film or tape format, which typically contains random noise originating from film, the CCD camera, and tape recording. The performance of the MPEG-2 encoder may be significantly degraded by this noise. It is also affected by the scene type, which includes spatial and temporal activity. The statistical properties of noise originating from the camera and the tape player are analyzed, and models for the two types of noise are developed. The relationship between the noise, the scene type, and the encoder statistics of a number of MPEG-2 parameters such as motion vector magnitude, prediction error, and quantization scale is discussed. This analysis is intended to be a tool for designing robust MPEG encoding algorithms such as preprocessing and rate control.
Supramolecular gelation of a polymeric prodrug for its encapsulation and sustained release.
Ma, Dong; Zhang, Li-Ming
2011-09-12
A polymeric prodrug, PEGylated indomethacin (MPEG-indo), was prepared and then used to interact with α-cyclodextrin (α-CD) in their aqueous mixed system. This process could lead to the formation of supramolecular hydrogel under mild conditions and simultaneous encapsulation of MPEG-indo in the hydrogel matrix. For the formed supramolecular hydrogel, its gelation kinetics, mechanical strength, shear-thinning behavior and thixotropic response were investigated with respect to the effects of MPEG-indo and α-CD amounts by dynamic and steady rheological tests. Meanwhile, the possibility of using this hydrogel matrix as injectable drug delivery system was also explored. By in vitro release and cell viability tests, it was found that the encapsulated MPEG-indo could exhibit a controlled and sustained release behavior as well as maintain its biological activity.
Performance comparison of leading image codecs: H.264/AVC Intra, JPEG2000, and Microsoft HD Photo
NASA Astrophysics Data System (ADS)
Tran, Trac D.; Liu, Lijie; Topiwala, Pankaj
2007-09-01
This paper provides a detailed rate-distortion performance comparison between JPEG2000, Microsoft HD Photo, and H.264/AVC High Profile 4:4:4 I-frame coding for high-resolution still images and high-definition (HD) 1080p video sequences. This work is an extension to our previous comparative study published in previous SPIE conferences [1, 2]. Here we further optimize all three codecs for compression performance. Coding simulations are performed on a set of large-format color images captured from mainstream digital cameras and 1080p HD video sequences commonly used for H.264/AVC standardization work. Overall, our experimental results show that all three codecs offer very similar coding performances at the high-quality, high-resolution setting. Differences tend to be data-dependent: JPEG2000 with the wavelet technology tends to be the best performer with smooth spatial data; H.264/AVC High-Profile with advanced spatial prediction modes tends to cope best with more complex visual content; Microsoft HD Photo tends to be the most consistent across the board. For the still-image data sets, JPEG2000 offers the best R-D performance gains (around 0.2 to 1 dB in peak signal-to-noise ratio) over H.264/AVC High-Profile intra coding and Microsoft HD Photo. For the 1080p video data set, all three codecs offer very similar coding performance. As in [1, 2], neither do we consider scalability nor complexity in this study (JPEG2000 is operating in non-scalable, but optimal performance mode).
Influence of audio triggered emotional attention on video perception
NASA Astrophysics Data System (ADS)
Torres, Freddy; Kalva, Hari
2014-02-01
Perceptual video coding methods attempt to improve compression efficiency by discarding visual information not perceived by end users. Most current approaches for perceptual video coding use only visual features, ignoring the auditory component. Many psychophysical studies have demonstrated that auditory stimuli affect our visual perception. In this paper we present our study of audio-triggered emotional attention and its applicability to perceptual video coding. Experiments with movie clips show that the reaction time to detect video compression artifacts was longer when the video was presented with the audio information. The results reported are statistically significant with p=0.024.
System Synchronizes Recordings from Separated Video Cameras
NASA Technical Reports Server (NTRS)
Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.
2009-01-01
A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
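The "slightly more than 136 years" figure quoted above is consistent with a time code that counts seconds in a 32-bit field; the sketch below checks that arithmetic. The 32-bit width is an assumption made only to verify the quoted span, not a documented design detail of Geo-TimeCode.

```python
# If the time code counted seconds in a 32-bit field (an assumption used only to
# check the quoted figure), the code space would repeat after:
SECONDS_PER_YEAR = 365.25 * 24 * 3600          # average Julian year in seconds
span_years = 2**32 / SECONDS_PER_YEAR
print(f"2^32 seconds ~ {span_years:.1f} years")  # ~136.1 years, i.e. "slightly more than 136"
```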
Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder.
Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang
2018-07-01
Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed self-supervised video hashing (SSVH), which is able to capture the temporal nature of videos in an end-to-end learning to hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary auto-encoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with less computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world data sets show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the current best performance on the task of unsupervised video retrieval.
77 FR 64514 - Sunshine Act Meeting; Open Commission Meeting; Wednesday, October 17, 2012
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-22
Video coverage of the meeting will be broadcast live with open captioning over the Internet from the FCC. Meeting materials are available in accessible formats and alternative media, including large print/type, digital disk, and audio and video tape. [FR Doc. 2012-26060, Filed 10-18-12, 4:15 pm; Billing Code 6712-01-P]
Fuel Droplet Burning During Droplet Combustion Experiment
NASA Technical Reports Server (NTRS)
2003-01-01
Fuel ignites and burns in the Droplet Combustion Experiment (DCE) on STS-94 on July 4, 1997, MET 2/05:40 (approximate). The DCE was designed to investigate the fundamental combustion aspects of single, isolated droplets under different pressures and ambient oxygen concentrations for a range of droplet sizes varying between 2 and 5 mm. DCE used various fuels -- in drops ranging from 1 mm (0.04 inches) to 5 mm (0.2 inches) -- and mixtures of oxidizers and inert gases to learn more about the physics of combustion in the simplest burning configuration, a sphere. The experiment elapsed time is shown at the bottom of the composite image. The DCE principal investigator was Forman Williams, University of California, San Diego. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17, 1997). Advanced combustion experiments will be a part of investigations planned for the International Space Station. (1.4MB, 13-second MPEG, screen 320 x 240 pixels; downlinked video, higher quality not available) A still JPG composite of this movie is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300168.html.
Visual feature discrimination versus compression ratio for polygonal shape descriptors
NASA Astrophysics Data System (ADS)
Heuer, Joerg; Sanahuja, Francesc; Kaup, Andre
2000-10-01
In the last decade several methods for low level indexing of visual features appeared. Most often these were evaluated with respect to their discrimination power using measures like precision and recall. Accordingly, the targeted application was indexing of visual data within databases. During the standardization process of MPEG-7 the view on indexing of visual data changed, taking also communication aspects into account where coding efficiency is important. Even if the descriptors used for indexing are small compared to the size of images, it is recognized that there can be several descriptors linked to an image, characterizing different features and regions. Beside the importance of a small memory footprint for the transmission of the descriptor and the memory footprint in a database, eventually the search and filtering can be sped up by reducing the dimensionality of the descriptor if the metric of the matching can be adjusted. Based on a polygon shape descriptor presented for MPEG-7 this paper compares the discrimination power versus memory consumption of the descriptor. Different methods based on quantization are presented and their effect on the retrieval performance are measured. Finally an optimized computation of the descriptor is presented.
A novel multiple description scalable coding scheme for mobile wireless video transmission
NASA Astrophysics Data System (ADS)
Zheng, Haifeng; Yu, Lun; Chen, Chang Wen
2005-03-01
We propose in this paper a novel multiple description scalable coding (MDSC) scheme based on the in-band motion compensated temporal filtering (IBMCTF) technique in order to achieve high video coding performance and robust video transmission. The input video sequence is first split into equal-sized groups of frames (GOFs). Within a GOF, each frame is hierarchically decomposed by the discrete wavelet transform. Since there is a direct relationship between wavelet coefficients and what they represent in the image content after wavelet decomposition, we are able to reorganize the spatial orientation trees to generate multiple bit-streams, and we employ the SPIHT algorithm to achieve high coding efficiency. We have shown that multiple bit-stream transmission is very effective in combating error propagation in both Internet video streaming and mobile wireless video. Furthermore, we adopt the IBMCTF scheme to remove the redundancy of inter-frames along the temporal direction using motion compensated temporal filtering; thus high coding performance and flexible scalability can be provided by this scheme. In order to make compressed video resilient to channel errors and to guarantee robust video transmission over mobile wireless channels, we add redundancy to each bit-stream and apply an error concealment strategy for lost motion vectors. Unlike traditional multiple description schemes, the integration of these techniques enables us to generate more than two bit-streams, which may be more appropriate for multiple-antenna transmission of compressed video. Simulation results on standard video sequences have shown that the proposed scheme provides a flexible tradeoff between coding efficiency and error resilience.
Wu, Jindan; Mao, Zhengwei; Gao, Changyou
2012-01-01
Cell migration is an important biological activity. Regulating the migration of vascular smooth muscle cells (VSMCs) is critical in tissue engineering and therapy of cardiovascular disease. In this work, methoxy poly(ethylene glycol) (mPEG) brushes of different molecular weight (Mw 2 kDa, 5 kDa and 10 kDa) and grafting mass (0-859 ng/cm(2)) were prepared on aldehyde-activated glass slides, and were characterized by X-ray photoelectron spectrometer (XPS) and quartz crystal microbalance with dissipation (QCM-d). Adhesion and migration processes of VSMCs were studied as a function of different mPEG Mw and grafting density. We found that these events were mainly regulated by the grafting mass of mPEG regardless of mPEG Mw and grafting density. The VSMCs migrated on the surfaces randomly without a preferential direction. Their migration rates increased initially and then decreased along with the increase of mPEG grafting mass. The fastest rates (~24 μm/h) appeared on the mPEG brushes with grafting mass of 300-500 ng/cm(2) depending on the Mw. Cell adhesion strength, arrangement of cytoskeleton, and gene and protein expression levels of adhesion related proteins were studied to unveil the intrinsic mechanism. It was found that the cell-substrate interaction controlled the cell mobility, and the highest migration rate was achieved on the surfaces with appropriate adhesion force. Copyright © 2011 Elsevier Ltd. All rights reserved.
Expressing Youth Voice through Video Games and Coding
ERIC Educational Resources Information Center
Martin, Crystle
2017-01-01
A growing body of research focuses on the impact of video games and coding on learning. The research often elevates learning the technical skills associated with video games and coding or the importance of problem solving and computational thinking, which are, of course, necessary and relevant. However, the literature less often explores how young…
Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction
Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin
2016-01-01
High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems. PMID:27814367
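A toy illustration of the ROI/non-ROI differentiated quantization idea follows: coding tree units flagged as diagnostically relevant receive a lower quantization parameter (QP) than background units. The offsets and the clamping are placeholders, not the hierarchical coding method of the paper.

```python
def select_qp(base_qp: int, is_roi: bool, roi_offset: int = -4, bg_offset: int = 4) -> int:
    """Return the QP for one CTU: ROIs are quantized more finely than non-ROIs.
    Offsets are illustrative; the result is clamped to the H.265/HEVC QP range 0..51."""
    qp = base_qp + (roi_offset if is_roi else bg_offset)
    return max(0, min(51, qp))

# Example: with a base QP of 32, an ROI CTU is coded at QP 28 and background at QP 36.
print(select_qp(32, True), select_qp(32, False))
```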
Application discussion of source coding standard in voyage data recorder
NASA Astrophysics Data System (ADS)
Zong, Yonggang; Zhao, Xiandong
2018-04-01
This paper analyzes the disadvantages of the audio and video compression coding technology used by the voyage data recorder, taking into account the improved performance of audio and video acquisition equipment. An approach to improving the audio and video compression coding technology of the voyage data recorder is proposed, and the feasibility of adopting the new compression coding technology is analyzed from both economic and technical perspectives.
On mobile wireless ad hoc IP video transports
NASA Astrophysics Data System (ADS)
Kazantzidis, Matheos
2006-05-01
Multimedia transports in wireless, ad-hoc, multi-hop or mobile networks must be capable of obtaining information about the network and adaptively tuning sending and encoding parameters to the network response. Obtaining meaningful metrics to guide a stable congestion control mechanism in the transport (i.e., one that is passive, simple, end-to-end and network-technology independent) is a complex problem. Equally difficult is obtaining a reliable QoS metric that agrees with user perception in a client/server or distributed environment. Existing metrics, objective or subjective, are commonly applied before or after transmission to test or report on it, and require access to both the original and the transmitted frames. In this paper, we propose that efficient and successful video delivery and the optimization of overall network QoS require innovation in (a) a direct measurement of available and bottleneck capacity for congestion control and (b) a meaningful subjective QoS metric that is dynamically reported to the video sender. Once these are in place, a binomial (stable, fair and TCP-friendly) algorithm can be used to determine the sending rate and other packet video parameters. An adaptive MPEG codec can then continually test and fit its parameters and its temporal-spatial data-error control balance using the perceived-QoS dynamic feedback. We suggest a new measurement based on a packet dispersion technique that is independent of underlying network mechanisms. We then present a binomial control based on direct measurements. We implement a QoS metric that is known to agree with user perception (MPQM) in a client/server, distributed environment by using predetermined table lookups and characterization of video content.
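The "binomial" rate control referred to above belongs to the family of binomial congestion-control algorithms, in which the increase and decrease steps are power functions of the current rate. A minimal sketch of one such update rule follows; the parameter values are illustrative and not taken from the paper.

```python
def binomial_update(rate: float, loss: bool,
                    k: float = 0.5, l: float = 0.5,
                    alpha: float = 1.0, beta: float = 0.5,
                    min_rate: float = 0.1) -> float:
    """One step of a binomial congestion-control rule.
    No loss in the last interval:  rate += alpha / rate**k
    Loss reported:                 rate -= beta * rate**l
    k + l = 1 keeps the rule TCP-friendly; k = l = 0.5 is the SQRT variant,
    and k = 0, l = 1 recovers classic AIMD. Values here are illustrative."""
    if loss:
        rate -= beta * rate ** l
    else:
        rate += alpha / rate ** k
    return max(rate, min_rate)
```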
NASA Technical Reports Server (NTRS)
Ivancic, William D.; Frantz, Brian D.; Spells, Marcus J.
1998-01-01
Asynchronous transfer mode (ATM) quality of service (QoS) experiments were performed using MPEG-2 (ATM adaptation layer 5, AAL5) over ATM over an emulated satellite link. The purpose of these experiments was to determine the free-space link quality necessary to transmit high-quality multimedia information by using the ATM protocol. The detailed test plan and test configuration are described herein, as are the test results. MPEG-2 transport streams were baselined in an errored environment, followed by a series of tests using MPEG-2 over ATM. Errors were created both digitally and in an IF link by using a satellite modem and a commercial Gaussian noise test set, for two different MPEG-2 decoder implementations. The results show that ITU-T Recommendation I.356 Class 1 (stringent) ATM applications will require better link quality than currently specified; in particular, cell loss ratios of better than 1.0 x 10(exp -8) and cell error ratios of better than 1.0 x 10(exp -7) are needed. These tests were conducted at the NASA Lewis Research Center in support of satellite-ATM interoperability research.
MPEG content summarization based on compressed domain feature analysis
NASA Astrophysics Data System (ADS)
Sugano, Masaru; Nakajima, Yasuyuki; Yanagihara, Hiromasa
2003-11-01
This paper addresses automatic summarization of MPEG audiovisual content on compressed domain. By analyzing semantically important low-level and mid-level audiovisual features, our method universally summarizes the MPEG-1/-2 contents in the form of digest or highlight. The former is a shortened version of an original, while the latter is an aggregation of important or interesting events. In our proposal, first, the incoming MPEG stream is segmented into shots and the above features are derived from each shot. Then the features are adaptively evaluated in an integrated manner, and finally the qualified shots are aggregated into a summary. Since all the processes are performed completely on compressed domain, summarization is achieved at very low computational cost. The experimental results show that news highlights and sports highlights in TV baseball games can be successfully extracted according to simple shot transition models. As for digest extraction, subjective evaluation proves that meaningful shots are extracted from content without a priori knowledge, even if it contains multiple genres of programs. Our method also has the advantage of generating an MPEG-7 based description such as summary and audiovisual segments in the course of summarization.
Motion compensated shape error concealment.
Schuster, Guido M; Katsaggelos, Aggelos K
2006-02-01
The introduction of Video Objects (VOs) is one of the innovations of MPEG-4. The alpha-plane of a VO defines its shape at a given instance in time and hence determines the boundary of its texture. In packet-based networks, shape, motion, and texture are subject to loss. While there has been considerable attention paid to the concealment of texture and motion errors, little has been done in the field of shape error concealment. In this paper we propose a post-processing shape error concealment technique that uses the motion compensated boundary information of the previously received alpha-plane. The proposed approach is based on matching received boundary segments in the current frame to the boundary in the previous frame. This matching is achieved by finding a maximally smooth motion vector field. After the current boundary segments are matched to the previous boundary, the missing boundary pieces are reconstructed by motion compensation. Experimental results demonstrating the performance of the proposed motion compensated shape error concealment method, and comparing it with the previously proposed weighted side matching method are presented.
Wavelet-based audio embedding and audio/video compression
NASA Astrophysics Data System (ADS)
Mendenhall, Michael J.; Claypoole, Roger L., Jr.
2001-12-01
Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
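For reference, the median PSNR figure quoted above is the standard per-frame measure sketched below; an 8-bit peak value of 255 is assumed. This is the generic definition, not the authors' implementation.

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two frames of identical shape."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```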
Subjective quality evaluation of low-bit-rate video
NASA Astrophysics Data System (ADS)
Masry, Mark; Hemami, Sheila S.; Osberger, Wilfried M.; Rohaly, Ann M.
2001-06-01
A subjective quality evaluation was performed to qualify viewer responses to visual defects that appear in low-bit-rate video at full and reduced frame rates. The stimuli were eight sequences compressed by three motion-compensated encoders - Sorenson Video, H.263+ and a wavelet-based coder - operating at five bit/frame rate combinations. The stimulus sequences exhibited obvious coding artifacts whose nature differed across the three coders. The subjective evaluation was performed using the Single Stimulus Continuous Quality Evaluation method of ITU-R Rec. BT.500-8. Viewers watched concatenated coded test sequences and continuously registered the perceived quality using a slider device. Data from 19 viewers were collected. An analysis of their responses to the presence of various artifacts across the range of possible coding conditions and content is presented. The effects of blockiness and blurriness on perceived quality are examined. The effects of changes in frame rate on perceived quality are found to be related to the nature of the motion in the sequence.
Dynamic quality of service differentiation using fixed code weight in optical CDMA networks
NASA Astrophysics Data System (ADS)
Kakaee, Majid H.; Essa, Shawnim I.; Abd, Thanaa H.; Seyedzadeh, Saleh
2015-11-01
The emergence of network-driven applications, such as the internet, video conferencing, and online gaming, brings the need for network environments capable of providing diverse Quality of Service (QoS). In this paper, a new code family of novel spreading sequences, called a Multi-Service (MS) code, has been constructed to support multiple services in an Optical Code Division Multiple Access (CDMA) system. The proposed method uses a fixed weight for all services, while reducing the interfering codewords for the users requiring higher QoS. The performance of the proposed code is demonstrated using mathematical analysis. It is shown that the total number of served users with a satisfactory BER of 10^-9 using NB=2 is 82, while it is only 36 and 10 when NB=3 and 4, respectively. The developed MS code is compared with variable-weight codes such as Variable Weight-Khazani Syed (VW-KS) and Multi-Weight-Random Diagonal (MW-RD). Different numbers of basic users (NB) are used to support triple-play services (audio, data and video) with different QoS requirements. Furthermore, with reference to BERs of 10^-12, 10^-9, and 10^-3 for video, data and audio, respectively, the system can support up to 45 total users. Hence, the results show that the technique can clearly provide relative QoS differentiation, where a lower number of basic users can support a larger number of subscribers as well as better performance in terms of an acceptable BER of 10^-9 at fixed code weight.
Mixture block coding with progressive transmission in packet video. Appendix 1: Item 2. M.S. Thesis
NASA Technical Reports Server (NTRS)
Chen, Yun-Chung
1989-01-01
Video transmission will become an important part of future multimedia communication because of dramatically increasing user demand for video, and rapid evolution of coding algorithm and VLSI technology. Video transmission will be part of the broadband-integrated services digital network (B-ISDN). Asynchronous transfer mode (ATM) is a viable candidate for implementation of B-ISDN due to its inherent flexibility, service independency, and high performance. According to the characteristics of ATM, the information has to be coded into discrete cells which travel independently in the packet switching network. A practical realization of an ATM video codec called Mixture Block Coding with Progressive Transmission (MBCPT) is presented. This variable bit rate coding algorithm shows how a constant quality performance can be obtained according to user demand. Interactions between codec and network are emphasized including packetization, service synchronization, flow control, and error recovery. Finally, some simulation results based on MBCPT coding with error recovery are presented.
MPEG-7: standard metadata for multimedia content
NASA Astrophysics Data System (ADS)
Chang, Wo
2005-08-01
The eXtensible Markup Language (XML) metadata technology for describing media contents has emerged as a dominant mode of making media searchable for both human and machine consumption. To realize this premise, many online Web applications are pushing this concept to its fullest potential. However, a good metadata model does require a robust standardization effort so that the metadata content and its structure can reach maximum usage between various applications. An effective media content description technology should also use standard metadata structures, especially when dealing with various multimedia contents. A new metadata technology called MPEG-7 content description has emerged from the ISO MPEG standards body with the charter of defining standard metadata to describe audiovisual content. This paper will give an overview of MPEG-7 technology and what impact it can bring to the next generation of multimedia indexing and retrieval applications.
Fast depth decision for HEVC inter prediction based on spatial and temporal correlation
NASA Astrophysics Data System (ADS)
Chen, Gaoxing; Liu, Zhenyu; Ikenaga, Takeshi
2016-07-01
High efficiency video coding (HEVC) is a video compression standard that outperforms its predecessor H.264/AVC by doubling the compression efficiency. To enhance the compression accuracy, partition sizes range from 4x4 to 64x64 in HEVC. However, the manifold partition sizes dramatically increase the encoding complexity. This paper proposes a fast depth decision based on spatial and temporal correlation. The spatial correlation utilizes the coding tree unit (CTU) splitting information, and the temporal correlation utilizes the CTU referenced by the motion vector predictor in inter prediction, to determine the maximum depth in each CTU. Experimental results show that the proposed method saves about 29.1% of the original processing time with a 0.9% BD-bitrate increase on average.
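The sketch below illustrates the neighbour-based depth bound in its simplest form: the maximum CU depth searched for the current CTU is limited by the depths actually used in the left, upper and temporally co-located CTUs. The max()+margin rule is an assumption for illustration, not the exact weighting used in the paper.

```python
def predict_max_depth(left_depth, above_depth, colocated_depth, margin=1, max_allowed=3):
    """Bound the CU depth search for the current CTU using spatial and temporal neighbours.
    Inputs are the maximum split depths (0..3) used by those CTUs, or None if unavailable.
    The +margin slack and the simple max() rule are illustrative choices."""
    known = [d for d in (left_depth, above_depth, colocated_depth) if d is not None]
    if not known:                        # no neighbour information: search the full range
        return max_allowed
    return min(max(known) + margin, max_allowed)

# Example: neighbours stopped at depths 0, 1 and 1 -> search only up to depth 2,
# so depth-3 (8x8 CU) partitioning is skipped for this CTU.
print(predict_max_depth(0, 1, 1))
```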
Gao, Mingming; Tian, Hong; Ma, Chen; Gao, Xiangdong; Guo, Wei; Yao, Wenbing
2010-09-01
Glucagon-like peptide-1 (GLP-1) is attracting increasing interest on account of its prominent benefits in type 2 diabetes. However, its clinical application is limited because of short biological half-life. This study was designed to produce a C-terminal site-specific PEGylated analog of cysteine-mutated GLP-1 (cGLP-1) to prolong its action. The gene of cGLP-1 was inserted into pET32a to construct a thioredoxinA fusion protein. After expression in BL21 (DE3) strain, the fusion protein was purified with Ni-affinity chromatography and then was PEGylated with methoxy-polyethylene glycol-maleimide (mPEG(10K)-MAL). The PEGylated fusion protein was purified with anion exchange chromatography and then was cleaved by enterokinase. The digested product was further purified with reverse-phase chromatography. Finally, 8.7 mg mPEG(10K)-cGLP-1 with a purity of up to 98% was obtained from the original 500 ml culture. The circular dichroism spectra indicated that mPEG(10K)-cGLP-1 maintained the secondary structure of native GLP-1. As compared with that of native GLP-1, the plasma glucose lowering activity of mPEG(10K)-cGLP-1 was significantly extended. These results suggest that our method will be useful in obtaining a large quantity of mPEG(10K)-cGLP-1 for further study and mPEG(10K)-cGLP-1 might find a role in the therapy of type 2 diabetes through C-terminal site-specific PEGylation.
"Physics on Stage" Festival Video Now Available
NASA Astrophysics Data System (ADS)
2001-01-01
ESO Video Clip 01/01 is issued on the web in conjunction with the release of an 18-min documentary video from the Science Festival of the "Physics On Stage" programme. This unique event took place during November 6-11, 2000, on the CERN premises at the French-Swiss border near Geneva, and formed part of the European Science and Technology Week 2000, an initiative by the European Commission to raise the public awareness of science in Europe. Physics On Stage and the Science Festival were jointly organised by CERN, ESA and ESO, in collaboration with the European Physical Society (EPS) and the European Association for Astronomy Education (EAAE) and national organisations in about 25 European countries. During this final phase of the yearlong Physics On Stage programme, more than 500 physics teachers, government officials and media representatives gathered at CERN to discuss different aspects of physics education. The meeting was particularly timely in view of the current decline of interest in physics and technology among Europe's citizens, especially schoolchildren. It included spectacular demonstrations of new educational materials and methods. An 18-min video is now available that documents this event. It conveys the great enthusiasm of the many participants who spent an extremely fruitful week, meeting and exchanging information with colleagues from all over the continent. It shows the various types of activities that took place, from the central "fair" with national and organisational booths to the exciting performances and other dramatic presentations. Based on the outcome of 13 workshops that focussed on different subject matters, a series of very useful recommendations was passed at the final session. The Science Festival was also visited by several high-ranking officials, including the European Commissioner for Research, Philippe Busquin. Full reports from the Festival will soon become available from the International Steering Committee. More information is available on the "Physics on Stage" webpages at CERN, ESA and ESO. Note also the brief account published in the December 2000 issue of the ESO Messenger. The present video clip is available in four versions: two MPEG files and two streamer versions of different sizes; the latter require RealPlayer software. Video Clip 01/01 may be freely reproduced. Tapes of this video clip and the 18-min video, suitable for transmission and in full professional quality (Betacam, etc.), are available for broadcasters upon request; please contact the ESO EPR Department for more details. Most of the ESO PR Video Clips at the ESO website provide "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was: ESO PR Video Clip 06/00 about Fourth Light at Paranal! (4 September 2000). General information is available on the web about ESO videos.
Apply network coding for H.264/SVC multicasting
NASA Astrophysics Data System (ADS)
Wang, Hui; Kuo, C.-C. Jay
2008-08-01
In a packet erasure network environment, video streaming benefits from error control in two ways to achieve graceful degradation. The first approach is application-level (or link-level) forward error correction (FEC) to provide erasure protection. The second error control approach is error concealment at the decoder end to compensate for lost packets. A large amount of research work has been done in the above two areas. More recently, network coding (NC) techniques have been proposed for efficient data multicast over networks. It was shown in our previous work that multicast video streaming benefits from NC through improved throughput. An algebraic model is given to analyze the performance in this work. By exploiting the linear combination of video packets along nodes in a network and the SVC video format, the system achieves path diversity automatically and enables efficient video delivery to heterogeneous receivers over packet erasure channels. The application of network coding can protect video packets against the erasure network environment. However, the rank deficiency problem of random linear network coding makes error concealment inefficient. It is shown by computer simulation that the proposed NC video multicast scheme enables heterogeneous receiving according to the receivers' capacity constraints, but special design is needed to improve the video transmission performance when applying network coding.
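To make the "linear combination of video packets" and the rank-deficiency issue concrete, here is a small sketch of random linear network coding over GF(2), i.e. XOR combinations of equal-sized packets, together with a rank check on the received coefficient vectors. Practical systems typically operate over GF(2^8), so this is only an illustration of the mechanism, not the scheme proposed in the paper.

```python
import random

def encode(packets, rng=random):
    """Produce one coded packet: a random GF(2) combination (XOR) of the source packets,
    returned together with its coefficient vector (the 'coding header')."""
    coeffs = [rng.randint(0, 1) for _ in packets]
    if not any(coeffs):
        coeffs[rng.randrange(len(coeffs))] = 1        # avoid the useless all-zero combination
    size = len(packets[0])
    payload = bytearray(size)
    for c, p in zip(coeffs, packets):
        if c:
            for i in range(size):
                payload[i] ^= p[i]
    return coeffs, bytes(payload)

def gf2_rank(vectors):
    """Rank of the received coefficient vectors over GF(2). Decoding needs full rank;
    falling short of it is exactly the 'rank deficiency' problem mentioned above."""
    rows = [int("".join(map(str, v)), 2) for v in vectors]
    rank, width = 0, max(len(v) for v in vectors)
    for bit in reversed(range(width)):
        pivot = next((r for r in rows if (r >> bit) & 1), None)
        if pivot is None:
            continue
        rank += 1
        rows.remove(pivot)
        rows = [r ^ pivot if (r >> bit) & 1 else r for r in rows]
    return rank
```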
NASA Astrophysics Data System (ADS)
Boumehrez, Farouk; Brai, Radhia; Doghmane, Noureddine; Mansouri, Khaled
2018-01-01
Recently, video streaming has attracted much attention and interest due to its capability to process and transmit large amounts of data. We propose a quality of experience (QoE) model relying on a high efficiency video coding (HEVC) encoder adaptation scheme, in turn based on multiple description coding (MDC), for video streaming. The main contributions of the paper are: (1) a performance evaluation of the new and emerging video coding standard HEVC/H.265, based on the variation of quantization parameter (QP) values for different video contents, to deduce their influence on the sequence to be transmitted; (2) an investigation of QoE support for multimedia applications in wireless networks, in which we inspect the impact of packet loss on the QoE of transmitted video sequences; (3) an HEVC encoder parameter adaptation scheme based on MDC, modeled with the encoder parameters and an objective QoE model. A comparative study revealed that the proposed MDC approach is effective for improving the transmission, with a peak signal-to-noise ratio (PSNR) gain of about 2 to 3 dB. Results show that a good choice of QP value can compensate for transmission channel effects and improve received video quality, although HEVC/H.265 is also sensitive to packet loss. The obtained results show the efficiency of our proposed method in terms of PSNR and mean opinion score.
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2004-12-01
Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradation due to packet loss and bit errors in the payload are quantitatively evaluated and their effects are assessed. We propose the use of a concatenated forward error correction (FEC) coding scheme employing Reed-Solomon (RS) codes and rate-compatible punctured convolutional (RCPC) codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC) approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection for the corresponding channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.
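The overhead of a concatenated scheme such as the one above is often summarised by its effective code rate, the product of the outer (RS) and inner (RCPC) rates. The figures in the sketch below are illustrative examples, not the configuration used in the paper.

```python
def effective_rate(rs_k: int, rs_n: int, rcpc_rate: float) -> float:
    """Effective code rate of an RS(n, k) outer code concatenated with an inner code of the given rate."""
    return (rs_k / rs_n) * rcpc_rate

# Example: an RS(204, 188) outer code with a rate-2/3 punctured convolutional inner code.
r = effective_rate(188, 204, 2 / 3)
print(f"effective rate = {r:.3f}, i.e. about {1 / r:.2f}x bandwidth expansion")
```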
YouTube as a source of information on skin bleaching: a content analysis.
Basch, C H; Brown, A A; Fullwood, M D; Clark, A; Fung, I C-H; Yin, J
2018-06-01
Skin bleaching is a common, yet potentially harmful body modification practice. To describe the characteristics of the most widely viewed YouTube™ videos related to skin bleaching. The search term 'skin bleaching' was used to identify the 100 most popular English-language YouTube videos relating to the topic. Both descriptive and specific information were noted. Among the 100 manually coded skin-bleaching YouTube videos in English, there were 21 consumer-created videos, 45 internet-based news videos, 30 television news videos and 4 professional videos. Excluding the 4 professional videos, we limited our content categorization and regression analysis to 96 videos. Approximately 93% (89/96) of the most widely viewed videos mentioned changing how you look and 74% (71/96) focused on bleaching the whole body. Of the 96 videos, 63 (66%) of videos showed/mentioned a transformation. Only about 14% (13/96) mentioned that skin bleaching is unsafe. The likelihood of a video selling a skin bleaching product was 17 times higher in internet videos compared with consumer videos (OR = 17.00, 95% CI 4.58-63.09, P < 0.001). Consumer-generated videos were about seven times more likely to mention making bleaching products at home compared with internet-based news videos (OR = 6.86, 95% CI 1.77-26.59, P < 0.01). The most viewed YouTube video on skin bleaching was uploaded by an internet source. Videos made by television sources mentioned more information about skin bleaching being unsafe, while consumer-generated videos focused more on making skin-bleaching products at home. © 2017 British Association of Dermatologists.
Real-time transmission of digital video using variable-length coding
NASA Technical Reports Server (NTRS)
Bizon, Thomas P.; Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.
1993-01-01
Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
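As a rough illustration of the variable-length principle this abstract describes, the following Python sketch builds a Huffman codebook from a run of differential (DPCM-style) residuals; the sample values, helper names, and tie-breaking rule are illustrative assumptions, not the hardware codec of the paper.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman codebook from a sequence of symbols (e.g. DPCM residuals)."""
    freq = Counter(symbols)
    # Heap entries are (frequency, tie-breaker, tree); a tree is a symbol or a (left, right) pair.
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next_id, (left, right)))
        next_id += 1
    codebook = {}
    def walk(tree, prefix=""):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codebook[tree] = prefix or "0"
    walk(heap[0][2])
    return codebook

# Highly predictable data: small residuals such as 0 dominate and get the shortest codewords.
residuals = [0, 0, 0, 1, 0, -1, 0, 0, 2, 0, -1, 0]
print(huffman_code(residuals))
```

The compression gain comes exactly from the situation the abstract describes: the frequent residual values receive the shortest codewords, outweighing the longer codewords spent on rare values.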
MPEG-7-based description infrastructure for an audiovisual content analysis and retrieval system
NASA Astrophysics Data System (ADS)
Bailer, Werner; Schallauer, Peter; Hausenblas, Michael; Thallinger, Georg
2005-01-01
We present a case study of establishing a description infrastructure for an audiovisual content-analysis and retrieval system. The description infrastructure consists of an internal metadata model and an access tool for using it. Based on an analysis of requirements, we have selected, out of a set of candidates, MPEG-7 as the basis of our metadata model. The openness and generality of MPEG-7 allow using it in a broad range of applications, but increase complexity and hinder interoperability. Profiling has been proposed as a solution, with the focus on selecting and constraining description tools. Semantic constraints are currently only described in textual form. Conformance in terms of semantics can thus not be evaluated automatically, and mappings between different profiles can only be defined manually. As a solution, we propose an approach to formalize the semantic constraints of an MPEG-7 profile using a formal vocabulary expressed in OWL, which allows automated processing of semantic constraints. We have defined the Detailed Audiovisual Profile as the profile to be used in our metadata model, and we show how some of the semantic constraints of this profile can be formulated using ontologies. To work practically with the metadata model, we have implemented an MPEG-7 library and a client/server document access infrastructure.
Significance of MPEG-7 textural features for improved mass detection in mammography.
Eltonsy, Nevine H; Tourassi, Georgia D; Fadeev, Aleksey; Elmaghraby, Adel S
2006-01-01
The purpose of the study is to investigate the significance of MPEG-7 textural features for improving the detection of masses in screening mammograms. The detection scheme was originally based on morphological directional neighborhood features extracted from mammographic regions of interest (ROIs). Receiver operating characteristic (ROC) analysis was performed to evaluate the performance of each set of features independently and merged into a back-propagation artificial neural network (BPANN) using the leave-one-out sampling scheme (LOOSS). The study was based on a database of 668 mammographic ROIs (340 depicting cancer regions and 328 depicting normal parenchyma). Overall, the ROC area index of the BPANN using the directional morphological features was Az = 0.85 ± 0.01. The MPEG-7 edge histogram descriptor-based BPANN showed an ROC area index of Az = 0.71 ± 0.01, while homogeneous textural descriptors using 30 and 120 channels helped the BPANN achieve similar ROC area indexes of Az = 0.882 ± 0.02 and Az = 0.877 ± 0.01, respectively. After merging the MPEG-7 homogeneous textural features with the directional neighborhood features, the performance of the BPANN increased, providing an ROC area index of Az = 0.91 ± 0.01. MPEG-7 homogeneous textural descriptors significantly improved the morphology-based detection scheme.
The emerging High Efficiency Video Coding standard (HEVC)
NASA Astrophysics Data System (ADS)
Raja, Gulistan; Khan, Awais
2013-12-01
High definition video (HDV) is becoming more popular every day. This paper describes a performance analysis of the latest video coding standard, known as High Efficiency Video Coding (HEVC). HEVC is designed to fulfil all the requirements of future high definition video. In this paper, three configurations (intra only, low delay and random access) of HEVC are analyzed using various 480p, 720p and 1080p high definition test video sequences. Simulation results show the superior objective and subjective quality of HEVC.
Spatial resampling of IDR frames for low bitrate video coding with HEVC
NASA Astrophysics Data System (ADS)
Hosking, Brett; Agrafiotis, Dimitris; Bull, David; Easton, Nick
2015-03-01
As the demand for higher quality and higher resolution video increases, many applications fail to meet this demand due to low bandwidth restrictions. One factor contributing to this problem is the high bitrate requirement of the intra-coded Instantaneous Decoding Refresh (IDR) frames featured in all video coding standards. Frequent coding of IDR frames is essential for error resilience in order to prevent error propagation. However, as each one consumes a large portion of the available bitrate, the quality of subsequent coded frames is hindered by high levels of compression. This work presents a new technique, known as Spatial Resampling of IDR Frames (SRIF), and shows how it can improve rate-distortion performance by providing a higher and more consistent level of video quality at low bitrates.
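A minimal sketch of the spatial-resampling idea (not the SRIF algorithm itself): downsample the IDR frame before intra coding to cut its bit cost, then upsample it again at the decoder. The pooling factor and nearest-neighbour interpolation are assumptions made here for brevity; a real system would use better resampling filters.

```python
import numpy as np

def downsample(frame, factor=2):
    """Average-pool the intra (IDR) frame before encoding to reduce its bit cost."""
    h, w = frame.shape
    return frame[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(frame, factor=2):
    """Nearest-neighbour reconstruction at the decoder (illustrative only)."""
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

idr = np.random.default_rng(0).integers(0, 256, (16, 16)).astype(float)
reconstructed = upsample(downsample(idr))
print(idr.shape, reconstructed.shape)
```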
Wu, Shou-Cheng; Chen, Yu-Jen; Lin, Yi-Jan; Wu, Tung-Ho; Wang, Yun-Ming
2013-11-27
In search of a unique and reliable contrast agent targeting pancreatic adenocarcinoma, new multifunctional nanoparticles (MnMEIO-silane-NH2-(MUC4)-mPEG NPs) were successfully developed in this study. Mucin4 expression levels were determined through different imaging studies in a panel of pancreatic tumor cells (HPAC, BxPC-3, and Panc-1), both in vitro and in vivo. The in vitro T2-weighted MR imaging study in HPAC and Panc-1 tumor cells treated with NPs showed -89.1 ± 5.7% and -0.9 ± 0.2% contrast enhancement, whereas in the in vivo study it was found to be -81.5 ± 4.5% versus -19.6 ± 5.2% (24 h postinjection, 7.0 T), respectively. The T2-weighted MR and optical imaging studies revealed that the novel contrast agent can specifically and effectively target mucin4-expressing tumors in nude mice. Hence, it is suggested that MnMEIO-silane-NH2-(MUC4)-mPEG NPs are able to provide efficient and targeted delivery of MUC4 antibodies to mucin4-expressing pancreatic tumors.
Resource allocation for error resilient video coding over AWGN using optimization approach.
An, Cheolhong; Nguyen, Truong Q
2008-12-01
The number of slices for error resilient video coding is jointly optimized with an 802.11a-like media access control and physical layer, using automatic repeat request and a rate-compatible punctured convolutional code over an additive white Gaussian noise channel, as well as with the channel time allocation for time division multiple access. For error resilient video coding, the relation between the number of slices and coding efficiency is analyzed and formulated as a mathematical model. This model is applied to the joint optimization problem, and the problem is solved by a convex optimization method such as the primal-dual decomposition method. We compare the performance of a video communication system which uses the optimal number of slices with one that codes a picture as one slice. Numerical examples show that the end-to-end distortion of the utility functions can be significantly reduced with the optimal number of slices per picture, especially at low signal-to-noise ratio.
H.264 Layered Coded Video over Wireless Networks: Channel Coding and Modulation Constraints
NASA Astrophysics Data System (ADS)
Ghandi, M. M.; Barmada, B.; Jones, E. V.; Ghanbari, M.
2006-12-01
This paper considers the prioritised transmission of H.264 layered coded video over wireless channels. For appropriate protection of video data, methods such as prioritised forward error correction coding (FEC) or hierarchical quadrature amplitude modulation (HQAM) can be employed, but each imposes system constraints. FEC provides good protection but at the price of a high overhead and complexity. HQAM is less complex and does not introduce any overhead, but permits only fixed data ratios between the priority layers. Such constraints are analysed and practical solutions are proposed for layered transmission of data-partitioned and SNR-scalable coded video where combinations of HQAM and FEC are used to exploit the advantages of both coding methods. Simulation results show that the flexibility of SNR scalability and absence of picture drift imply that SNR scalability as modelled is superior to data partitioning in such applications.
Wang, Tiechuang; Yin, Xiaodong; Lu, Yaping; Shan, Weiguang; Xiong, Subin
2012-01-01
Emodin is a multifunctional Chinese traditional medicine with poor water solubility. D-α-tocopheryl polyethylene glycol 1000 succinate (TPGS) is a pegylated vitamin E derivate. In this study, a novel liposomal-emodin-conjugating TPGS was formulated and compared with methoxypolyethyleneglycol 2000-derivatized distearoyl-phosphatidylethanolamine (mPEG2000–DSPE) liposomal emodin. TPGS improved the encapsulation efficiency and stability of emodin egg phosphatidylcholine/cholesterol liposomes. A high encapsulation efficiency of 95.2% ± 3.0%, particle size of 121.1 ± 44.9 nm, spherical ultrastructure, and sustained in vitro release of TPGS liposomal emodin were observed; these were similar to mPEG2000–DSPE liposomes. Only the zeta potential of −13.1 ± 2.7 mV was significantly different to that for mPEG2000–DSPE liposomes. Compared to mPEG2000–DSPE liposomes, TPGS liposomes improved the cytotoxicity of emodin on leukemia cells by regulating the protein levels of myeloid cell leukemia 1 (Mcl-1), B-cell lymphoma-2 (Bcl-2) and Bcl-2-associated X protein, which was further enhanced by transferrin. TPGS liposomes prolonged the circulation time of emodin in the blood, with the area under the concentration–time curve (AUC) 1.7 times larger than for free emodin and 0.91 times larger than for mPEG2000–DSPE liposomes. In addition, TPGS liposomes showed higher AUC for emodin in the lung and kidney than for mPEG2000–DSPE liposomes, and both liposomes elevated the amount of emodin in the heart. Overall, TPGS is a pegylated agent that could potentially be used to compose a stable liposomal emodin with enhanced therapeutics. PMID:22661889
Bringing "Scientific Expeditions" Into the Schools
NASA Technical Reports Server (NTRS)
Watson, Val; Lasinski, T. A. (Technical Monitor)
1995-01-01
Two new technologies, the FASTexpedition and Remote FAST, have been developed that provide remote, 3D, high resolution, dynamic, interactive viewing of scientific data (such as simulations or measurements of fluid dynamics). The FASTexpedition permits one to access scientific data from the World Wide Web, take guided expeditions through the data, and continue with self controlled expeditions through the data. Remote FAST permits collaborators at remote sites to simultaneously view an analysis of scientific data being controlled by one of the collaborators. Control can be transferred between sites. These technologies are now being used for remote collaboration in joint university, industry, and NASA projects in computational fluid dynamics (CFD) and wind tunnel testing. Also, NASA Ames Research Center has initiated a project to make scientific data and guided expeditions through the data available as FASTexpeditions on the World Wide Web for educational purposes. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit). The advantages of this new technology over using video format are: 1. The visual is much higher in resolution (1280x1024 pixels with 24 bits of color) than typical video format transmitted over the network. 2. The form of the visualization can be controlled interactively (because the viewer is interactively controlling the visualization tool running on his workstation). 3. A rich variety of guided expeditions through the data can be included easily. 4. A capability is provided for other sites to see a visual analysis of one site as the analysis is interactively performed. Control of the analysis can be passed from site to site. 5. The scenes can be viewed in 3D using stereo vision. 6. The network bandwidth used for the visualization using this new technology is much smaller than when using video format. (The measured peak bandwidth used was 1 Kbit/sec whereas the measured bandwidth for a small video picture was 500 Kbits/sec.)
Fast 3D Net Expeditions: Tools for Effective Scientific Collaboration on the World Wide Web
NASA Technical Reports Server (NTRS)
Watson, Val; Chancellor, Marisa K. (Technical Monitor)
1996-01-01
Two new technologies, the FASTexpedition and Remote FAST, have been developed that provide remote, 3D (three dimensional), high resolution, dynamic, interactive viewing of scientific data. The FASTexpedition permits one to access scientific data from the World Wide Web, take guided expeditions through the data, and continue with self controlled expeditions through the data. Remote FAST permits collaborators at remote sites to simultaneously view an analysis of scientific data being controlled by one of the collaborators. Control can be transferred between sites. These technologies are now being used for remote collaboration in joint university, industry, and NASA projects. Also, NASA Ames Research Center has initiated a project to make scientific data and guided expeditions through the data available as FASTexpeditions on the World Wide Web for educational purposes. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG (Moving Picture Experts Group) movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit). The advantages of this new technology over using video format are: (1) The visual is much higher in resolution (1280x1024 pixels with 24 bits of color) than typical video format transmitted over the network. (2) The form of the visualization can be controlled interactively (because the viewer is interactively controlling the visualization tool running on his workstation). (3) A rich variety of guided expeditions through the data can be included easily. (4) A capability is provided for other sites to see a visual analysis of one site as the analysis is interactively performed. Control of the analysis can be passed from site to site. (5) The scenes can be viewed in 3D using stereo vision. (6) The network bandwidth for the visualization using this new technology is much smaller than when using video format. (The measured peak bandwidth used was 1 Kbit/sec whereas the measured bandwidth for a small video picture was 500 Kbits/sec.) This talk will illustrate the use of these new technologies and present a proposal for using these technologies to improve science education.
Video coding for 3D-HEVC based on saliency information
NASA Astrophysics Data System (ADS)
Yu, Fang; An, Ping; Yang, Chao; You, Zhixiang; Shen, Liquan
2016-11-01
As an extension of High Efficiency Video Coding (HEVC), 3D-HEVC has been widely researched under the impetus of the new generation coding standard in recent years. Compared with H.264/AVC, its compression efficiency is doubled while keeping the same video quality. However, its higher encoding complexity and longer encoding time are not negligible. To reduce the computational complexity and guarantee the subjective quality of virtual views, this paper presents a novel video coding method for 3D-HEVC based on saliency information, which is an important part of the human visual system (HVS). First of all, the relationship between the current coding unit and its adjacent units is used to adjust the maximum depth of each largest coding unit (LCU) and determine the SKIP mode reasonably. Then, according to the saliency information of each frame, the texture and its corresponding depth map are divided into three regions, that is, a salient area, a middle area and a non-salient area. Afterwards, different quantization parameters are assigned to the different regions to conduct low complexity coding. Finally, the compressed video generates new view point videos through the renderer tool. As shown in our experiments, the proposed method saves more bit rate than other approaches and achieves up to 38% encoding time reduction without subjective quality loss in compression or rendering.
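The region-dependent quantization step can be sketched as follows; the saliency thresholds, QP offset, and per-LCU granularity are illustrative assumptions rather than the values used in the paper.

```python
import numpy as np

def qp_map_from_saliency(saliency, base_qp=32, delta=4, low=0.33, high=0.66):
    """Assign one QP per largest coding unit (LCU) from a normalized saliency map.
    Salient LCUs get a lower QP (finer quantization), non-salient LCUs a higher QP,
    and the middle region keeps the base QP."""
    qp = np.full(saliency.shape, base_qp, dtype=int)
    qp[saliency >= high] = base_qp - delta   # salient area: spend more bits
    qp[saliency < low] = base_qp + delta     # non-salient area: coarser coding
    return qp

# One saliency value per LCU of a small frame (values normalized to [0, 1]).
saliency = np.array([[0.9, 0.5, 0.1],
                     [0.7, 0.4, 0.2]])
print(qp_map_from_saliency(saliency))
```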
Detection of illegal transfer of videos over the Internet
NASA Astrophysics Data System (ADS)
Chaisorn, Lekha; Sainui, Janya; Manders, Corey
2010-07-01
In this paper, a method for detecting infringements or modifications of a video in real-time is proposed. The method first segments a video stream into shots, after which it extracts some reference frames as keyframes. This process is performed employing a Singular Value Decomposition (SVD) technique developed in this work. Next, for each input video (represented by its keyframes), an ordinal-based signature and SIFT (Scale Invariant Feature Transform) descriptors are generated. The ordinal-based method employs a two-level bitmap indexing scheme to construct the index for each video signature. The first level clusters all input keyframes into k clusters while the second level converts the ordinal-based signatures into bitmap vectors. On the other hand, the SIFT-based method directly uses the descriptors as the index. Given a suspect video (being streamed or transferred on the Internet), we generate its signature (ordinal and SIFT descriptors) and then compute the similarity between its signature and those in the database, based on the ordinal signature and the SIFT descriptors separately. For the similarity measure, besides the Euclidean distance, Boolean operators are also utilized during the matching process. We have tested our system by performing several experiments on 50 videos (each about half an hour in duration) obtained from the TRECVID 2006 data set. For the experimental setup, we refer to the conditions provided by the TRECVID 2009 "content-based copy detection" task. In addition, we also refer to the requirements issued in the call for proposals by the MPEG standard on a similar task. Initial results show that our framework is effective and robust. Compared to our previous work, on top of the reductions in storage space and processing time achieved in the ordinal-based method, introducing the SIFT features allows an overall accuracy in F1 measure of about 96% (an improvement of about 8%).
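A minimal sketch of the ordinal-based signature idea, assuming a 3×3 block grid and an L1 rank distance; the bitmap index and the SIFT branch of the described system are not reproduced here.

```python
import numpy as np

def ordinal_signature(frame, grid=(3, 3)):
    """Ordinal-based keyframe signature: rank of the mean intensity of each block."""
    h, w = frame.shape
    gh, gw = grid
    means = np.array([
        frame[i * h // gh:(i + 1) * h // gh,
              j * w // gw:(j + 1) * w // gw].mean()
        for i in range(gh) for j in range(gw)
    ])
    return np.argsort(np.argsort(means))  # rank of each block mean

def signature_distance(sig_a, sig_b):
    """L1 distance between two rank vectors; small values indicate near-copies."""
    return int(np.abs(sig_a - sig_b).sum())

original = np.random.default_rng(0).integers(0, 256, (120, 160)).astype(float)
brightened = np.clip(original + 30, 0, 255)  # a global brightness change largely preserves ranks
print(signature_distance(ordinal_signature(original), ordinal_signature(brightened)))
```

Rank-based signatures are attractive for copy detection precisely because many common modifications (brightness or contrast changes, mild recompression) leave the block ordering almost untouched.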
Patient-Physician Communication About Code Status Preferences: A Randomized Controlled Trial
Rhondali, Wadih; Perez-Cruz, Pedro; Hui, David; Chisholm, Gary B.; Dalal, Shalini; Baile, Walter; Chittenden, Eva; Bruera, Eduardo
2013-01-01
Purpose: Code status discussions are important in cancer care. The best modality for such discussions has not been established. Our objective was to determine the impact of a physician ending a code status discussion with a question (autonomy approach) versus a recommendation (beneficence approach) on patients' do-not-resuscitate (DNR) preference. Methods: Patients in a supportive care clinic watched two videos showing a physician-patient discussion regarding code status. Both videos were identical except for the ending: one ended with the physician asking for the patient's code status preference and the other with the physician recommending DNR. Patients were randomly assigned to watch the videos in different sequences. The main outcome was the proportion of patients choosing DNR for the video patient. Results: 78 patients completed the study. 74% chose DNR after the question video, 73% after the recommendation video. The median physician compassion score was very high and did not differ between the two videos. 30/30 patients who had chosen DNR for themselves and 30/48 patients who had not chosen DNR for themselves chose DNR for the video patient (100% vs. 62%). Age (OR = 1.1/year) and white ethnicity (OR = 9.43) predicted DNR choice for the video patient. Conclusion: Ending DNR discussions with a question or a recommendation did not impact DNR choice or perception of physician compassion. Therefore, both approaches are clinically appropriate. All patients who chose DNR for themselves and most patients who did not choose DNR for themselves chose DNR for the video patient. Age and race predicted DNR choice. PMID:23564395
Lee, Chaewoo
2014-01-01
Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance under an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
Di, Yan; Li, Ting; Zhu, Zhihong; Chen, Fen; Jia, Lianqun; Liu, Wenbing; Gai, Xiumei; Wang, Yingying; Pan, Weisan; Yang, Xinggang
2017-01-01
The aim of this study was to simultaneously introduce pH sensitivity and folic acid (FA) targeting into a micelle system to achieve quick drug release and to enhance its accumulation in tumor cells. Paclitaxel-(+)-α-tocopherol (PTX-VE)-loaded mixed micelles (PHIS/FA/PM) fabricated by poly(ethylene glycol) methyl ether-poly(histidine) (MPEG-PHIS) and folic acid-poly(ethylene glycol)-(+)-α-tocopherol (FA-PEG-VE) were characterized by dynamic light scattering and transmission electron microscopy (TEM). The mixed micelles had a spherical morphology with an average diameter of 137.0±6.70 nm and a zeta potential of -48.7±4.25 mV. The drug encapsulation and loading efficiencies were 91.06%±2.45% and 5.28%±0.30%, respectively. The pH sensitivity was confirmed by changes in particle size, critical micelle concentration, and transmittance as a function of pH. MTT assay showed that PHIS/FA/PM had higher cytotoxicity at pH 6.0 than at pH 7.4, and lower cytotoxicity in the presence of free FA. Confocal laser scanning microscope images demonstrated a time-dependent and FA-inhibited cellular uptake. In vivo imaging confirmed that the mixed micelles targeted accumulation at tumor sites and the tumor inhibition rate was 85.97%. The results proved that the mixed micelle system fabricated by MPEG-PHIS and FA-PEG-VE is a promising approach to improve antitumor efficacy.
Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.
Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk
2018-07-01
Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of their severely increasing computational complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JNDs were modeled by adding white Gaussian noise or specific signal patterns to the original images, which was not appropriate for finding JND thresholds due to distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameters for JNQD from extracted handcrafted features. The other JNQD model, called CNN-JNQD, is based on a convolutional neural network (CNN). To the best of our knowledge, our paper is the first approach to automatically adjust JND levels according to quantization step sizes when preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.
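The following sketch shows the flavour of an LR-JNQD-style preprocessing step: a linear model maps simple block features plus the quantization step size to a JND threshold, below which detail is suppressed before encoding. The features, weights, and thresholding rule are placeholders, not the learned model of the paper.

```python
import numpy as np

def lr_jnqd_threshold(features, qstep, weights, bias):
    """Linear-regression-style JNQD: block features plus the quantization step size
    are mapped to a just-noticeable-distortion level. The weights would normally be
    learned by regression; here they are dummies for illustration."""
    x = np.append(features, qstep)
    return float(np.dot(weights, x) + bias)

def preprocess_block(block, threshold):
    """Suppress perceptually invisible detail before encoding: zero out deviations
    from the block mean that fall below the JND threshold."""
    mean = block.mean()
    residual = block - mean
    residual[np.abs(residual) < threshold] = 0.0
    return mean + residual

block = np.random.default_rng(1).normal(128, 10, (8, 8))
features = np.array([block.std(), np.abs(np.diff(block, axis=1)).mean()])  # contrast, edge activity
thr = lr_jnqd_threshold(features, qstep=16.0, weights=np.array([0.2, 0.3, 0.1]), bias=1.0)
smoothed = preprocess_block(block, thr)
print(round(thr, 2), float(np.abs(block - smoothed).max()))
```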
Merino, Aimee M; Greiner, Ryan; Hartwig, Kristopher
2017-09-01
Patient preferences regarding cardiopulmonary resuscitation (CPR) are important, especially during hospitalization when a patient's health is changing. Yet many patients are not adequately informed or involved in the decision-making process. We examined the effect of an informational video about CPR on hospitalized patients' code status choices. This was a prospective, randomized trial conducted at the Minneapolis Veterans Affairs Health Care System in Minnesota. We enrolled 119 patients, hospitalized on the general medicine service, and at least 65 years old. The majority were men (97%) with a mean age of 75. A video described the code status choices: full code (CPR and intubation if required), do not resuscitate (DNR), and do not resuscitate/do not intubate (DNR/DNI). Participants were randomized to watch the video (n = 59) or usual care (n = 60). The primary outcome was participants' code status preferences. Secondary outcomes included a questionnaire designed to evaluate participants' trust in their healthcare team and knowledge and perceptions about CPR. Participants who viewed the video were less likely to choose full code (37%) compared to participants in the usual care group (71%) and more likely to choose DNR/DNI (56% in the video group vs. 17% in the control group) (P < 0.00001). We did not see a difference in trust in their healthcare team or knowledge and perceptions about CPR as assessed by our questionnaire. Hospitalized patients who watched a video about CPR and code status choices were less likely to choose full code and more likely to choose DNR/DNI. © 2017 Society of Hospital Medicine
Toward enhancing the distributed video coder under a multiview video codec framework
NASA Astrophysics Data System (ADS)
Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua
2016-11-01
The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance side information (SI) and improve the MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high quality SI frame for better DVC reconstructed images; (2) the block transform coefficient properties, i.e., DCs and ACs, were exploited to design a priority rate control for the turbo code, such that the DVC decoding can be carried out with the fewest parity bits. In comparison, the proposed COMPETE method demonstrated lower time complexity while presenting better reconstructed video quality. Simulations show that the proposed COMPETE can reduce the time complexity of MVME by a factor of 1.29 to 2.56 compared to previous hybrid MVME methods, while the peak signal-to-noise ratios (PSNRs) of the decoded video can be improved by 0.2 to 3.5 dB compared to H.264/AVC intracoding.
Design of an injectable synthetic and biodegradable surgical biomaterial
Zawaneh, Peter N.; Singh, Sunil P.; Padera, Robert F.; Henderson, Peter W.; Spector, Jason A.; Putnam, David
2010-01-01
We report the design of an injectable synthetic and biodegradable polymeric biomaterial comprised of polyethylene glycol and a polycarbonate of dihydroxyacetone (MPEG-pDHA). MPEG-pDHA is a thixotropic physically cross-linked hydrogel, displays rapid chain relaxation, is easily extruded through narrow-gauge needles, biodegrades into inert products, and is well tolerated by soft tissues. We demonstrate the clinical utility of MPEG-pDHA in the prevention of seroma, a common postoperative complication following ablative and reconstructive surgeries, in an animal model of radical breast mastectomy. This polymer holds significant promise for clinical applicability in a host of surgical procedures ranging from cosmetic surgery to cancer resection. PMID:20534478
Film grain noise modeling in advanced video coding
NASA Astrophysics Data System (ADS)
Oh, Byung Tae; Kuo, C.-C. Jay; Sun, Shijun; Lei, Shawmin
2007-01-01
A new technique for film grain noise extraction, modeling and synthesis is proposed and applied to the coding of high definition video in this work. The film grain noise is viewed as a part of artistic presentation by people in the movie industry. On one hand, since the film grain noise can boost the natural appearance of pictures in high definition video, it should be preserved in high-fidelity video processing systems. On the other hand, video coding with film grain noise is expensive. It is desirable to extract film grain noise from the input video as a pre-processing step at the encoder and re-synthesize the film grain noise and add it back to the decoded video as a post-processing step at the decoder. Under this framework, the coding gain of the denoised video is higher while the quality of the final reconstructed video can still be well preserved. Following this idea, we present a method to remove film grain noise from image/video without distorting its original content. Besides, we describe a parametric model containing a small set of parameters to represent the extracted film grain noise. The proposed model generates the film grain noise that is close to the real one in terms of power spectral density and cross-channel spectral correlation. Experimental results are shown to demonstrate the efficiency of the proposed scheme.
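A toy sketch of the synthesis side of such a scheme, assuming the grain is modeled as low-pass-filtered white Gaussian noise added back to the decoded (denoised) frame; the filter, strength, and intensity independence are simplifications of the parametric model described above.

```python
import numpy as np

def synthesize_film_grain(shape, strength=4.0, smoothing=1, seed=0):
    """Shape white Gaussian noise with a small separable box filter so that its
    power spectrum is low-pass, as real film grain tends to be. The filter size
    and strength stand in for the transmitted model parameters."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)
    if smoothing > 0:
        kernel = np.ones(2 * smoothing + 1) / (2 * smoothing + 1)
        noise = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, noise)
        noise = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, noise)
    return strength * noise

decoded = np.full((64, 64), 120.0)          # decoded, denoised frame (flat for the example)
regrained = decoded + synthesize_film_grain(decoded.shape)
print(float(regrained.std()))
```

The point of the post-processing step is visible even in this toy: the encoder never spends bits on the noise itself, only on the few parameters needed to regenerate something statistically similar.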
NASA Astrophysics Data System (ADS)
Mohd Sabri, Siti Noorzidah bt; Abu, Norhidayah; Mastor, Azreena; Hisham, Siti Farhana; Noorsal, Kartini
2012-07-01
Star polymers have unique characteristics due to their well-defined size and tailorability, which make these polymers attractive candidates as carriers in drug delivery system applications. This work focuses on attaching a drug to the star polymer (polyamidoamine). The conjugation of polyamidoamine (PAMAM, generation 4) with methotrexate (MTX) (model drug) was studied, in which monomethyl polyethylene glycol (MPEG) was used as a linker to reduce the toxicity of the dendrimer. Conjugation starts with attaching the drug to the linker and is followed by further conjugation with the polyamidoamine (PAMAM) dendrimer. The conjugation of PAMAM-PEG-MTX was confirmed through UV-Vis, FTIR, 1H NMR and DSC. The loading capacities and release profile of this conjugate were determined using 1H NMR and a UV spectrometer.
The Simple Video Coder: A free tool for efficiently coding social video data.
Barto, Daniel; Bird, Clark W; Hamilton, Derek A; Fink, Brandi C
2017-08-01
Videotaping of experimental sessions is a common practice across many disciplines of psychology, ranging from clinical therapy, to developmental science, to animal research. Audio-visual data are a rich source of information that can be easily recorded; however, analysis of the recordings presents a major obstacle to project completion. Coding behavior is time-consuming and often requires ad-hoc training of a student coder. In addition, existing software is either prohibitively expensive or cumbersome, which leaves researchers with inadequate tools to quickly process video data. We offer the Simple Video Coder: free, open-source software for behavior coding that is flexible in accommodating different experimental designs, is intuitive for students to use, and produces outcome measures of event timing, frequency, and duration. Finally, the software also offers extraction tools to splice video into coded segments suitable for training future human coders or for use as input for pattern classification algorithms.
Yao, Yuan; Shen, Heyun; Zhang, Guanghui; Yang, Jing; Jin, Xu
2014-10-01
We introduced thermo-sensitive poly(N-isopropylacrylamide) (PNIPAM) into the polymer structure of poly(ethylene glycol)-block-poly(phenylboronate ester) acrylate (MPEG-block-PPBDEMA) by block and random polymerization pathways in order to investigate the effect of polymer architecture on the glucose-responsiveness and enhance their insulin release controllability. In the triblock polymer MPEG-block-PNIPAM-block-PPBDEMA, the continuous PNIPAM shell collapsing onto the glucose-responsive PPBDEMA core formed polymeric micelles with a core-shell-corona structure, whereas MPEG-block-(PNIPAM-rand-PPBDEMA) exhibited core-corona micelles in which the hydrophobic core consisted of PNIPAM and PPBDEMA segments when the environmental temperature was increased above the lower critical solution temperature (LCST) of PNIPAM. The micellar morphologies can be precisely controlled by temperature change between 15 and 37°C. As a result, the introduction of PNIPAM greatly enhanced the overall stability of insulin encapsulated in the polymeric micelles in the absence of glucose over 80 h of incubation at 37°C. Compared to MPEG-block-PNIPAM-block-PPBDEMA, the nanocarriers from MPEG-block-(PNIPAM-rand-PPBDEMA) showed favorable insulin release behavior: no insulin release without glucose and low release at the normal blood glucose concentration (1.0 mg/mL). Therefore, these nanocarriers may serve as a promising self-regulated insulin delivery system for diabetes treatment. Copyright © 2014 Elsevier Inc. All rights reserved.
Variable disparity-motion estimation based fast three-view video coding
NASA Astrophysics Data System (ADS)
Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo
2009-02-01
In this paper, variable disparity-motion estimation (VDME) based three-view video coding is proposed. In the encoding, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are processed for effectively fast three-view video encoding. These proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of the accuracy of disparity estimation and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm's PSNRs are 37.66 and 40.55 dB, and the processing times are 0.139 and 0.124 sec/frame, respectively.
NASA Astrophysics Data System (ADS)
Roizard, D.; Kiryukhina, Y.; Masalev, A.; Khotimskiy, V.; Teplyakov, V.; Barth, D.
2013-03-01
There are several challenging separation problems in industry which can be solved with the help of membrane technologies. This is the case, for instance, for the purification of gaseous energy carriers (i.e., H2, CH4) from CO2 as well as for CO2 recovery from flue gas. Glassy polymers containing trimethylsilyl residues, like poly(1-trimethylsilyl-1-propyne) [PTMSP] and polyvinyltrimethylsilane [PVTMS], are known to exhibit good membrane properties for gas separation. This paper reports two ways of improving their performance based on the controlled introduction of selective groups, alkyl imidazolium salts (C4I) and polyethyleneglycol (M-PEG), able to enhance CO2 selectivity. CO2 isotherm sorption data and permeability measurements have shown that the membrane performance could be significantly improved when C4I and M-PEG were introduced as residues covalently bound to the main polymer chain. Moreover, the introduced bromine reactive centres could also be used to induce chemical crosslinking, giving rise to membranes more resistant and stable to organic vapours. With the C4I groups, the CO2 sorption could be enhanced by a factor of 4.4.
Code inspection instructional validation
NASA Technical Reports Server (NTRS)
Orr, Kay; Stancil, Shirley
1992-01-01
The Shuttle Data Systems Branch (SDSB) of the Flight Data Systems Division (FDSD) at Johnson Space Center contracted with Southwest Research Institute (SwRI) to validate the effectiveness of an interactive video course on the code inspection process. The purpose of this project was to determine if this course could be effective for teaching NASA analysts the process of code inspection. In addition, NASA was interested in the effectiveness of this unique type of instruction (Digital Video Interactive), for providing training on software processes. This study found the Carnegie Mellon course, 'A Cure for the Common Code', effective for teaching the process of code inspection. In addition, analysts prefer learning with this method of instruction, or this method in combination with other methods. As is, the course is definitely better than no course at all; however, findings indicate changes are needed. Following are conclusions of this study. (1) The course is instructionally effective. (2) The simulation has a positive effect on student's confidence in his ability to apply new knowledge. (3) Analysts like the course and prefer this method of training, or this method in combination with current methods of training in code inspection, over the way training is currently being conducted. (4) Analysts responded favorably to information presented through scenarios incorporating full motion video. (5) Some course content needs to be changed. (6) Some content needs to be added to the course. SwRI believes this study indicates interactive video instruction combined with simulation is effective for teaching software processes. Based on the conclusions of this study, SwRI has outlined seven options for NASA to consider. SwRI recommends the option which involves creation of new source code and data files, but uses much of the existing content and design from the current course. Although this option involves a significant software development effort, SwRI believes this option will produce the most effective results.
Joint source-channel coding for motion-compensated DCT-based SNR scalable video.
Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K
2002-01-01
In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
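A hedged sketch of the allocation idea: exhaustively search a small set of per-layer source-rate/channel-code-rate pairs and keep the combination that minimizes a toy expected-distortion model under a total rate budget. The distortion model, candidate rates, and sensitivity values are invented for illustration and stand in for the experimentally measured rate-distortion characteristics used in the paper.

```python
from itertools import product

def expected_distortion(src_rate, code_rate, loss_sensitivity):
    """Toy model: finer source coding lowers distortion; weaker channel protection
    (a higher code rate) raises loss-induced distortion for sensitive layers."""
    source_term = 1000.0 / src_rate
    channel_term = loss_sensitivity * code_rate
    return source_term + channel_term

def allocate(total_budget, layer_sensitivities, options):
    """Pick one (source_rate_kbps, channel_code_rate) pair per layer to minimize
    total distortion while the transmitted rate stays within the budget."""
    best = None
    for combo in product(options, repeat=len(layer_sensitivities)):
        transmitted = sum(src / code for src, code in combo)   # channel coding adds overhead
        if transmitted > total_budget:
            continue
        dist = sum(expected_distortion(src, code, sens)
                   for (src, code), sens in zip(combo, layer_sensitivities))
        if best is None or dist < best[0]:
            best = (dist, combo)
    return best

layer_sensitivities = [3.0, 1.0]          # the base layer is more loss-sensitive than the enhancement layer
options = [(64, 0.5), (64, 0.75), (96, 0.5), (96, 0.75)]
print(allocate(300, layer_sensitivities, options))
```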
Mode-dependent templates and scan order for H.264/AVC-based intra lossless coding.
Gu, Zhouye; Lin, Weisi; Lee, Bu-Sung; Lau, Chiew Tong; Sun, Ming-Ting
2012-09-01
In H.264/advanced video coding (AVC), lossless coding and lossy coding share the same entropy coding module. However, the entropy coders in the H.264/AVC standard were originally designed for lossy video coding and do not yield adequate performance for lossless video coding. In this paper, we analyze the problem with the current lossless coding scheme and propose a mode-dependent template (MD-template) based method for intra lossless coding. By exploiting the statistical redundancy of the prediction residual in the H.264/AVC intra prediction modes, more zero coefficients are generated. By designing a new scan order for each MD-template, the scanned coefficient sequence fits the H.264/AVC entropy coders better. A fast implementation algorithm is also designed. With little increase in computation, experimental results confirm that the proposed fast algorithm achieves about 7.2% bit saving compared with the current H.264/AVC fidelity range extensions high profile.
Low-delay predictive audio coding for the HIVITS HDTV codec
NASA Astrophysics Data System (ADS)
McParland, A. K.; Gilchrist, N. H. C.
1995-01-01
The status of work relating to predictive audio coding, as part of the European project on High Quality Video Telephone and HD(TV) Systems (HIVITS), is reported. The predictive coding algorithm is developed, along with six-channel audio coding and decoding hardware. Demonstrations of the audio codec operating in conjunction with the video codec, are given.
NASA Astrophysics Data System (ADS)
Tsang, Sik-Ho; Chan, Yui-Lam; Siu, Wan-Chi
2017-01-01
Weighted prediction (WP) is an efficient video coding tool introduced with the H.264/AVC video coding standard to compensate for temporal illumination changes in motion estimation and compensation. WP parameters, comprising a multiplicative weight and an additive offset for each reference frame, must be estimated and transmitted to the decoder in the slice header. These parameters add extra bits to the coded video bitstream. High efficiency video coding (HEVC) provides WP parameter prediction to reduce this overhead. Therefore, WP parameter prediction is crucial to research work and applications related to WP. Prior work has suggested further improving WP parameter prediction through implicit prediction of image characteristics and derivation of parameters. By exploiting both temporal and interlayer redundancies, we propose three WP parameter prediction algorithms, enhanced implicit WP parameter, enhanced direct WP parameter derivation, and interlayer WP parameter, to further improve the coding efficiency of HEVC. Results show that our proposed algorithms can achieve up to 5.83% and 5.23% bitrate reduction compared to conventional scalable HEVC in the base layer for SNR scalability and 2× spatial scalability, respectively.
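The core WP operation can be sketched as below: each reference sample is scaled by a fixed-point weight and shifted by an offset before being used as the prediction. The fixed-point layout and parameter values are simplified assumptions, not the normative H.264/AVC or HEVC formula.

```python
import numpy as np

def weighted_prediction(ref_block, weight, offset, log2_denom=6):
    """Illumination-compensated prediction: pred = ((weight * ref + round) >> log2_denom) + offset,
    computed in integer arithmetic and clipped to the 8-bit sample range."""
    rounding = 1 << (log2_denom - 1)
    pred = (weight * ref_block.astype(np.int32) + rounding) >> log2_denom
    return np.clip(pred + offset, 0, 255).astype(np.uint8)

ref = np.full((4, 4), 100, dtype=np.uint8)
# A fade-to-bright: scale by roughly 1.25 (80/64) and add a small offset.
print(weighted_prediction(ref, weight=80, offset=3))
```

The (weight, offset) pair per reference frame is exactly the side information whose signalling cost the prediction schemes in this paper try to reduce.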
Video transmission on ATM networks. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chen, Yun-Chung
1993-01-01
The broadband integrated services digital network (B-ISDN) is expected to provide high-speed and flexible multimedia applications. Multimedia includes data, graphics, image, voice, and video. Asynchronous transfer mode (ATM) is the adopted transport technique for B-ISDN and has the potential for providing a more efficient and integrated environment for multimedia. It is believed that most broadband applications will make heavy use of visual information. The prospect of widespread use of image and video communication has led to interest in coding algorithms for reducing bandwidth requirements and improving image quality. The major results of a study on bridging network transmission performance and video coding are as follows. Using two representative video sequences, several video source models are developed. The fitness of these models is validated through the use of statistical tests and network queuing performance. A dual leaky bucket algorithm is proposed as an effective network policing function. The concept of the dual leaky bucket algorithm can be applied to a prioritized coding approach to achieve transmission efficiency. A mapping of the performance/control parameters at the network level into equivalent parameters at the video coding level is developed. Based on that, a complete set of principles for the design of video codecs for network transmission is proposed.
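A small sketch of the dual-leaky-bucket policing idea mentioned above, assuming one bucket polices the peak rate and a second, deeper bucket polices the sustained rate; all rates, depths, and packet sizes are illustrative.

```python
class LeakyBucket:
    """A packet conforms if the bucket (filled per packet, drained at a fixed rate)
    does not overflow. Two buckets with different drain rates and depths together
    approximate dual leaky bucket policing."""
    def __init__(self, drain_rate, depth):
        self.drain_rate = drain_rate  # units drained per time unit
        self.depth = depth            # largest burst the bucket tolerates
        self.level = 0.0
        self.last_time = 0.0

    def conforms(self, arrival_time, size):
        self.level = max(0.0, self.level - self.drain_rate * (arrival_time - self.last_time))
        self.last_time = arrival_time
        if self.level + size <= self.depth:
            self.level += size
            return True
        return False   # non-conforming packet (would be tagged or dropped)

peak_bucket = LeakyBucket(drain_rate=10.0, depth=1.5)   # polices the peak rate
mean_bucket = LeakyBucket(drain_rate=3.0, depth=20.0)   # polices the sustained rate and burst size
for t, size in [(0.0, 1.0), (0.05, 1.0), (0.10, 1.0), (1.0, 1.0)]:
    ok_peak = peak_bucket.conforms(t, size)
    ok_mean = mean_bucket.conforms(t, size)
    print(t, ok_peak and ok_mean)   # the third, too-closely-spaced packet violates the peak bucket
```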
Study and simulation of low rate video coding schemes
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Chen, Yun-Chung; Kipp, G.
1992-01-01
The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color mapped images, robust coding scheme for packet video, recursively indexed differential pulse code modulation, image compression technique for use on token ring networks, and joint source/channel coder design.
Innovative methods of knowledge transfer by multimedia library
NASA Astrophysics Data System (ADS)
Goanta, A. M.
2016-08-01
The present situation of teaching and learning in the classroom varies widely depending on the specific topics concerned. Analyzing the manifold ways of teaching and learning at the university level reveals a good combination of classical and modern methods. The first category includes classic chalk-and-blackboard teaching, followed by the equally classical learning based on printed reference material. The second category includes books published as PDF or PPT [1], distributed on CD/DVD media. Since 2006 the author has been concerned with the transfer of information and knowledge through video files such as AVI, FLV or MPEG, using various means of transfer, from free distribution via the Internet to distribution at minimal cost on CD/DVD. Encouraged by the students' interest in this kind of teaching material, as shown by monitoring [2] the site http://www.cursuriuniversitarebraila.ugal.ro, the author has published, with an ISBN, the first video book in Romania, whose nonconformist content locates chapters not by page but by the hour and minute of the recording at which they appear.
Two Droplets on Wire Approaching Ignition
NASA Technical Reports Server (NTRS)
2003-01-01
The Fiber-Supported Droplet Combustion (FSDC) uses two droplets positioned on the fiber wire, instead of the usual one. Two droplets more closely simulate the environment in engines, which ignite many fuel droplets at once. The behavior of the burning was also unexpected -- the droplets moved together after ignition, generating quite a bit of data for understanding the interaction of fuel droplets while they burn. This MPEG movie (1.3 MB) shows a time-lapse of this burn (3x speed). Because FSDC is backlit (the bright glow behind the drops), you cannot see the glow of the droplets while they burn -- instead, you see them shrink! The small blobs left on the wire after the burn are the beads used to center the fuel droplet on the wire. This image was taken on STS-94, July 12, 1997, MET:10/19:13 (approximate). FSDC-2 studied fundamental phenomena related to liquid fuel droplet combustion in air. Pure fuels and mixtures of fuels were burned as isolated single and dual droplets with and without forced air convection. The FSDC guest investigator was Forman Williams, University of California, San Diego. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17 1997). Advanced combustion experiments will be a part of investigations planned for the International Space Station. (1.3MB, 12-second MPEG, screen 320 x 240 pixels; downlinked video, higher quality not available) A still JPG composite of this movie is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300178.html.
Zheng, Jiani; Xie, Hongguo; Yu, Weiting; Tan, Mingqian; Gong, Faquan; Liu, Xiudong; Wang, Feng; Lv, Guojun; Liu, Wanfa; Zheng, Guoshuang; Yang, Yan; Xie, Weiyang; Ma, Xiaojun
2012-09-18
Alginate/chitosan/alginate (ACA) hydrogel microcapsules were modified with methoxy poly(ethylene glycol) (MPEG) to improve protein repellency and biocompatibility. Increased MPEG surface graft density (n(S)) on hydrogel microcapsules was achieved by controlling the grafting parameters including the buffer layer substrate, membrane thickness, and grafting method. X-ray photoelectron spectroscopy (XPS) model was employed to quantitatively analyze n(S) on this three-dimensional (3D) hydrogel network structure. Our results indicated that neutralizing with alginate, increasing membrane thickness, and in situ covalent grafting could increase n(S) effectively. ACAC(PEG) was more promising than ACC(PEG) in protein repellency because alginate supplied more -COO(-) negative binding sites and prevented MPEG from diffusing. The n(S) increased with membrane thickness, showing better protein repellency. Moreover, the in situ covalent grafting provided an effective way to enhance n(S), and 1.00 ± 0.03 chains/nm(2) was achieved, exhibiting almost complete immunity to protein adsorption. This antifouling hydrogel biomaterial is expected to be useful in transplantation in vivo.
Digital television system design study
NASA Technical Reports Server (NTRS)
Huth, G. K.
1976-01-01
The use of digital techniques for transmission of pictorial data is discussed for multi-frame images (television). Video signals are processed in a manner which includes quantization and coding such that they are separable from the noise introduced into the channel. The performance of digital television systems is determined by the nature of the processing techniques (i.e., whether the video signal itself or, instead, something related to the video signal is quantized and coded) and to the quantization and coding schemes employed.
Spatial Pyramid Covariance based Compact Video Code for Robust Face Retrieval in TV-series.
Li, Yan; Wang, Ruiping; Cui, Zhen; Shan, Shiguang; Chen, Xilin
2016-10-10
We address the problem of face video retrieval in TV-series, which searches video clips based on the presence of a specific character, given one of his/her face tracks. This is tremendously challenging because, on one hand, faces in TV-series are captured in largely uncontrolled conditions with complex appearance variations, and on the other hand, the retrieval task typically needs an efficient representation with low time and space complexity. To handle this problem, we propose a compact and discriminative representation for the huge body of video data, named Compact Video Code (CVC). Our method first models the face track by its sample (i.e., frame) covariance matrix to capture the video data variations in a statistical manner. To incorporate discriminative information and obtain a more compact video signature suitable for retrieval, the high-dimensional covariance representation is further encoded as a much lower-dimensional binary vector, which finally yields the proposed CVC. Specifically, each bit of the code, i.e., each dimension of the binary vector, is produced via supervised learning in a max margin framework, which aims to strike a balance between the discriminability and stability of the code. In addition, we further extend the descriptive granularity of the covariance matrix from the traditional pixel level to the more general patch level, and proceed to propose a novel hierarchical video representation named Spatial Pyramid Covariance (SPC) along with a fast calculation method. Face retrieval experiments on two challenging TV-series video databases, i.e., the Big Bang Theory and Prison Break, demonstrate the competitiveness of the proposed CVC over state-of-the-art retrieval methods. In addition, as a general video matching algorithm, CVC is also evaluated in a traditional video face recognition task on a standard Internet database, i.e., YouTube Celebrities, showing quite promising performance with an extremely compact code of only 128 bits.
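A minimal sketch of the covariance-plus-binary-code idea, assuming random projections in place of the max-margin learned bits and raw feature vectors in place of the face descriptors used in the paper.

```python
import numpy as np

def track_covariance(frame_features):
    """Model a face track by the sample covariance matrix of its per-frame features."""
    x = np.asarray(frame_features)              # shape: (num_frames, feature_dim)
    return np.cov(x, rowvar=False)

def binary_video_code(cov, projections):
    """Encode the vectorized covariance with hyperplanes into a binary code.
    Random projections stand in for the supervised, max-margin learned bits."""
    v = cov[np.triu_indices_from(cov)]          # the upper triangle suffices (symmetry)
    return (projections @ v > 0).astype(np.uint8)

rng = np.random.default_rng(0)
track_a = rng.normal(size=(40, 16))                              # 40 frames, 16-D features each
track_b = track_a + rng.normal(scale=0.05, size=(40, 16))        # near-duplicate track
dim = 16 * 17 // 2
proj = rng.normal(size=(128, dim))                               # a 128-bit code
code_a = binary_video_code(track_covariance(track_a), proj)
code_b = binary_video_code(track_covariance(track_b), proj)
print(int(np.sum(code_a != code_b)), "bits differ (Hamming distance)")
```

Matching then reduces to Hamming distance between short binary codes, which is what makes the representation attractive for large-scale retrieval.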
Music Identification System Using MPEG-7 Audio Signature Descriptors
You, Shingchern D.; Chen, Wei-Hwa; Chen, Woei-Kae
2013-01-01
This paper describes a multiresolution system based on MPEG-7 audio signature descriptors for music identification. Such an identification system may be used to detect illegally copied music circulated over the Internet. In the proposed system, low-resolution descriptors are used to search likely candidates, and then full-resolution descriptors are used to identify the unknown (query) audio. With this arrangement, the proposed system achieves both high speed and high accuracy. To deal with the problem that a piece of query audio may not be inside the system's database, we suggest two different methods to find the decision threshold. Simulation results show that the proposed method II can achieve an accuracy of 99.4% for query inputs both inside and outside the database. Overall, it is highly possible to use the proposed system for copyright control. PMID:23533359
From 16-bit to high-accuracy IDCT approximation: fruits of single architecture affiliation
NASA Astrophysics Data System (ADS)
Liu, Lijie; Tran, Trac D.; Topiwala, Pankaj
2007-09-01
In this paper, we demonstrate an effective unified framework for high-accuracy approximation of the irrational-coefficient floating-point IDCT by a single integer-coefficient fixed-point architecture. Our framework is based on a modified version of Loeffler's sparse DCT factorization, and the IDCT architecture is constructed via a cascade of dyadic lifting steps and butterflies. We illustrate that simply varying the accuracy of the approximating parameters yields a large family of standard-compliant IDCTs, from rare 16-bit approximations catering to portable computing to ultra-high-accuracy 32-bit versions that virtually eliminate any drifting effect when paired with the 64-bit floating-point IDCT at the encoder. Drifting performance of the proposed IDCTs along with existing popular IDCT algorithms in H.263+, MPEG-2 and MPEG-4 is also demonstrated.
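The dyadic lifting structure mentioned above can be sketched as follows: a plane rotation is decomposed into three lifting steps whose multipliers are dyadic fractions, so the fixed-point transform remains exactly invertible no matter how coarse the approximation is. The particular numerators and shift are illustrative, not the coefficients of the proposed architecture.

```python
def dyadic(value, num, shift):
    """Multiply by a dyadic rational num / 2**shift using only integer operations."""
    return (value * num) >> shift

def lifting_rotation(x0, x1, p_num, u_num, shift):
    """Approximate a plane rotation with three lifting steps (predict-update-predict).
    For an exact rotation p = tan(theta/2) and u = sin(theta); here both are replaced
    by dyadic fractions p_num/2**shift and u_num/2**shift."""
    x0 = x0 + dyadic(x1, p_num, shift)
    x1 = x1 - dyadic(x0, u_num, shift)
    x0 = x0 + dyadic(x1, p_num, shift)
    return x0, x1

def inverse_lifting_rotation(y0, y1, p_num, u_num, shift):
    """Invert by running the same steps in reverse order with opposite signs."""
    y0 = y0 - dyadic(y1, p_num, shift)
    y1 = y1 + dyadic(y0, u_num, shift)
    y0 = y0 - dyadic(y1, p_num, shift)
    return y0, y1

# Round trip: lifting steps are exactly invertible despite the coarse coefficients.
a, b = 97, -42
y0, y1 = lifting_rotation(a, b, p_num=27, u_num=45, shift=6)
print((a, b) == inverse_lifting_rotation(y0, y1, p_num=27, u_num=45, shift=6))
```

Because the inverse simply re-applies the same integer steps in reverse, increasing the bit width of the numerators improves accuracy toward the floating-point rotation without ever breaking invertibility, which is the property the family of 16-bit to 32-bit IDCTs exploits.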
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2007-12-01
We describe a multilayered video transport scheme for wireless channels capable of adapting to channel conditions in order to maximize end-to-end quality of service (QoS). This scheme combines a scalable H.263+ video source coder with unequal error protection (UEP) across layers. The UEP is achieved by employing different channel codes together with a multiresolution modulation approach to transport the different priority layers. Adaptivity to channel conditions is provided through a joint source-channel coding (JSCC) approach which attempts to jointly optimize the source and channel coding rates together with the modulation parameters to obtain the maximum achievable end-to-end QoS for the prevailing channel conditions. In this work, we model the wireless links as slow-fading Rician channels where the channel conditions can be described in terms of the channel signal-to-noise ratio (SNR) and the ratio of specular-to-diffuse energy. The multiresolution modulation/coding scheme consists of binary rate-compatible punctured convolutional (RCPC) codes used together with nonuniform phase-shift keyed (PSK) signaling constellations. Results indicate that this adaptive JSCC scheme employing scalable video encoding together with a multiresolution modulation/coding approach leads to significant improvements in delivered video quality for specified channel conditions. In particular, the approach results in considerably improved graceful degradation properties for decreasing channel SNR.
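A toy sketch of the joint selection idea follows: exhaustively try combinations of source rate, channel code rate, and constellation size, and keep the one that maximizes a crude end-to-end quality estimate under a symbol budget. The distortion model, the loss model, and the budget rule are all placeholders, not the paper's optimization.

import itertools

def toy_loss(r_ch, m_bits, snr_db):
    # Placeholder decoding-failure probability: grows with code rate and
    # constellation size, falls with SNR. Not a real channel model.
    return min(1.0, 0.5 * r_ch * m_bits / max(snr_db, 1e-3))

def expected_quality(r_src, r_ch, m_bits, snr_db):
    d_src = 1.0 / (1.0 + r_src)          # placeholder source distortion vs. rate
    p_fail = toy_loss(r_ch, m_bits, snr_db)
    return (1.0 - p_fail) * (1.0 - d_src)

def jscc_select(symbol_budget, src_rates, code_rates, mod_bits, snr_db):
    best, best_q = None, -1.0
    for r_src, r_ch, m in itertools.product(src_rates, code_rates, mod_bits):
        if r_src / (r_ch * m) > symbol_budget:   # channel symbols needed exceed budget
            continue
        q = expected_quality(r_src, r_ch, m, snr_db)
        if q > best_q:
            best, best_q = (r_src, r_ch, m), q
    return best, best_q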
High-definition video display based on the FPGA and THS8200
NASA Astrophysics Data System (ADS)
Qian, Jia; Sui, Xiubao
2014-11-01
This paper presents a high-definition video display solution based on an FPGA and the THS8200. The THS8200 is a video interface chip from Texas Instruments (TI); it has three 10-bit DAC channels, accepts video data in both 4:2:2 and 4:4:4 formats, and can synchronize either through the dedicated synchronization signals HSYNC and VSYNC or through the SAV/EAV codes embedded in the video stream. In this design, the FPGA generates the address and control signals used to access the data-storage array and then produces the corresponding digital YCbCr video signals. These signals, combined with the HSYNC and VSYNC synchronization signals that are also generated by the FPGA, act as the input to the THS8200. To meet the bandwidth requirements of high-definition TV, we adopt video input in the 4:2:2 format over a 2×10-bit interface. The THS8200 is configured by the FPGA over the I2C bus to set its internal registers; as a result, it generates synchronization signals that comply with the relevant SMPTE standard and converts the digital YCbCr video signals into analog YPbPr signals. Hence, the composite analog YPbPr output consists of the image data and the synchronization signal, superimposed inside the THS8200. The experimental results indicate that the method presented in this paper is a viable solution for high-definition video display and conforms to the input requirements of new high-definition display devices.
Real-time data compression of broadcast video signals
NASA Technical Reports Server (NTRS)
Shalkauser, Mary Jo W. (Inventor); Whyte, Wayne A., Jr. (Inventor); Barnes, Scott P. (Inventor)
1991-01-01
A non-adaptive predictor, a nonuniform quantizer, and a multi-level Huffman coder are incorporated into a differential pulse code modulation system for coding and decoding broadcast video signals in real time.
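A minimal DPCM sketch in Python, assuming a fixed previous-sample predictor and a toy nonuniform quantizer table; the multi-level Huffman stage of the invention is omitted, and the reconstruction levels are made up for illustration.

import numpy as np

# Nonuniform reconstruction levels: fine near zero, coarse for large errors (hypothetical).
LEVELS = np.array([-48, -24, -10, -3, 0, 3, 10, 24, 48], dtype=float)

def quantize(e):
    idx = int(np.argmin(np.abs(LEVELS - e)))
    return idx, float(LEVELS[idx])

def dpcm_encode(samples):
    indices, pred = [], 0.0
    for x in samples:
        e = x - pred                  # prediction error (non-adaptive predictor)
        idx, e_hat = quantize(e)      # nonuniform quantization
        indices.append(idx)
        pred += e_hat                 # track the decoder's reconstruction
    return indices

def dpcm_decode(indices):
    out, pred = [], 0.0
    for idx in indices:
        pred += float(LEVELS[idx])
        out.append(pred)
    return out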
NASA Astrophysics Data System (ADS)
Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.
2006-01-01
In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product code Reed Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Also, comparisons with classical scalable coding show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.
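The following sketch illustrates one common way to do such a discrete allocation with a Lagrangian cost, sweeping the multiplier and keeping the best allocation that fits the budget; the operational rate-distortion points and the per-layer independence assumption are hypothetical, not the paper's measured data or exact procedure.

def lagrangian_select(points, lam):
    # points: iterable of (label, rate, expected_distortion) operational points.
    return min(points, key=lambda p: p[2] + lam * p[1])

def allocate(total_rate, layer_points, lam_grid):
    # Sweep the Lagrange multiplier; keep the lowest-distortion allocation
    # whose total rate respects the budget.
    best = None
    for lam in lam_grid:
        choice = [lagrangian_select(pts, lam) for pts in layer_points]
        rate = sum(c[1] for c in choice)
        dist = sum(c[2] for c in choice)
        if rate <= total_rate and (best is None or dist < best[1]):
            best = (choice, dist, rate)
    return best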
Study and validation of tools interoperability in JPSEC
NASA Astrophysics Data System (ADS)
Conan, V.; Sadourny, Y.; Jean-Marie, K.; Chan, C.; Wee, S.; Apostolopoulos, J.
2005-08-01
Digital imagery is important in many applications today, and the security of digital imagery is important today and is likely to gain in importance in the near future. The emerging international standard ISO/IEC JPEG-2000 Security (JPSEC) is designed to provide security for digital imagery, and in particular digital imagery coded with the JPEG-2000 image coding standard. One of the primary goals of a standard is to ensure interoperability between creators and consumers produced by different manufacturers. The JPSEC standard, similar to the popular JPEG and MPEG family of standards, specifies only the bitstream syntax and the receiver's processing, and not how the bitstream is created or the details of how it is consumed. This paper examines the interoperability for the JPSEC standard, and presents an example JPSEC consumption process which can provide insights in the design of JPSEC consumers. Initial interoperability tests between different groups with independently created implementations of JPSEC creators and consumers have been successful in providing the JPSEC security services of confidentiality (via encryption) and authentication (via message authentication codes, or MACs). Further interoperability work is on-going.
Zero-block mode decision algorithm for H.264/AVC.
Lee, Yu-Ming; Lin, Yinyi
2009-03-01
In a previous paper, we proposed a zero-block intermode decision algorithm for H.264 video coding based upon the number of zero-blocks of 4 x 4 DCT coefficients between the current macroblock and the co-located macroblock. The proposed algorithm achieves a significant reduction in computation, but the gain is limited for high bit-rate coding. To improve computational efficiency, in this paper we suggest an enhanced zero-block decision algorithm, which uses an early zero-block detection method to compute the number of zero-blocks instead of direct DCT and quantization (DCT/Q) calculation, and which incorporates two suitable decision methods for semi-stationary and nonstationary regions of a video sequence. In addition, the zero-block decision algorithm is also applied to intramode prediction in P frames. The enhanced zero-block decision algorithm yields an average reduction of 27% in total encoding time compared to the original zero-block decision algorithm.
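The early-detection idea can be pictured with the sketch below: a 4x4 residual block whose absolute sum falls under a quantization-dependent threshold is declared an all-zero block without running the DCT and quantization. The threshold rule here is a hypothetical stand-in, not the bound derived in the paper.

import numpy as np

def count_zero_blocks(residual, qstep, block=4):
    h, w = residual.shape
    zero_blocks = 0
    for y in range(0, h, block):
        for x in range(0, w, block):
            sad = float(np.abs(residual[y:y + block, x:x + block]).sum())
            if sad < 2.0 * qstep:     # hypothetical early zero-block threshold
                zero_blocks += 1      # counted without any DCT/Q computation
    return zero_blocks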
Volandes, Angelo E; Levin, Tomer T; Slovin, Susan; Carvajal, Richard D; O'Reilly, Eileen M; Keohan, Mary Louise; Theodoulou, Maria; Dickler, Maura; Gerecitano, John F; Morris, Michael; Epstein, Andrew S; Naka-Blackstone, Anastazia; Walker-Corkery, Elizabeth S; Chang, Yuchiao; Noy, Ariela
2012-09-01
The authors tested whether an educational video on the goals of care in advanced cancer (life-prolonging care, basic care, or comfort care) helped patients understand these goals and had an impact on their preferences for resuscitation. A survey of 80 patients with advanced cancer was conducted before and after they viewed an educational video. The outcomes of interest included changes in goals of care preference and knowledge and consistency of preferences with code status. Before viewing the video, 10 patients (13%) preferred life-prolonging care, 24 patients (30%) preferred basic care, 29 patients (36%) preferred comfort care, and 17 patients (21%) were unsure. Preferences did not change after the video, when 9 patients (11%) chose life-prolonging care, 28 patients (35%) chose basic care, 29 patients (36%) chose comfort care, and, 14 patients (18%) were unsure (P = .28). Compared with baseline, after the video presentation, more patients did not want cardiopulmonary resuscitation (CPR) (71% vs 62%; P = .03) or ventilation (80% vs 67%; P = .008). Knowledge about goals of care and likelihood of resuscitation increased after the video (P < .001). Of the patients who did not want CPR or ventilation after the video augmentation, only 4 patients (5%) had a documented do-not-resuscitate order in their medical record (kappa statistic, -0.01; 95% confidence interval, -0.06 to 0.04). Acceptability of the video was high. Patients with advanced cancer did not change care preferences after viewing the video, but fewer wanted CPR or ventilation. Documented code status was inconsistent with patient preferences. Patients were more knowledgeable after the video, reported that the video was acceptable, and said they would recommend it to others. The current results indicated that this type of video may enable patients to visualize "goals of care," enriching patient understanding of worsening health states and better informing decision making. Copyright © 2012 American Cancer Society.
Picture data compression coder using subband/transform coding with a Lempel-Ziv-based coder
NASA Technical Reports Server (NTRS)
Glover, Daniel R. (Inventor)
1995-01-01
Digital data coders/decoders are used extensively in video transmission. A digitally encoded video signal is separated into subbands. Separating the video into subbands allows transmission at low data rates. Once the data is separated into these subbands it can be coded and then decoded by statistical coders such as the Lempel-Ziv based coder.
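A compact sketch of the idea, assuming a single Haar-style subband split with zlib standing in for the Lempel-Ziv based coder; the actual invention's filter bank and coder details are not reproduced here.

import numpy as np, zlib

def haar_subbands(img):
    x = img.astype(np.int32)
    lo = x[:, 0::2] + x[:, 1::2]                 # horizontal low-pass
    hi = x[:, 0::2] - x[:, 1::2]                 # horizontal high-pass
    ll, lh = lo[0::2] + lo[1::2], lo[0::2] - lo[1::2]
    hl, hh = hi[0::2] + hi[1::2], hi[0::2] - hi[1::2]
    return ll, lh, hl, hh

def code_subbands(img):
    # Each subband is serialized and compressed with a Lempel-Ziv style coder.
    return [zlib.compress(b.astype(np.int16).tobytes()) for b in haar_subbands(img)]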
Experimental design and analysis of JND test on coded image/video
NASA Astrophysics Data System (ADS)
Lin, Joe Yuchieh; Jin, Lina; Hu, Sudeng; Katsavounidis, Ioannis; Li, Zhi; Aaron, Anne; Kuo, C.-C. Jay
2015-09-01
The visual Just-Noticeable-Difference (JND) metric is characterized by the minimum detectable difference between two visual stimuli. Conducting a subjective JND test is a labor-intensive task. In this work, we present a novel interactive method for performing the visual JND test on compressed image/video. JND has been used to enhance perceptual visual quality in the context of image/video compression. Given a set of coding parameters, a JND test is designed to determine the distinguishable quality level against a reference image/video, which is called the anchor. The JND metric can be used to save coding bitrate by exploiting the characteristics of the human visual system. The proposed JND test is conducted using a binary forced choice, which is often adopted to discriminate perceptual differences in psychophysical experiments. The assessors are asked to compare coded image/video pairs and determine whether they are of the same quality or not. A bisection procedure is designed to find the JND locations so as to reduce the required number of comparisons over a wide range of bitrates. We demonstrate the efficiency of the proposed JND test and report experimental results on image and video JND tests.
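A minimal sketch of the bisection step follows; the forced-choice answer is abstracted as a callback, so this only shows how the number of pairwise comparisons is kept logarithmic in the size of the bitrate ladder, not the paper's full test protocol.

def find_jnd(anchor_idx, same_quality):
    # Levels 0 .. anchor_idx index a bitrate ladder sorted from lowest bitrate
    # up to the anchor; same_quality(anchor_idx, level) is the assessor's
    # forced-choice answer (an assumed callback).
    lo, hi = 0, anchor_idx
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if same_quality(anchor_idx, mid):
            hi = mid        # still indistinguishable: search lower bitrates
        else:
            lo = mid        # noticeably worse: the JND lies above this level
    return hi               # lowest level judged equal in quality to the anchor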
Portrayal of smokeless tobacco in YouTube videos.
Bromberg, Julie E; Augustson, Erik M; Backinger, Cathy L
2012-04-01
Videos of smokeless tobacco (ST) on YouTube are abundant and easily accessible, yet no studies have examined the content of ST videos. This study assesses the overall portrayal, genre, and messages of ST YouTube videos. In August 2010, researchers identified the top 20 search results on YouTube by "relevance" and "view count" for the following search terms: "ST," "chewing tobacco," "snus," and "Skoal." After eliminating videos that were not about ST (n = 26), non-English (n = 14), or duplicate (n = 42), a final sample of 78 unique videos was coded for overall portrayal, genre, and various content measures. Among the 78 unique videos, 15.4% were anti-ST, while 74.4% were pro-ST. Researchers were unable to determine the portrayal of ST in the remaining 10.3% of videos because they involved excessive or "sensationalized" use of the ST, which could be interpreted either positively or negatively, depending on the viewer. The most common ST genre was positive video diaries (or "vlogs"), which made up almost one third of the videos (29.5%), followed by promotional advertisements (20.5%) and anti-ST public service announcements (12.8%). While YouTube is intended for user-generated content, 23.1% of the videos were created by professional organizations. These results demonstrate that ST videos on YouTube are overwhelmingly pro-ST. More research is needed to determine who is viewing these ST YouTube videos and how they may affect people's knowledge, attitudes, and behaviors regarding ST use.
McCannon, Jessica B; O'Donnell, Walter J; Thompson, B Taylor; El-Jawahri, Areej; Chang, Yuchiao; Ananian, Lillian; Bajwa, Ednan K; Currier, Paul F; Parikh, Mihir; Temel, Jennifer S; Cooper, Zara; Wiener, Renda Soylemez; Volandes, Angelo E
2012-12-01
Effective communication between intensive care unit (ICU) providers and families is crucial given the complexity of decisions made regarding goals of therapy. Using video images to supplement medical discussions is an innovative process to standardize and improve communication. In this six-month, quasi-experimental, pre-post intervention study we investigated the impact of a cardiopulmonary resuscitation (CPR) video decision support tool upon knowledge about CPR among surrogate decision makers for critically ill adults. We interviewed surrogate decision makers for patients aged 50 and over, using a structured questionnaire that included a four-question CPR knowledge assessment similar to those used in previous studies. Surrogates in the post-intervention arm viewed a three-minute video decision support tool about CPR before completing the knowledge assessment and completed questions about perceived value of the video. We recruited 23 surrogates during the first three months (pre-intervention arm) and 27 surrogates during the latter three months of the study (post-intervention arm). Surrogates viewing the video had more knowledge about CPR (p=0.008); average scores were 2.0 (SD 1.1) and 2.9 (SD 1.2) (out of a total of 4) in pre-intervention and post-intervention arms. Surrogates who viewed the video were comfortable with its content (81% very) and 81% would recommend the video. CPR preferences for patients at the time of ICU discharge/death were distributed as follows: pre-intervention: full code 78%, DNR 22%; post-intervention: full code 59%, DNR 41% (p=0.23).
Wu, Xiang Lan; Kim, Jong Ho; Koo, Heebeom; Bae, Sang Mun; Shin, Hyeri; Kim, Min Sang; Lee, Byung-Heon; Park, Rang-Woon; Kim, In-San; Choi, Kuiwon; Kwon, Ick Chan; Kim, Kwangmeyung; Lee, Doo Sung
2010-02-17
Herein, we prepared tumor-targeting peptide (AP peptide; CRKRLDRN)-conjugated, pH-responsive polymeric micelles (pH-PMs) for cancer therapy, combining active and pH-responsive tumor-targeting delivery in a single system. The active tumor-targeting and tumoral pH-responsive polymeric micelles were prepared by mixing AP peptide-conjugated PEG-poly(d,l-lactic acid) block copolymer (AP-PEG-PLA) into pH-responsive micelles of methyl ether poly(ethylene glycol) (MPEG)-poly(beta-amino ester) (PAE) block copolymer (MPEG-PAE). These mixed amphiphilic block copolymers self-assembled to form stable AP peptide-conjugated, pH-responsive AP-PEG-PLA/MPEG-PAE micelles (AP-pH-PMs) with an average size of 150 nm. The AP-pH-PMs containing 10 wt % of AP-PEG-PLA showed a sharp pH-dependent micellization/demicellization transition at tumoral acidic pH. They also presented a pH-dependent drug release profile at the acidic pH of 6.4. The fluorescent dye TRITC encapsulated in AP-pH-PMs (TRITC-AP-pH-PMs) showed higher tumor-specific targeting ability in an in vitro cancer cell culture system and in tumor-bearing mice in vivo, compared to control pH-responsive MPEG-PAE micelles. For cancer therapy, the anticancer drug doxorubicin (DOX) was efficiently encapsulated into the AP-pH-PMs (DOX-AP-pH-PMs) with high loading efficiency. DOX-AP-pH-PMs efficiently delivered the anticancer drug in MDA-MB231 human breast tumor-bearing mice, resulting in excellent anticancer therapeutic efficacy compared to free DOX and DOX-encapsulated MPEG-PAE micelles, indicating the excellent tumor-targeting ability of AP-pH-PMs. Therefore, these tumor-targeting peptide-conjugated, pH-responsive polymeric micelles have great potential for application in cancer therapy.
Scalable video transmission over Rayleigh fading channels using LDPC codes
NASA Astrophysics Data System (ADS)
Bansal, Manu; Kondi, Lisimachos P.
2005-03-01
In this paper, we investigate an important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining a good decoded video quality and resilience to channel impairments. Our system consists of the video codec based on 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of the constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. Cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use the product code structure consisting of a constant rate LDPC/CRC code across the rows of the `blocks' of source data and an erasure-correction systematic Reed-Solomon (RS) code as the column code. In both the schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. The rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions and both the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform the more conventional schemes such as those employing RCPC/CRC.
NASA Astrophysics Data System (ADS)
Palma, V.; Carli, M.; Neri, A.
2011-02-01
In this paper a Multi-view Distributed Video Coding scheme for mobile applications is presented. Specifically, a new fusion technique between temporal and spatial side information in the Zernike moments domain is proposed. Distributed video coding introduces a flexible architecture that enables the design of very low-complexity video encoders compared to their traditional counterparts. The main goal of our work is to generate at the decoder the side information that optimally blends temporal and inter-view data. Multi-view distributed coding performance strongly depends on the quality of the side information built at the decoder. To this end, a spatial view compensation/prediction in the Zernike moments domain is applied to improve its quality. Spatial and temporal motion activity are fused together to obtain the overall side information. The proposed method has been evaluated in terms of rate-distortion performance for different inter-view and temporal estimation quality conditions.
Anticancer effect and mechanism of polymer micelle-encapsulated quercetin on ovarian cancer
NASA Astrophysics Data System (ADS)
Gao, Xiang; Wang, Bilan; Wei, Xiawei; Men, Ke; Zheng, Fengjin; Zhou, Yingfeng; Zheng, Yu; Gou, Maling; Huang, Meijuan; Guo, Gang; Huang, Ning; Qian, Zhiyong; Wei, Yuquan
2012-10-01
Encapsulation of hydrophobic agents in polymer micelles can improve the water solubility of cargos, contributing to the development of novel drugs. Quercetin (QU) is a hydrophobic agent with potential anticancer activity. In this work, we encapsulated QU into biodegradable monomethoxy poly(ethylene glycol)-poly(ε-caprolactone) (MPEG-PCL) micelles and aimed to provide proof-of-principle for treating ovarian cancer with this nano-formulation of quercetin. These QU-loaded MPEG-PCL (QU/MPEG-PCL) micelles, with a drug loading of 6.9%, had a mean particle size of 36 nm, rendering quercetin completely dispersible in water. QU inhibited the growth of A2780S ovarian cancer cells in a dose-dependent manner in vitro. Intravenous administration of QU/MPEG-PCL micelles significantly suppressed the growth of established xenograft A2780S ovarian tumors by causing cancer cell apoptosis and inhibiting angiogenesis in vivo. Furthermore, the anticancer activity of quercetin on ovarian cancer cells was studied in vitro. Quercetin treatment induced apoptosis of A2780S cells, associated with activation of caspase-3 and caspase-9. MCL-1 downregulation, Bcl-2 downregulation, Bax upregulation and mitochondrial transmembrane potential change were observed, suggesting that quercetin may induce apoptosis of A2780S cells through the mitochondrial apoptotic pathway. In addition, quercetin treatment decreased phosphorylated p44/42 mitogen-activated protein kinase and phosphorylated Akt, contributing to inhibition of A2780S cell proliferation. Our data suggest that QU/MPEG-PCL micelles are a novel nano-formulation of quercetin with potential clinical application in ovarian cancer therapy.
Waran, Vicknes; Bahuri, Nor Faizal Ahmad; Narayanan, Vairavan; Ganesan, Dharmendra; Kadir, Khairul Azmi Abdul
2012-04-01
The purpose of this study was to validate and assess the accuracy and usefulness of sending short video clips in 3gp file format of an entire scan series of patients, using mobile telephones running on 3G-MMS technology, to enable consultation between junior doctors in a neurosurgical unit and the consultants on-call after office hours. A total of 56 consecutive patients with acute neurosurgical problems requiring urgent after-hours consultation during a 6-month period, prospectively had their images recorded and transmitted using the above method. The response to the diagnosis and the management plan by two neurosurgeons (who were not on site) based on the images viewed on a mobile telephone were reviewed by an independent observer and scored. In addition to this, a radiologist reviewed the original images directly on the hospital's Patients Archiving and Communication System (PACS) and this was compared with the neurosurgeons' response. Both neurosurgeons involved in this study were in complete agreement with their diagnosis. The radiologist disagreed with the diagnosis in only one patient, giving a kappa coefficient of 0.88, indicating an almost perfect agreement. The use of mobile telephones to transmit MPEG video clips of radiological images is very advantageous for carrying out emergency consultations in neurosurgery. The images accurately reflect the pathology in question, thereby reducing the incidence of medical errors from incorrect diagnosis, which otherwise may just depend on a verbal description.
Investigating the structure preserving encryption of high efficiency video coding (HEVC)
NASA Astrophysics Data System (ADS)
Shahid, Zafar; Puech, William
2013-02-01
This paper presents a novel method for the real-time protection of the emerging High Efficiency Video Coding (HEVC) standard. Structure-preserving selective encryption is performed in the CABAC entropy coding module of HEVC, which is significantly different from the CABAC entropy coding of H.264/AVC. In CABAC of HEVC, exponential Golomb coding is replaced by truncated Rice (TR) coding up to a specific value for binarization of transform coefficients. Selective encryption is performed using the AES cipher in cipher feedback mode on a plaintext of binstrings in a context-aware manner. The encrypted bitstream has exactly the same bit-rate and is format compliant. Experimental evaluation and security analysis of the proposed algorithm are performed on several benchmark video sequences containing different combinations of motion, texture and objects.
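The length-preserving property can be illustrated with the short sketch below, which encrypts a byte string of selected bins with AES in cipher feedback (CFB) mode using the third-party 'cryptography' package; which bins are encryptable, and how they are put back into the bitstream, is application logic that is not shown and is not taken from the paper.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_bins(plain_bins: bytes, key: bytes, iv: bytes) -> bytes:
    # CFB behaves like a stream cipher, so the ciphertext has exactly the same
    # length as the plaintext, which is what keeps the bit-rate unchanged.
    enc = Cipher(algorithms.AES(key), modes.CFB(iv)).encryptor()
    return enc.update(plain_bins) + enc.finalize()

key, iv = os.urandom(16), os.urandom(16)
cipher_bins = encrypt_bins(b"\x0b\x3a\x7f", key, iv)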
A reaction-diffusion-based coding rate control mechanism for camera sensor networks.
Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki
2010-01-01
A wireless camera sensor network is useful for surveillance and monitoring because of the visual information it provides and its easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by the considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism in which each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., a reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rates. Through simulation and practical experiments, we verify the effectiveness of our proposal.
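A toy one-dimensional sketch of the reaction-diffusion intuition follows: each node diffuses an activator value to its neighbours, reacts to locally detected targets, and maps the activator to a coding rate. The constants, the reaction term, and the rate mapping are illustrative assumptions, not the authors' model.

import numpy as np

def update(activator, target_presence, d=0.2, decay=0.1, gain=1.0):
    # Discrete Laplacian over neighbouring nodes (zero-flux boundaries).
    lap = np.roll(activator, 1) + np.roll(activator, -1) - 2 * activator
    lap[0] = activator[1] - activator[0]
    lap[-1] = activator[-2] - activator[-1]
    return activator + d * lap - decay * activator + gain * target_presence

def coding_rates(activator, r_min=64, r_max=2048):
    # Map the activator level to a per-node video coding rate in kbit/s.
    a = np.clip(activator, 0.0, 1.0)
    return r_min + a * (r_max - r_min)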
ERIC Educational Resources Information Center
Harrell, Mary L.; Bergbreiter, David E.
2017-01-01
The use of [superscript 1]H NMR spectroscopy to analyze the number-average molecular weight of a methoxy poly(ethylene glycol) (MPEG) and an acetate derivative of this MPEG is described. These analyses illustrate NMR principles associated with the chemical shift differences of protons in different environments, NMR integration, and the effect of…
Wang, Tao; Jiang, Xue-Jun; Lin, Tao; Ren, Shan; Li, Xiao-Yan; Zhang, Xian-Zheng; Tang, Qi-zhu
2009-09-01
Erythropoietin (EPO) can protect myocardium from ischemic injury, but it also plays an important role in promoting polycythaemia, with the attendant potential for thrombo-embolic complications. Local sustained delivery of bioactive agents directly to impaired tissues using biomaterials is an approach to limit systemic toxicity and improve the efficacy of therapies. The present study was performed to investigate whether local intramyocardial injection of EPO with a hydrogel could enhance the cardioprotective effect without causing polycythaemia after myocardial infarction (MI). To test the hypothesis, phosphate buffered solution (PBS), alpha-cyclodextrin/MPEG-PCL-MPEG hydrogel, recombinant human erythropoietin (rhEPO) in PBS, or rhEPO in hydrogel was injected into the infarcted area immediately after MI in rats. The hydrogel allowed a sustained release of EPO, which inhibited cell apoptosis and increased neovasculature formation, and subsequently reduced infarct size and improved cardiac function compared with the other groups. Notably, there was no evidence of polycythaemia from this therapy, with no differences in erythrocyte count or hematocrit compared with the animals that received PBS or blank hydrogel injection. In conclusion, intramyocardial delivery of rhEPO with alpha-cyclodextrin/MPEG-PCL-MPEG hydrogel may improve cardiac performance after MI without apparent adverse effects.
Countermeasures for unintentional and intentional video watermarking attacks
NASA Astrophysics Data System (ADS)
Deguillaume, Frederic; Csurka, Gabriela; Pun, Thierry
2000-05-01
In recent years, the rapidly growing digital multimedia market has revealed an urgent need for effective copyright protection mechanisms. Therefore, digital audio, image and video watermarking has recently become a very active area of research as a solution to this problem. Many important issues have been pointed out, one of them being robustness to non-intentional and intentional attacks. This paper studies some attacks and proposes countermeasures applied to videos. General attacks include lossy copying/transcoding such as MPEG compression and digital/analog (D/A) conversion, changes of frame rate, changes of display format, and geometrical distortions. More specific attacks are sequence editing, and statistical attacks such as averaging or collusion. The averaging attack consists of locally averaging consecutive frames to cancel the watermark. This attack works well against schemes which embed random independent marks into frames. In the collusion attack the watermark is estimated from single frames (based on image denoising), and averaged over different scenes for better accuracy. The estimated watermark is then subtracted from each frame. Collusion requires that the same mark is embedded into all frames. The proposed countermeasures first ensure robustness to general attacks by spread spectrum encoding in the frequency domain and by the use of an additional template. Secondly, a Bayesian criterion, evaluating the probability of a correctly decoded watermark, is used for rejection of outliers and to implement an algorithm against statistical attacks. The idea is to embed marks chosen at random from a finite set of marks into subsequences of video which are long enough to resist averaging attacks, but short enough to avoid collusion attacks. The Bayesian criterion is needed to select the correct mark at the decoding step. Finally, the paper presents experimental results showing the robustness of the proposed method.
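A toy sketch of the subsequence-based countermeasure is given below: one mark, drawn from a small finite set, is embedded into every frame of a short subsequence, and detection simply picks the best-correlating candidate (a plain argmax stands in for the Bayesian selection described above). Mark sizes, the embedding strength, and the subsequence length are illustrative.

import numpy as np

rng = np.random.default_rng(0)
MARKS = rng.standard_normal((4, 64, 64))          # finite set of candidate marks

def embed(frames, subseq_len=8, alpha=0.05):
    marked, mark_ids = [], []
    for start in range(0, len(frames), subseq_len):
        mid = int(rng.integers(len(MARKS)))       # one mark per subsequence
        mark_ids.append(mid)
        for f in frames[start:start + subseq_len]:
            marked.append(f + alpha * MARKS[mid])
    return marked, mark_ids

def detect(frame):
    # Correlate against every candidate mark and pick the strongest response.
    scores = [float((frame * m).sum()) for m in MARKS]
    return int(np.argmax(scores))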
Portrayal of tobacco in Mongolian language YouTube videos: policy gaps.
Tsai, Feng-Jen; Sainbayar, Bolor
2016-07-01
This study examined how effectively current policy measures control depictions of tobacco in Mongolian language YouTube videos. A search of YouTube videos using the Mongolian term for 'tobacco', and employing 'relevance' and 'view count' criteria, resulted in a total sample of 120 videos, from which 38 unique videos were coded and analysed. Most videos were antismoking public service announcements; however, analyses of viewing patterns showed that pro-smoking videos accounted for about two-thirds of all views. Pro-smoking videos were also perceived more positively and had a like:dislike ratio of 4.6 compared with 3.5 and 1.5, respectively, for the magic trick and antismoking videos. Although Mongolia prohibits tobacco advertising, 3 of the pro-smoking videos were made by a tobacco company; additionally, 1 pro-smoking video promoted electronic cigarettes. Given the popularity of Mongolian YouTube videos that promote smoking, policy changes are urgently required to control this medium, and more effectively protect youth and young adults from insidious tobacco marketing. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Performance comparison of AV1, HEVC, and JVET video codecs on 360 (spherical) video
NASA Astrophysics Data System (ADS)
Topiwala, Pankaj; Dai, Wei; Krishnan, Madhu; Abbas, Adeel; Doshi, Sandeep; Newman, David
2017-09-01
This paper compares the coding efficiency performance on 360 videos, of three software codecs: (a) AV1 video codec from the Alliance for Open Media (AOM); (b) the HEVC Reference Software HM; and (c) the JVET JEM Reference SW. Note that 360 video is especially challenging content, in that one codes full res globally, but typically looks locally (in a viewport), which magnifies errors. These are tested in two different projection formats ERP and RSP, to check consistency. Performance is tabulated for 1-pass encoding on two fronts: (1) objective performance based on end-to-end (E2E) metrics such as SPSNR-NN, and WS-PSNR, currently developed in the JVET committee; and (2) informal subjective assessment of static viewports. Constant quality encoding is performed with all the three codecs for an unbiased comparison of the core coding tools. Our general conclusion is that under constant quality coding, AV1 underperforms HEVC, which underperforms JVET. We also test with rate control, where AV1 currently underperforms the open source X265 HEVC codec. Objective and visual evidence is provided.
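For reference, the sketch below computes a WS-PSNR-style metric for equirectangular (ERP) frames by weighting each row with the cosine of its latitude, following the commonly cited ERP weighting; the exact JVET-defined variant and the SPSNR-NN metric may differ in details.

import numpy as np

def ws_psnr(ref, dist, max_val=255.0):
    h, w = ref.shape
    rows = np.arange(h)
    weights = np.cos((rows + 0.5 - h / 2) * np.pi / h)[:, None] * np.ones((1, w))
    err = (ref.astype(float) - dist.astype(float)) ** 2
    wmse = float((weights * err).sum() / weights.sum())
    return 10.0 * np.log10(max_val ** 2 / wmse)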
Spatial correlation-based side information refinement for distributed video coding
NASA Astrophysics Data System (ADS)
Taieb, Mohamed Haj; Chouinard, Jean-Yves; Wang, Demin
2013-12-01
Distributed video coding (DVC) architecture designs, based on distributed source coding principles, have benefited from significant progress lately, notably in terms of achievable rate-distortion performance. However, a significant performance gap still remains when compared to prediction-based video coding schemes such as H.264/AVC. This is mainly due to the non-ideal exploitation of the temporal correlation properties of the video sequence during the generation of side information (SI). In fact, decoder-side motion estimation provides only an approximation of the true motion. In this paper, a progressive DVC architecture is proposed, which exploits the spatial correlation of the video frames to improve the motion-compensated temporal interpolation (MCTI). Specifically, Wyner-Ziv (WZ) frames are divided into several spatially correlated groups that are then sent progressively to the receiver. SI refinement (SIR) is performed as these groups are decoded, thus providing more accurate SI for the subsequent groups. It is shown that the proposed progressive SIR method leads to significant improvements over the DISCOVER DVC codec as well as other SIR schemes recently introduced in the literature.
A robust coding scheme for packet video
NASA Technical Reports Server (NTRS)
Chen, Y. C.; Sayood, Khalid; Nelson, D. J.
1991-01-01
We present a layered packet video coding algorithm based on a progressive transmission scheme. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.
A robust coding scheme for packet video
NASA Technical Reports Server (NTRS)
Chen, Yun-Chung; Sayood, Khalid; Nelson, Don J.
1992-01-01
A layered packet video coding algorithm based on a progressive transmission scheme is presented. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.
Gold Sample Heating within the TEMPUS Electromagnetic Levitation Furnace
NASA Technical Reports Server (NTRS)
2003-01-01
A gold sample is heated by the TEMPUS electromagnetic levitation furnace on STS-94, 1997, MET:10/09:20 (approximate). The sequence shows the sample being positioned electromagnetically and starting to be heated to melting. TEMPUS stands for Tiegelfreies Elektromagnetisches Prozessieren unter Schwerelosigkeit (containerless electromagnetic processing under weightlessness). It was developed by the German Space Agency (DARA) for flight aboard Spacelab. The DARA project scientist was Igon Egry. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17 1997). DARA and NASA are exploring the possibility of flying an advanced version of TEMPUS on the International Space Station. (460KB, 14-second MPEG, screen 160 x 120 pixels; downlinked video, higher quality not available) A still JPG composite of this movie is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300190.html.
First Digit Law and Its Application to Digital Forensics
NASA Astrophysics Data System (ADS)
Shi, Yun Q.
Digital data forensics, which gathers evidence of data composition, origin, and history, is crucial in our digital world. Although this new research field is still in its infancy, it has started to attract increasing attention from the multimedia-security research community. This lecture addresses the first digit law and its applications to digital forensics. First, the Benford and generalized Benford laws, referred to as the first digit law, are introduced. Then, the application of the first digit law to detection of the JPEG compression history of a given BMP image and to detection of double JPEG compression is presented. Finally, applying the first digit law to detection of double MPEG video compression is discussed. It is expected that the first digit law may play an active role in other tasks of digital forensics. The lesson learned is that statistical models play an important role in digital forensics, and for a specific forensic task different models may provide different performance.
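A minimal sketch of a first-digit check is shown below: it compares an empirical first-significant-digit histogram against the Benford prediction p(d) = log10(1 + 1/d). Turning such a deviation into a compression-history detector requires the fitted generalized-Benford model and decision thresholds, which are not shown here.

import numpy as np

def first_digit(v):
    v = abs(v)
    while v >= 10:
        v /= 10
    while v < 1:
        v *= 10
    return int(v)

def first_digit_histogram(values):
    counts = np.bincount([first_digit(v) for v in values if v != 0], minlength=10)[1:]
    return counts / counts.sum()

benford = np.log10(1 + 1 / np.arange(1, 10))
observed = first_digit_histogram(np.random.default_rng(1).lognormal(size=10000).tolist())
deviation = float(np.abs(observed - benford).sum())   # small deviation suggests Benford-like data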
Coding visual features extracted from video sequences.
Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2014-05-01
Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.
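The intra/inter mode choice can be pictured with the toy sketch below: the inter mode codes the difference from the matched descriptor in the previous frame, and the cheaper of the two coded representations is kept. zlib stands in for the entropy coder, and this rate-only decision omits the rate-distortion optimization used in the paper.

import numpy as np, zlib

def code_descriptor(desc, prev_desc=None):
    intra = zlib.compress(desc.astype(np.int16).tobytes())
    if prev_desc is None:
        return "intra", intra
    inter = zlib.compress((desc - prev_desc).astype(np.int16).tobytes())
    return ("inter", inter) if len(inter) < len(intra) else ("intra", intra)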
NASA Astrophysics Data System (ADS)
Barnett, Barry S.; Bovik, Alan C.
1995-04-01
This paper presents a real-time full-motion video conferencing system based on the Visual Pattern Image Sequence Coding (VPISC) software codec. The prototype system hardware comprises two personal computers, two camcorders, two frame grabbers, and an Ethernet connection. The prototype system software has a simple structure. It runs under the Disk Operating System, and includes a user interface, a video I/O interface, an event-driven network interface, and a free-running or frame-synchronous video codec that also acts as the controller for the video and network interfaces. Two video coders have been tested in this system. Simple implementations of Visual Pattern Image Coding and VPISC have both proven to support full-motion video conferencing with good visual quality. Future work will concentrate on expanding this prototype to support the motion-compensated version of VPISC, as well as encompassing point-to-point modem I/O and multiple network protocols. The application will be ported to multiple hardware platforms and operating systems. The motivation for developing this prototype system is to demonstrate the practicality of software-based real-time video codecs. Furthermore, software video codecs are not only cheaper, but are more flexible system solutions because they enable different computer platforms to exchange encoded video information without requiring on-board protocol-compatible video codec hardware. Software-based solutions enable true low-cost video conferencing that fits the `open systems' model of interoperability that is so important for building portable hardware and software applications.
Panayides, Andreas; Antoniou, Zinonas C; Mylonas, Yiannos; Pattichis, Marios S; Pitsillides, Andreas; Pattichis, Constantinos S
2013-05-01
In this study, we describe an effective video communication framework for the wireless transmission of H.264/AVC medical ultrasound video over mobile WiMAX networks. Medical ultrasound video is encoded using diagnostically-driven, error resilient encoding, where quantization levels are varied as a function of the diagnostic significance of each image region. We demonstrate how our proposed system allows for the transmission of high-resolution clinical video that is encoded at the clinical acquisition resolution and can then be decoded with low-delay. To validate performance, we perform OPNET simulations of mobile WiMAX Medium Access Control (MAC) and Physical (PHY) layers characteristics that include service prioritization classes, different modulation and coding schemes, fading channels conditions, and mobility. We encode the medical ultrasound videos at the 4CIF (704 × 576) resolution that can accommodate clinical acquisition that is typically performed at lower resolutions. Video quality assessment is based on both clinical (subjective) and objective evaluations.
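A minimal sketch of the diagnostically driven quantization idea: macroblocks flagged as diagnostically significant receive a lower QP (finer quantization) than the background. The QP values and the binary region mask are illustrative assumptions, not the encoder settings used in the study.

import numpy as np

def qp_map(roi_mask, qp_roi=26, qp_background=38):
    # roi_mask: 2-D array with 1 for diagnostically significant macroblocks, 0 otherwise.
    return np.where(roi_mask > 0, qp_roi, qp_background)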
Public online information about tinnitus: A cross-sectional study of YouTube videos.
Basch, Corey H; Yin, Jingjing; Kollia, Betty; Adedokun, Adeyemi; Trusty, Stephanie; Yeboah, Felicia; Fung, Isaac Chun-Hai
2018-01-01
To examine the information about tinnitus contained in different video sources on YouTube. The 100 most widely viewed tinnitus videos were manually coded. Firstly, we identified the sources of upload: consumer, professional, television-based clip, and internet-based clip. Secondly, the videos were analyzed to ascertain what pertinent information they contained from a current National Institute on Deafness and Other Communication Disorders fact sheet. Of the videos, 42 were consumer-generated, 33 from media, and 25 from professionals. Collectively, the 100 videos were viewed almost 9 million times. The odds of mentioning "objective tinnitus" in professional videos were 9.58 times those from media sources [odds ratio (OR) = 9.58; 95% confidence interval (CI): 1.94, 47.42; P = 0.01], whereas these odds in consumer videos were 51% of media-generated videos (OR = 0.51; 95% CI: 0.20, 1.29; P = 0.16). The odds that the purpose of a video was to sell a product or service were nearly the same for both consumer and professional videos. Consumer videos were found to be 4.33 times as likely to carry a theme about an individual's own experience with tinnitus (OR = 4.33; 95% CI: 1.62, 11.63; P = 0.004) as media videos. Of the top 100 viewed videos on tinnitus, most were uploaded by consumers, sharing individuals' experiences. Actions are needed to make scientific medical information more prominently available and accessible on YouTube and other social media.
Public Online Information About Tinnitus: A Cross-Sectional Study of YouTube Videos
Basch, Corey H.; Yin, Jingjing; Kollia, Betty; Adedokun, Adeyemi; Trusty, Stephanie; Yeboah, Felicia; Fung, Isaac Chun-Hai
2018-01-01
Purpose: To examine the information about tinnitus contained in different video sources on YouTube. Materials and Methods: The 100 most widely viewed tinnitus videos were manually coded. Firstly, we identified the sources of upload: consumer, professional, television-based clip, and internet-based clip. Secondly, the videos were analyzed to ascertain what pertinent information they contained from a current National Institute on Deafness and Other Communication Disorders fact sheet. Results: Of the videos, 42 were consumer-generated, 33 from media, and 25 from professionals. Collectively, the 100 videos were viewed almost 9 million times. The odds of mentioning “objective tinnitus” in professional videos were 9.58 times those from media sources [odds ratio (OR) = 9.58; 95% confidence interval (CI): 1.94, 47.42; P = 0.01], whereas these odds in consumer videos were 51% of media-generated videos (OR = 0.51; 95% CI: 0.20, 1.29; P = 0.16). The odds that the purpose of a video was to sell a product or service were nearly the same for both consumer and professional videos. Consumer videos were found to be 4.33 times as likely to carry a theme about an individual’s own experience with tinnitus (OR = 4.33; 95% CI: 1.62, 11.63; P = 0.004) as media videos. Conclusions: Of the top 100 viewed videos on tinnitus, most were uploaded by consumers, sharing individuals’ experiences. Actions are needed to make scientific medical information more prominently available and accessible on YouTube and other social media. PMID:29457600
Frohlich, Dennis Owen; Zmyslinski-Seelig, Anne
2012-01-01
The purpose of this study was to explore the types of social support messages YouTube users posted on medical videos. Specifically, the study compared messages posted on inflammatory bowel disease-related videos and ostomy-related videos. Additionally, the study analyzed the differences in social support messages posted on lay-created videos and professionally-created videos. Conducting a content analysis, the researchers unitized the comments on each video; the total number of thought units amounted to 5,960. Researchers coded each thought unit through the use of a coding scheme modified from a previous study. YouTube users posted informational support messages most frequently (65.1%), followed by emotional support messages (18.3%), and finally, instrumental support messages (8.2%).
Extensions under development for the HEVC standard
NASA Astrophysics Data System (ADS)
Sullivan, Gary J.
2013-09-01
This paper discusses standardization activities for extending the capabilities of the High Efficiency Video Coding (HEVC) standard - the first edition of which was completed in early 2013. These near-term extensions are focused on three areas: range extensions (such as enhanced chroma formats, monochrome video, and increased bit depth), bitstream scalability extensions for spatial and fidelity scalability, and 3D video extensions (including stereoscopic/multi-view coding, and probably also depth map coding and combinations thereof). Standardization extensions on each of these topics will be completed by mid-2014, and further work beyond that timeframe is also discussed.
Implementation issues in source coding
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Chen, Yun-Chung; Hadenfeldt, A. C.
1989-01-01
An edge preserving image coding scheme which can be operated in both a lossy and a lossless manner was developed. The technique is an extension of the lossless encoding algorithm developed for the Mars observer spectral data. It can also be viewed as a modification of the DPCM algorithm. A packet video simulator was also developed from an existing modified packet network simulator. The coding scheme for this system is a modification of the mixture block coding (MBC) scheme described in the last report. Coding algorithms for packet video were also investigated.
The NonCommissioned Officer Professional Development Study. Volume 2. Appendices
1986-02-01
...network. Simply stated, it is a televised training medium using satellites. The SOA mission is to examine the use and effectiveness of video-teletraining...of a video disc player under microprocessor control and a TV monitor type display, and uses a variety of input/output devices to permit delivery of
Bari, Attia; Khan, Rehan Ahmed; Jabeen, Uzma; Rathore, Ahsan Waheed
2017-01-01
Objective: To analyze the communication skills of pediatric postgraduate residents in the clinical encounter by using video recordings. Methods: This qualitative exploratory research was conducted through video recording at The Children’s Hospital Lahore, Pakistan. Residents who had attended the mandatory communication skills workshop offered by CPSP were included. The video recording of the clinical encounter was done by a trained audiovisual person while the resident was interacting with the patient. Data were analyzed by thematic analysis. Results: Initially, on open coding, 36 codes emerged, and through axial and selective coding these were condensed to 17 subthemes. From these, four main themes emerged: (1) Courteous and polite attitude, (2) Marginal nonverbal communication skills, (3) Power game/Ignoring child participation and (4) Patient as medical object/Instrumental behaviour. All residents treated the patient as a medical object to reach the right diagnosis and ignored the patient as a human being. Doctors played the dominant role, and the residents displayed only marginal nonverbal communication skills, in the form of a lack of social touch and of appropriate eye contact due to documenting notes. A brief non-medical interaction for rapport building at the beginning of the interaction was missing, and there was a lack of child involvement. Conclusion: Paediatric postgraduate residents were polite while communicating with parents and the child but lacked good nonverbal communication skills. The communication pattern in our study was mostly one-way, showing the doctor’s instrumental behaviour and ignoring child participation. PMID:29492050
Portrayal of Smokeless Tobacco in YouTube Videos
Augustson, Erik M.; Backinger, Cathy L.
2012-01-01
Objectives: Videos of smokeless tobacco (ST) on YouTube are abundant and easily accessible, yet no studies have examined the content of ST videos. This study assesses the overall portrayal, genre, and messages of ST YouTube videos. Methods: In August 2010, researchers identified the top 20 search results on YouTube by “relevance” and “view count” for the following search terms: “ST,” “chewing tobacco,” “snus,” and “Skoal.” After eliminating videos that were not about ST (n = 26), non-English (n = 14), or duplicate (n = 42), a final sample of 78 unique videos was coded for overall portrayal, genre, and various content measures. Results: Among the 78 unique videos, 15.4% were anti-ST, while 74.4% were pro-ST. Researchers were unable to determine the portrayal of ST in the remaining 10.3% of videos because they involved excessive or “sensationalized” use of the ST, which could be interpreted either positively or negatively, depending on the viewer. The most common ST genre was positive video diaries (or “vlogs”), which made up almost one third of the videos (29.5%), followed by promotional advertisements (20.5%) and anti-ST public service announcements (12.8%). While YouTube is intended for user-generated content, 23.1% of the videos were created by professional organizations. Conclusions: These results demonstrate that ST videos on YouTube are overwhelmingly pro-ST. More research is needed to determine who is viewing these ST YouTube videos and how they may affect people’s knowledge, attitudes, and behaviors regarding ST use. PMID:22080585
Indexing, Browsing, and Searching of Digital Video.
ERIC Educational Resources Information Center
Smeaton, Alan F.
2004-01-01
Presents a literature review that covers the following topics related to indexing, browsing, and searching of digital video: video coding and standards; conventional approaches to accessing digital video; automatically structuring and indexing digital video; searching, browsing, and summarization; measurement and evaluation of the effectiveness of…
NASA Astrophysics Data System (ADS)
Lei, Ted Chih-Wei; Tseng, Fan-Shuo
2017-07-01
This paper addresses the problem of the high computational complexity of decoding in traditional Wyner-Ziv video coding (WZVC). The key focus is the migration of two traditionally computation-intensive encoder algorithms, namely motion estimation and mode decision. In order to reduce the computational burden in this process, the proposed architecture adopts a partial boundary matching algorithm and four flexible types of block mode decision at the decoder. This approach does away with the need for motion estimation and mode decision at the encoder. The experimental results show that the proposed padding-block-based WZVC not only decreases decoder complexity to approximately one hundredth that of state-of-the-art DISCOVER decoding but also outperforms the DISCOVER codec by up to 3 to 4 dB.
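A rough sketch of partial boundary matching is given below: a candidate block in the reference frame is scored only against the already reconstructed top and left boundary pixels of the missing block, so the decoder needs no transmitted motion vectors. The search range, block size, and SAD scoring are illustrative, not the paper's exact algorithm or its mode-decision types.

import numpy as np

def pbm_predict(ref, recon, y, x, block=8, search=8):
    top = recon[y - 1, x:x + block]               # reconstructed boundary above the block
    left = recon[y:y + block, x - 1]              # reconstructed boundary to the left
    best, best_cost = None, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if ry < 1 or rx < 1 or ry + block > ref.shape[0] or rx + block > ref.shape[1]:
                continue
            cost = float(np.abs(ref[ry - 1, rx:rx + block] - top).sum()
                         + np.abs(ref[ry:ry + block, rx - 1] - left).sum())
            if cost < best_cost:
                best, best_cost = ref[ry:ry + block, rx:rx + block], cost
    return best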
Open Source Subtitle Editor Software Study for Section 508 Close Caption Applications
NASA Technical Reports Server (NTRS)
Murphy, F. Brandon
2013-01-01
This paper focuses on a specific item within the NASA Electronic Information Accessibility Policy - multimedia presentations shall have synchronized captions - thus making information accessible to a person with hearing impairment. Synchronized captions assist a person with a hearing or cognitive disability to access the same information as everyone else. This paper covers the research and implementation of CC (subtitle) support for video multimedia. The goal of this research is to identify the best available open-source (free) software to satisfy the synchronized captions requirement and achieve savings, while meeting the security requirements for Government information integrity and assurance. CC and subtitling are processes that display text within a video to provide additional or interpretive information for those who may need it or those who choose it. Closed captions typically show the transcription of the audio portion of a program (video) as it occurs (either verbatim or in an edited form), sometimes including non-speech elements (such as sound effects). The transcript can be provided by a third-party source or can be extracted word for word from the video. This feature can be made available for videos in two forms: either Soft-Coded or Hard-Coded. Soft-Coded is the optional form of CC, where the viewer can choose to turn captions on or off. Most of the time, when using the Soft-Coded option, the transcript is also provided to the viewer alongside the video. This option is subject to compromise, because the transcript is merely a text file that can be changed by anyone who has access to it. With this option the integrity of the CC is at the mercy of the user. Hard-Coded CC is a more permanent form of CC: a Hard-Coded CC transcript is embedded within the video, without the option of removal.
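As one concrete example of a soft-coded caption container, the sketch below writes a SubRip (.srt) file from a list of (start, end, text) cues; SRT is only one of several widely supported formats and is used here as an illustration, not as the format selected by this study.

def to_timestamp(t):
    ms = int(round(t * 1000))
    h, rem = divmod(ms, 3600000)
    m, rem = divmod(rem, 60000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def write_srt(cues, path):
    # cues: list of (start_seconds, end_seconds, text)
    with open(path, "w", encoding="utf-8") as f:
        for i, (start, end, text) in enumerate(cues, 1):
            f.write(f"{i}\n{to_timestamp(start)} --> {to_timestamp(end)}\n{text}\n\n")

write_srt([(0.0, 2.5, "Welcome to the briefing."), (2.5, 5.0, "Captions are enabled.")], "demo.srt")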
NASA Astrophysics Data System (ADS)
Banks, David; Wiley, Anthony; Catania, Nicolas; Coles, Alastair N.; Smith, Duncan; Baynham, Simon; Deliot, Eric; Chidzey, Rod
1998-02-01
In this paper we describe the work being done at HP Labs Bristol in the area of home networks and gateways. This work is based on the idea of breaking open the set-top box by physically separating the access-network-specific functions from the application-specific functions. The access-network-specific functions reside in an access network gateway that can be shared by many end-user devices. The first section of the paper presents the philosophy behind this approach. The end-user devices and the access network gateways must be interconnected by a high-bandwidth network which can offer a bounded-delay service for delay-sensitive traffic. We advocate the use of IEEE 1394 for this network, and the next section of the paper gives a brief introduction to this technology. We then describe a prototype gateway, compliant with digital video broadcasting by satellite (DVB-S), that we have built. This gateway could be used, for example, by a PC for receiving a data service or by a digital TV for receiving an MPEG-2 video service. A control architecture is then presented which uses a PC application to provide a web-based user interface to the system. Finally, we provide details of our work on extending the reach of IEEE 1394 and its standardization status.
A streaming-based solution for remote visualization of 3D graphics on mobile devices.
Lamberti, Fabrizio; Sanna, Andrea
2007-01-01
Mobile devices such as Personal Digital Assistants, Tablet PCs, and cellular phones have greatly enhanced user capability to connect to remote resources. Although a large set of applications are now available bridging the gap between desktop and mobile devices, visualization of complex 3D models is still a task hard to accomplish without specialized hardware. This paper proposes a system where a cluster of PCs, equipped with accelerated graphics cards managed by the Chromium software, is able to handle remote visualization sessions based on MPEG video streaming involving complex 3D models. The proposed framework allows mobile devices such as smart phones, Personal Digital Assistants (PDAs), and Tablet PCs to visualize objects consisting of millions of textured polygons and voxels at a frame rate of 30 fps or more depending on hardware resources at the server side and on multimedia capabilities at the client side. The server is able to concurrently manage multiple clients computing a video stream for each one; resolution and quality of each stream is tailored according to screen resolution and bandwidth of the client. The paper investigates in depth issues related to latency time, bit rate and quality of the generated stream, screen resolutions, as well as frames per second displayed.
Bridging the semantic gap in sports
NASA Astrophysics Data System (ADS)
Li, Baoxin; Errico, James; Pan, Hao; Sezan, M. Ibrahim
2003-01-01
One of the major challenges facing current media management systems and the related applications is the so-called "semantic gap" between the rich meaning that a user desires and the shallowness of the content descriptions that are automatically extracted from the media. In this paper, we address the problem of bridging this gap in the sports domain. We propose a general framework for indexing and summarizing sports broadcast programs. The framework is based on a high-level model of sports broadcast video using the concept of an event, defined according to domain-specific knowledge for different types of sports. Within this general framework, we develop automatic event detection algorithms that are based on automatic analysis of the visual and aural signals in the media. We have successfully applied the event detection algorithms to different types of sports including American football, baseball, Japanese sumo wrestling, and soccer. Event modeling and detection contribute to the reduction of the semantic gap by providing rudimentary semantic information obtained through media analysis. We further propose a novel approach, which makes use of independently generated rich textual metadata, to fill the gap completely through synchronization of the information-laden textual data with the basic event segments. An MPEG-7 compliant prototype browsing system has been implemented to demonstrate semantic retrieval and summarization of sports video.
Xanthan gum stabilized PEGylated gold nanoparticles for improved delivery of curcumin in cancer
NASA Astrophysics Data System (ADS)
Swami Muddineti, Omkara; Kumari, Preeti; Ajjarapu, Srinivas; Manish Lakhani, Prit; Bahl, Rishabh; Ghosh, Balaram; Biswas, Swati
2016-08-01
In recent years, gold nanoparticles (AuNPs) have received immense interest in various biomedical applications including drug delivery, photothermal ablation of cancer and as an imaging agent for cancer diagnosis. However, the synthesis of AuNPs poses challenges due to the poor reproducibility and stability of the colloidal system. In the present work, we developed a one-step, facile procedure for the synthesis of AuNPs from hydrogen tetrachloroaurate (III) hydrate (HAuCl4·3H2O) by using ascorbic acid and xanthan gum (XG) as reducing agent and stabilizer, respectively. The effects of the concentrations of HAuCl4·3H2O, ascorbic acid and methoxy polyethylene glycol-thiol (mPEG800-SH) were optimized, and it was observed that stable AuNPs were formed at concentrations of 0.25 mM, 50 μM and 1 mM for HAuCl4·3H2O, ascorbic acid, and mPEG800-SH, respectively. The XG-stabilized, deep red wine colored AuNPs (XG-AuNPs) were obtained by dropwise addition of an aqueous solution of ascorbic acid (50 mM) and XG (1.5 mg ml-1). Synthesized XG-AuNPs showed λmax at 540 nm and a mean hydrodynamic diameter of 80 ± 3 nm. PEGylation was performed with mPEG800-SH to obtain PEGylated XG-AuNPs (PX-AuNPs) and confirmed by Ellman's assay. No significant shift was observed in λmax or hydrodynamic diameter between XG-AuNPs and PX-AuNPs. Colloidal stability of PX-AuNPs was studied in normal saline, buffers within a pH range of 1.2-7.4, DMEM complete medium and under normal storage conditions at 4 °C. Further, water-soluble curcumin was prepared using PVP-K30 as a solid dispersion and loaded onto PX-AuNPs (CPX-AuNPs), and evaluated for cellular uptake and cytotoxicity in murine melanoma (B16F10) cells. Time- and concentration-dependent studies using CPX-AuNPs showed efficient uptake and decreased cell viability compared to free curcumin.
Video Transmission for Third Generation Wireless Communication Systems
Gharavi, H.; Alamouti, S. M.
2001-01-01
This paper presents a twin-class unequal protected video transmission system over wireless channels. Video partitioning based on a separation of the Variable Length Coded (VLC) Discrete Cosine Transform (DCT) coefficients within each block is considered for constant bitrate transmission (CBR). In the splitting process the fraction of bits assigned to each of the two partitions is adjusted according to the requirements of the unequal error protection scheme employed. Subsequently, partitioning is applied to the ITU-T H.263 coding standard. As a transport vehicle, we have considered one of the leading third generation cellular radio standards known as WCDMA. A dual-priority transmission system is then invoked on the WCDMA system where the video data, after being broken into two streams, is unequally protected. We use a very simple error correction coding scheme for illustration and then propose more sophisticated forms of unequal protection of the digitized video signals. We show that this strategy results in a significantly higher quality of the reconstructed video data when it is transmitted over time-varying multipath fading channels. PMID:27500033
Grubbs, Kathleen M; Fortney, John C; Dean, Tisha; Williams, James S; Godleski, Linda
2015-07-01
This study compares the mental health diagnoses of encounters delivered face to face and via interactive video in the Veterans Healthcare Administration (VHA). We compiled 1 year of national-level VHA administrative data for Fiscal Year 2012 (FY12). Mental health encounters were those with both a VHA Mental Health Stop Code and a Mental Health Diagnosis (n=11,906,114). Interactive video encounters were identified as those with a Mental Health Stop Code, paired with a VHA Telehealth Secondary Stop Code. Primary diagnoses were grouped into posttraumatic stress disorder (PTSD), depression, anxiety, bipolar disorder, psychosis, drug use, alcohol use, and other. In FY12, 1.5% of all mental health encounters were delivered via interactive video. Compared with face-to-face encounters, a larger percentage of interactive video encounters was for PTSD, depression, and anxiety, whereas a smaller percentage was for alcohol use, drug use, or psychosis. Providers and patients may feel more comfortable treating depression and anxiety disorders than substance use or psychosis via interactive video.
Guo, Fuqiang; Shang, Jiajia; Zhao, Hai; Lai, Kangrong; Li, Yang; Fan, Zhongxiong; Hou, Zhenqing; Su, Guanghao
2017-12-01
As one type of nanomedicine delivery system (NDS), drug nanocrystals exhibit an excellent anticancer effect. Recently, differences in the internalization mechanisms and subcellular localization of drug nanocrystals and small-molecule free drugs have drawn much attention. In this paper, paclitaxel (PTX) as a model anticancer drug was directly labeled with 4-chloro-7-nitro-1,2,3-benzoxadiazole (NBD-Cl) to form a drug-fluorophore conjugate (PTX-NBD; Ma et al. (2016) and Wang et al. (2016) [1,2]). PTX-NBD was synthesized by nucleophilic substitution reaction of PTX with NBD-Cl in high yield and characterized by fluorescence, XRD, ESI-MS, and FT-IR analysis. Subsequently, cube-shaped PTX-NBD nanocrystals were prepared with an antisolvent method followed by surface functionalization with SPC and MPEG-DSPE. The obtained specifically shaped PTX-NBD@PC-PEG NCs had a hydrodynamic particle size of ∼50 nm, excellent colloidal stability, and a high drug-loading content of ∼64%. Moreover, in comparison with free PTX-NBD and sphere-shaped PTX-NBD nanocrystals with surface functionalization of SPC and MPEG-DSPE (PTX-NBD@PC-PEG NSs), PTX-NBD@PC-PEG NCs remarkably reduced burst release and improved cellular uptake efficiency and in vitro cancer cell killing ability. In summary, this work highlights the potential of theranostic prodrug nanocrystals based on drug-fluorophore conjugates for cell imaging and chemotherapy. Copyright © 2017 Elsevier B.V. All rights reserved.
Zhang, Wei; Du, Zhiping; Chang, Chien-Hsiang; Wang, Guoyong
2009-09-15
The comb-like surfactants, poly(styrene-co-maleic anhydride)-g-(poly(ethylene glycol) monomethyl ether), poly(St-co-MA)-g-(MPEG), have been prepared using a macromonomer approach to get controlled molecular structures. The macromonomer (MAMPEG) was obtained by esterification of poly(ethylene glycol) monomethyl ether with maleic anhydride. Poly(St-co-MA)-g-(MPEG) with various molar ratios of St to MAMPEG (R) were then constructed by radical copolymerization. The comb-like structures of the surfactants were confirmed by infrared and (1)H nuclear magnetic resonance spectroscopy. It is found from gel permeation chromatography characterization that the molecular weight of the surfactants increases as R increases. The polydispersity index was in the range between 1.4 and 2.0 in all the cases. The surfactants with a higher St percentage are less soluble in water due to aggregation. The value of critical aggregation concentration (CAC) and the surface tension at the CAC (gamma(CAC)) decrease as R increases. The steady-shear measurements show that the surfactant solutions at 50 g/L are dilatant fluids. In addition, it appears that there are two break points in the viscosity-shear rate curve. Both break points increase with increasing R. It can therefore be concluded that the properties of comb-like surfactants poly(St-co-MA)-g-(MPEG) are related to molecular structure. The results demonstrate that the properties of these comb-like surfactants can be tailored through appropriate molecular design.
Scene-aware joint global and local homographic video coding
NASA Astrophysics Data System (ADS)
Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.
2016-09-01
Perspective motion is commonly represented in video content that is captured and compressed for various applications including cloud gaming, vehicle and aerial monitoring, etc. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, either due to low prediction accuracy or excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (with equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed a bit rate savings ranging from 3.7 to 9.1%.
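As a rough illustration of the global-plus-local signalling described above, the following Python sketch builds the plane-induced homography H = K(R - t n^T / d)K^-1 from frame-level camera parameters (K, R, t) and a block-level plane (n, d), and warps one block position with it. All numbers and variable names are illustrative assumptions, not values taken from the paper.

import numpy as np

def plane_induced_homography(K, R, t, n, d):
    """Homography mapping reference-view pixels to the current view for
    points on the plane n^T X = d (plane in the reference camera frame).
    Standard multi-view geometry result: H = K (R - t n^T / d) K^{-1}."""
    H = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)
    return H / H[2, 2]                         # normalize so H[2, 2] == 1

def warp_point(H, x, y):
    """Apply the homography to one pixel position (homogeneous coordinates)."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Illustrative global (frame-level) camera parameters and one local
# (block-level) plane, mimicking the two levels of signalling.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
theta = np.deg2rad(1.0)                        # small camera rotation
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.05, 0.0, 0.0])                 # small camera translation
n, d = np.array([0.0, 0.0, 1.0]), 10.0         # block's plane: z = 10

H = plane_induced_homography(K, R, t, n, d)
print(warp_point(H, 640.0, 360.0))             # predicted position of the block centre

In the proposed scheme the encoder would search the three plane parameters per block with the fast gradient-based approach; here the plane is simply fixed for demonstration.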
Low-complexity video encoding method for wireless image transmission in capsule endoscope.
Takizawa, Kenichi; Hamaguchi, Kiyoshi
2010-01-01
This paper presents a low-complexity video encoding method applicable to wireless image transmission in capsule endoscopes. The encoding method is based on Wyner-Ziv theory, in which correlated information that a conventional encoder would exploit at the transmitter is instead exploited as side information at the receiver. Complex processes in video encoding, such as motion estimation, are therefore moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process reduces to decimating the coded original data through channel coding. We provide a performance evaluation for a low-density parity-check (LDPC) coding method over the AWGN channel.
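A minimal sketch of this syndrome-style "decimation" idea (distributed source coding with a parity-check matrix), assuming a toy random parity-check matrix rather than the LDPC code actually evaluated in the paper:

import numpy as np

def syndrome(H, x):
    """Syndrome s = H x (mod 2): the only data the low-complexity encoder keeps."""
    return (H @ x) % 2

# Toy parity-check matrix and one 'bit-plane' of quantized pixel data.
rng = np.random.default_rng(0)
H = rng.integers(0, 2, size=(4, 8))            # 4 syndrome bits for 8 source bits
x = rng.integers(0, 2, size=8)                 # bit-plane extracted in the capsule

s = syndrome(H, x)                             # transmitted instead of x itself
print("source bits :", x)
print("syndrome    :", s, "(rate 4/8 in this toy example)")

The receiver would combine the syndrome with side information (e.g. previously decoded frames) and iterative LDPC decoding to recover x; that decoder-side machinery is omitted here.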
Improved Techniques for Video Compression and Communication
ERIC Educational Resources Information Center
Chen, Haoming
2016-01-01
Video compression and communication has been an important field over the past decades and critical for many applications, e.g., video on demand, video-conferencing, and remote education. In many applications, providing low-delay and error-resilient video transmission and increasing the coding efficiency are two major challenges. Low-delay and…
Low-complexity transcoding algorithm from H.264/AVC to SVC using data mining
NASA Astrophysics Data System (ADS)
Garrido-Cantos, Rosario; De Cock, Jan; Martínez, Jose Luis; Van Leuven, Sebastian; Cuenca, Pedro; Garrido, Antonio
2013-12-01
Nowadays, networks and terminals with diverse characteristics of bandwidth and capabilities coexist. To ensure a good quality of experience, this diverse environment demands adaptability of the video stream. In general, video contents are compressed to save storage capacity and to reduce the bandwidth required for its transmission. Therefore, if these compressed video streams were compressed using scalable video coding schemes, they would be able to adapt to those heterogeneous networks and a wide range of terminals. Since the majority of the multimedia contents are compressed using H.264/AVC, they cannot benefit from that scalability. This paper proposes a low-complexity algorithm to convert an H.264/AVC bitstream without scalability to scalable bitstreams with temporal scalability in baseline and main profiles by accelerating the mode decision task of the scalable video coding encoding stage using machine learning tools. The results show that when our technique is applied, the complexity is reduced by 87% while maintaining coding efficiency.
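The mode-decision acceleration could be pictured roughly as follows: a classifier trained offline on features decoded from the H.264/AVC bitstream restricts the SVC mode search. This is only a schematic sketch with invented feature names, labels and mode lists; the paper's actual data-mining models and feature sets are not reproduced here.

from sklearn.tree import DecisionTreeClassifier
import numpy as np

# Hypothetical features gathered while decoding the H.264/AVC stream:
# [macroblock mode index, motion-vector magnitude, residual energy]
rng = np.random.default_rng(1)
X_train = rng.random((500, 3))
# Hypothetical label: 1 -> evaluate only the cheap 'skip/direct' SVC modes,
#                     0 -> run the full mode decision.
y_train = (X_train[:, 2] < 0.3).astype(int)

clf = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)

def modes_to_test(avc_features):
    """Restrict the SVC mode search based on decoded H.264/AVC statistics."""
    if clf.predict([avc_features])[0] == 1:
        return ["SKIP", "DIRECT"]              # fast path
    return ["SKIP", "DIRECT", "16x16", "16x8", "8x16", "8x8", "INTRA"]

print(modes_to_test([0.5, 0.2, 0.1]))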
NASA Astrophysics Data System (ADS)
Tsifouti, A.; Triantaphillidou, S.; Larabi, M. C.; Doré, G.; Bilissi, E.; Psarrou, A.
2015-01-01
In this investigation we study the effects of compression and frame rate reduction on the performance of four video analytics (VA) systems utilizing a low complexity scenario, such as the Sterile Zone (SZ). Additionally, we identify the most influential scene parameters affecting the performance of these systems. The SZ scenario is a scene consisting of a fence, not to be trespassed, and an area with grass. The VA system needs to alarm when there is an intruder (attack) entering the scene. The work includes testing of the systems with uncompressed and compressed (using H.264/MPEG-4 AVC at 25 and 5 frames per second) footage, consisting of quantified scene parameters. The scene parameters include descriptions of scene contrast, camera to subject distance, and attack portrayal. Additional footage, including only distractions (no attacks) is also investigated. Results have shown that every system has performed differently for each compression/frame rate level, whilst overall, compression has not adversely affected the performance of the systems. Frame rate reduction has decreased performance and scene parameters have influenced the behavior of the systems differently. Most false alarms were triggered with a distraction clip, including abrupt shadows through the fence. Findings could contribute to the improvement of VA systems.
On scalable lossless video coding based on sub-pixel accurate MCTF
NASA Astrophysics Data System (ADS)
Yea, Sehoon; Pearlman, William A.
2006-01-01
We propose two approaches to scalable lossless coding of motion video. They achieve SNR-scalable bitstream up to lossless reconstruction based upon the subpixel-accurate MCTF-based wavelet video coding. The first approach is based upon a two-stage encoding strategy where a lossy reconstruction layer is augmented by a following residual layer in order to obtain (nearly) lossless reconstruction. The key advantages of our approach include an 'on-the-fly' determination of bit budget distribution between the lossy and the residual layers, freedom to use almost any progressive lossy video coding scheme as the first layer and an added feature of near-lossless compression. The second approach capitalizes on the fact that we can maintain the invertibility of MCTF with an arbitrary sub-pixel accuracy even in the presence of an extra truncation step for lossless reconstruction thanks to the lifting implementation. Experimental results show that the proposed schemes achieve compression ratios not obtainable by intra-frame coders such as Motion JPEG-2000 thanks to their inter-frame coding nature. Also they are shown to outperform the state-of-the-art non-scalable inter-frame coder H.264 (JM) lossless mode, with the added benefit of bitstream embeddedness.
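The invertibility-under-truncation argument of the second approach can be illustrated with a temporal Haar transform in lifting form (motion alignment omitted for brevity): because each lifting step is undone exactly in reverse order, the integer truncation does not break losslessness. A minimal sketch:

import numpy as np

def mctf_forward(a, b):
    """One temporal Haar decomposition step in lifting form with integer
    truncation.  a, b are two (motion-aligned) frames as integer arrays."""
    h = b - a                                   # predict step: high-pass band
    l = a + np.floor(h / 2).astype(h.dtype)     # update step: low-pass band
    return l, h

def mctf_inverse(l, h):
    """Exact inverse: the same truncated term is simply subtracted again."""
    a = l - np.floor(h / 2).astype(h.dtype)
    b = h + a
    return a, b

rng = np.random.default_rng(0)
f0 = rng.integers(0, 256, size=(4, 4), dtype=np.int32)
f1 = rng.integers(0, 256, size=(4, 4), dtype=np.int32)
L, H = mctf_forward(f0, f1)
r0, r1 = mctf_inverse(L, H)
assert np.array_equal(f0, r0) and np.array_equal(f1, r1)   # lossless round trip

The same reasoning carries over when the predict step uses sub-pixel accurate motion-compensated interpolation, which is the point exploited in the paper.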
Novel Integration of Frame Rate Up Conversion and HEVC Coding Based on Rate-Distortion Optimization.
Guo Lu; Xiaoyun Zhang; Li Chen; Zhiyong Gao
2018-02-01
Frame rate up conversion (FRUC) can improve visual quality by interpolating new intermediate frames. However, high-frame-rate videos produced by FRUC suffer either from higher bitrate consumption or from annoying artifacts in the interpolated frames. In this paper, a novel integration framework of FRUC and high efficiency video coding (HEVC) is proposed based on rate-distortion optimization, so that the interpolated frames can be reconstructed at the encoder side with low bitrate cost and high visual quality. First, a joint motion estimation (JME) algorithm is proposed to obtain robust motion vectors, which are shared between FRUC and video coding. Moreover, JME is embedded into the coding loop and employs the original motion search strategy of HEVC coding. Then, frame interpolation is formulated as a rate-distortion optimization problem, where both the coding bitrate consumption and visual quality are taken into account. Due to the absence of original frames, the distortion model for interpolated frames is established according to motion vector reliability and coding quantization error. Experimental results demonstrate that the proposed framework can achieve a 21%-42% reduction in BDBR compared with the traditional method of FRUC cascaded with coding.
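The per-block decision in such a framework can be viewed as minimizing the usual Lagrangian cost J = D + lambda*R over the candidate ways of producing an interpolated block. A toy sketch with invented distortion and bit numbers (the paper's actual distortion model is based on motion-vector reliability and quantization error, which is not reproduced here):

def rd_cost(distortion, bits, lam):
    """Lagrangian cost J = D + lambda * R used for rate-distortion decisions."""
    return distortion + lam * bits

# Hypothetical candidates for one block of an interpolated frame:
# (description, distortion estimate, bit cost)
candidates = [
    ("interpolate only (no residual sent)", 120.0, 2),
    ("interpolate + coarse residual",        60.0, 40),
    ("code the block conventionally",        25.0, 150),
]

lam = 0.85                                      # in HEVC, lambda depends on QP
best = min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))
print("chosen mode:", best[0])                  # picks the lowest-cost trade-off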
Intra prediction using face continuity in 360-degree video coding
NASA Astrophysics Data System (ADS)
Hanhart, Philippe; He, Yuwen; Ye, Yan
2017-09-01
This paper presents a new reference sample derivation method for intra prediction in 360-degree video coding. Unlike the conventional reference sample derivation method for 2D video coding, which uses the samples located directly above and on the left of the current block, the proposed method considers the spherical nature of 360-degree video when deriving reference samples located outside the current face to which the block belongs, and derives reference samples that are geometric neighbors on the sphere. The proposed reference sample derivation method was implemented in the Joint Exploration Model 3.0 (JEM-3.0) for the cubemap projection format. Simulation results for the all intra configuration show that, when compared with the conventional reference sample derivation method, the proposed method gives, on average, luma BD-rate reduction of 0.3% in terms of the weighted spherical PSNR (WS-PSNR) and spherical PSNR (SPSNR) metrics.
NASA Technical Reports Server (NTRS)
Sayood, K.; Chen, Y. C.; Wang, X.
1992-01-01
During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.
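The DCT-based building block referred to above can be sketched as a plain 2-D transform plus uniform quantization; the report's specific low-rate variation is not reproduced here, and the block size and step size are arbitrary choices for the example.

import numpy as np
from scipy.fft import dctn, idctn

def code_block(block, q_step):
    """Plain 2-D DCT followed by uniform quantization, the core of the
    standard DCT coding approach the report builds on."""
    coeffs = dctn(block, norm="ortho")
    return np.round(coeffs / q_step)

def decode_block(q, q_step):
    """Dequantize and invert the transform."""
    return idctn(q * q_step, norm="ortho")

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
q = code_block(block, q_step=16.0)
rec = decode_block(q, q_step=16.0)
print("non-zero coefficients:", int(np.count_nonzero(q)),
      "| mean abs error:", float(np.mean(np.abs(block - rec))))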
A study on multiresolution lossless video coding using inter/intra frame adaptive prediction
NASA Astrophysics Data System (ADS)
Nakachi, Takayuki; Sawabe, Tomoko; Fujii, Tetsuro
2003-06-01
Lossless video coding is required in the fields of archiving and editing digital cinema or digital broadcasting contents. This paper combines a discrete wavelet transform and adaptive inter/intra-frame prediction in the wavelet transform domain to create multiresolution lossless video coding. The multiresolution structure offered by the wavelet transform facilitates interchange among several video source formats such as Super High Definition (SHD) images, HDTV, SDTV, and mobile applications. Adaptive inter/intra-frame prediction is an extension of JPEG-LS, a state-of-the-art lossless still image compression standard. Based on the image statistics of the wavelet transform domains in successive frames, inter/intra frame adaptive prediction is applied to the appropriate wavelet transform domain. This adaptation offers superior compression performance. This is achieved with low computational cost and no increase in additional information. Experiments on digital cinema test sequences confirm the effectiveness of the proposed algorithm.
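A simplified sketch of the inter/intra switching idea: the JPEG-LS median edge detector serves as the intra predictor, the co-located sample of the previous frame as the inter predictor, and the choice per subband is made from residual statistics. The switching rule shown (smaller residual energy) is an assumption for illustration, not the paper's exact criterion.

import numpy as np

def med_predict(band):
    """JPEG-LS median edge detector applied to one subband (intra prediction)."""
    pred = np.zeros_like(band)
    for y in range(1, band.shape[0]):
        for x in range(1, band.shape[1]):
            a, b, c = band[y, x - 1], band[y - 1, x], band[y - 1, x - 1]
            if c >= max(a, b):
                pred[y, x] = min(a, b)
            elif c <= min(a, b):
                pred[y, x] = max(a, b)
            else:
                pred[y, x] = a + b - c
    return pred

def choose_prediction(curr_band, prev_band):
    """Pick intra (MED) or inter (co-located previous frame) per subband,
    whichever gives the smaller residual energy."""
    res_intra = curr_band - med_predict(curr_band)
    res_inter = curr_band - prev_band
    if np.sum(res_inter ** 2) < np.sum(res_intra ** 2):
        return "inter", res_inter
    return "intra", res_intra

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, size=(8, 8)).astype(np.int64)
curr = prev + rng.integers(-2, 3, size=(8, 8))     # nearly static content
print(choose_prediction(curr, prev)[0])            # "inter" wins for static content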
NASA Astrophysics Data System (ADS)
Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio
2014-02-01
High Efficiency Video Coding (HEVC), the latest video compression standard (also known as H.265), can deliver video streams of comparable quality to the current H.264 Advanced Video Coding (H.264/AVC) standard with a 50% reduction in bandwidth. Research into SHVC, the scalable extension to the HEVC standard, is still in its infancy. One important area for investigation is whether, given the greater compression ratio of HEVC (and SHVC), the loss of packets containing video content will have a greater impact on the quality of delivered video than is the case with H.264/AVC or its scalable extension H.264/SVC. In this work we empirically evaluate the layer-based, in-network adaptation of video streams encoded using SHVC in situations where dynamically changing bandwidths and datagram loss ratios require the real-time adaptation of video streams. Through the use of extensive experimentation, we establish a comprehensive set of benchmarks for SHVC-based high-definition video streaming in loss-prone network environments such as those commonly found in mobile networks. Among other results, we highlight that packet losses of only 1% can lead to a substantial reduction in PSNR of over 3 dB and error propagation in over 130 pictures following the one in which the loss occurred. This work would be one of the earliest studies in this cutting-edge area that reports benchmark evaluation results for the effects of datagram loss on SHVC picture quality and offers empirical and analytical insights into SHVC adaptation to lossy, mobile networking conditions.
Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video
NASA Astrophysics Data System (ADS)
Li, Honggui
2017-09-01
This paper proposes a unified one-dimensional (1-D) coding framework for image and video, which depends on a deep learning neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain the compact inputs of the deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, a deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. Under the circumstances of 1-D representation, the DLA is capable of attaining zero reconstruction error, which is impossible for the classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating the different categories of video into the inputs of the patch clustering algorithm. Finally, simulation results show that the proposed methods can simultaneously achieve a higher compression ratio and peak signal-to-noise ratio than the state-of-the-art methods at low transmission bitrates.
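A compact sketch of the two front-end steps, K-means patch clustering followed by an optimal linear autoencoder (computed here in closed form via SVD rather than by training a deep network), illustrating the zero-reconstruction-error property of linear codes at full rank. Patch sizes and cluster counts are arbitrary choices for the example.

import numpy as np
from sklearn.cluster import KMeans

def linear_autoencoder(patches, k):
    """Optimal linear autoencoder via SVD/PCA: encoder W, decoder W^T.
    With k equal to the patch dimensionality the reconstruction is exact,
    mirroring the zero-error property of the deep *linear* autoencoder."""
    mean = patches.mean(axis=0)
    _, _, Vt = np.linalg.svd(patches - mean, full_matrices=False)
    W = Vt[:k]                                  # k x D encoder matrix
    codes = (patches - mean) @ W.T              # 1-D code vector per patch
    recon = codes @ W + mean
    return codes, recon

# Illustrative 4x4 patches drawn at random (stand-ins for image blocks).
rng = np.random.default_rng(0)
patches = rng.random((200, 16))

# Step 1: compact the inputs with K-means patch clustering.
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(patches)

# Step 2: one linear autoencoder per cluster.
for c in range(8):
    cluster = patches[labels == c]
    codes, recon = linear_autoencoder(cluster, k=16)    # full rank -> lossless
    assert np.allclose(cluster, recon)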
Spherical rotation orientation indication for HEVC and JEM coding of 360 degree video
NASA Astrophysics Data System (ADS)
Boyce, Jill; Xu, Qian
2017-09-01
Omnidirectional (or "360 degree") video, representing a panoramic view of a spherical 360° ×180° scene, can be encoded using conventional video compression standards, once it has been projection mapped to a 2D rectangular format. Equirectangular projection format is currently used for mapping 360 degree video to a rectangular representation for coding using HEVC/JEM. However, video in the top and bottom regions of the image, corresponding to the "north pole" and "south pole" of the spherical representation, is significantly warped. We propose to perform spherical rotation of the input video prior to HEVC/JEM encoding in order to improve the coding efficiency, and to signal parameters in a supplemental enhancement information (SEI) message that describe the inverse rotation process recommended to be applied following HEVC/JEM decoding, prior to display. Experiment results show that up to 17.8% bitrate gain (using the WS-PSNR end-to-end metric) can be achieved for the Chairlift sequence using HM16.15 and 11.9% gain using JEM6.0, and an average gain of 2.9% for HM16.15 and 2.2% for JEM6.0.
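The pre-encoding rotation amounts to applying a 3x3 rotation to the unit direction vector of every equirectangular sample; the SEI message then carries the parameters the decoder needs to apply the inverse rotation. A minimal sketch (the angle convention and composition order are chosen arbitrarily for illustration):

import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Spherical rotation applied before projection (angles in radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def rotate_erp_sample(lon, lat, R):
    """Map one equirectangular sample (longitude, latitude in radians)
    through the spherical rotation and back to longitude/latitude."""
    v = np.array([np.cos(lat) * np.cos(lon),
                  np.cos(lat) * np.sin(lon),
                  np.sin(lat)])
    vr = R @ v
    return np.arctan2(vr[1], vr[0]), np.arcsin(np.clip(vr[2], -1.0, 1.0))

R = rotation_matrix(np.deg2rad(10), np.deg2rad(-20), 0.0)
print(rotate_erp_sample(np.deg2rad(30), np.deg2rad(80), R))  # sample near the "north pole"
# The decoder applies R.T (the inverse rotation signalled in the SEI) to undo it.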
Robustness evaluation of transactional audio watermarking systems
NASA Astrophysics Data System (ADS)
Neubauer, Christian; Steinebach, Martin; Siebenhaar, Frank; Pickel, Joerg
2003-06-01
Distribution via the Internet is of increasing importance. Easy access, transmission and consumption of digitally represented music is very attractive to the consumer, but has also led directly to an increasing problem of illegal copying. To cope with this problem, watermarking is a promising concept, since it provides a useful mechanism to track illicit copies by persistently attaching property-rights information to the material. Especially for online music distribution, the use of so-called transaction watermarking, also denoted by the term bitstream watermarking, is beneficial, since it offers the opportunity to embed watermarks directly into perceptually encoded material without the need for full decompression/compression. Besides the concept of bitstream watermarking, former publications presented its complexity, audio quality and detection performance. These results are now extended by an assessment of the robustness of such schemes. The detection performance before and after applying selected attacks is presented for MPEG-1/2 Layer 3 (MP3) and MPEG-2/4 AAC bitstream watermarking, contrasted with the performance of PCM spread-spectrum watermarking.
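For the PCM reference case mentioned last, additive spread-spectrum watermarking reduces to adding a key-derived pseudo-random sequence and detecting it by correlation. A minimal sketch of that baseline (not the bitstream watermarking scheme evaluated in the paper; the strength value is an arbitrary illustration):

import numpy as np

def embed(signal, key, strength=0.01):
    """Additive spread-spectrum watermark: add a key-derived pseudo-random
    +/-1 sequence, scaled by the embedding strength, to the PCM samples."""
    rng = np.random.default_rng(key)
    w = rng.choice([-1.0, 1.0], size=signal.shape)
    return signal + strength * w, w

def detect(signal, key):
    """Correlate the received signal with the key's sequence; a clearly
    positive normalized correlation indicates the watermark is present."""
    rng = np.random.default_rng(key)
    w = rng.choice([-1.0, 1.0], size=signal.shape)
    return float(np.dot(signal, w) / len(signal))

rng = np.random.default_rng(0)
audio = rng.normal(0.0, 0.1, size=44100)         # one second of noise-like PCM
marked, _ = embed(audio, key=1234, strength=0.01)
print(detect(marked, key=1234))                   # ~0.01 -> watermark detected
print(detect(marked, key=9999))                   # ~0    -> wrong key, no detection

Robustness testing, as in the paper, would repeat the detection after attacks such as re-encoding, filtering or time-scaling of the marked signal.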
ERIC Educational Resources Information Center
Garneli, Varvara; Chorianopoulos, Konstantinos
2018-01-01
Various aspects of computational thinking (CT) could be supported by educational contexts such as simulations and video-games construction. In this field study, potential differences in student motivation and learning were empirically examined through students' code. For this purpose, we performed a teaching intervention that took place over five…
Nieto-Orellana, Alejandro; Coghlan, David; Rothery, Malcolm; Falcone, Franco H; Bosquillon, Cynthia; Childerhouse, Nick; Mantovani, Giuseppe; Stolnik, Snow
2018-04-05
Pulmonary delivery of protein therapeutics has considerable clinical potential for treating both local and systemic diseases. However, poor protein conformational stability, immunogenicity and protein degradation by proteolytic enzymes in the lung are major challenges to overcome for the development of effective therapeutics. To address these, a family of structurally related copolymers comprising polyethylene glycol, mPEG2k, and poly(glutamic acid) with linear A-B (mPEG2k-lin-GA) and miktoarm A-B3 (mPEG2k-mik-(GA)3) macromolecular architectures was investigated as potential protein stabilisers. These copolymers form non-covalent nanocomplexes with a model protein (lysozyme) which can be formulated into dry powders by spray-drying using common aerosol excipients (mannitol, trehalose and leucine). Powder formulations with excellent aerodynamic properties (fine particle fraction of up to 68%) were obtained with particle size (D50) in the 2.5 µm range, low moisture content (<5%), and high glass transition temperatures, i.e. formulation attributes all suitable for inhalation application. In aqueous medium, the dry powders rapidly disintegrated into the original polymer-protein nanocomplexes, which provided protection against proteolytic degradation. Taken together, the present study shows that dry powders based on (mPEG2k-polyGA)-protein nanocomplexes possess potential as an inhalation delivery system. Copyright © 2018 Elsevier B.V. All rights reserved.
Tan, Qinggang; Chu, Yanyan; Bie, Min; Wang, Zihao; Xu, Xiaoyan
2017-02-16
Biopolymer/inorganic material nanocomposites have attracted increasing interest as nanocarriers for delivering drugs owing to the combined advantages of both biopolymer and inorganic materials. Here, amphiphilic block copolymer/fullerene nanocomposites were prepared as nanocarriers for hydrophobic drug by incorporation of C60 in the core of methoxy polyethylene glycol-poly(d,l-lactic acid) (MPEG-PDLLA) micelles. The structure and morphology of MPEG-PDLLA/C60 nanocomposites were characterized using transmission electron microscopy, dynamic light scattering, high-resolution transmission electron microscopy, and thermal gravimetric analysis. It was found that the moderate amount of spherical C60 incorporated in the MPEG-PDLLA micelles may cause an increase in the molecular chain space of PDLLA segments in the vicinity of C60 and, thus, produce a larger cargo space to increase drug entrapment and accelerate the drug release from nanocomposites. Furthermore, sufficient additions of C60 perhaps resulted in an aggregation of C60 within the micelles that decreased the drug entrapment and produced a steric hindrance for DOX released from the nanocomposites. The results obtained provide fundamental insights into the understanding of the role of C60 in adjusting the drug loading and release of amphiphilic copolymer micelles and further demonstrate the future potential of the MPEG-PDLLA/C60 nanocomposites used as nanocarriers for controlled drug-delivery applications.
Wichitnithad, Wisut; Nimmannit, Ubonthip; Callery, Patrick S; Rojsitthisak, Pornchai
2011-12-01
We investigated the effects of different carboxylic ester spacers of mono-PEGylated curcumin conjugates on chemical stability, release characteristics, and anticancer activity. Three novel conjugates were synthesized with succinic acid, glutaric acid, and methylcarboxylic acid as the respective spacers between curcumin and monomethoxy polyethylene glycol of molecular weight 2000 (mPEG(2000)): mPEG(2000)-succinyl-curcumin (PSC), mPEG(2000)-glutaryl-curcumin (PGC), and mPEG(2000)-methylcarboxyl-curcumin (PMC), respectively. Hydrolysis of all conjugates in buffer and human plasma followed pseudo first-order kinetics. In phosphate buffer, the overall degradation rate constant and half-life values indicated an order of stability of PGC > PSC > PMC > curcumin. In human plasma, more than 90% of curcumin was released from the esters after incubation for 0.25, 1.5, and 2 h, respectively. All conjugates exhibited cytotoxicity against four human cancer cell lines: Caco-2 (colon), KB (oral cavity), MCF7 (breast), and NCI-H187 (lung) with half maximal inhibitory concentration (IC(50)) values in the range of 1-6 µM, similar to that observed for curcumin itself. Our results suggest that mono-PEGylation of curcumin produces prodrugs that are stable in buffer at physiological pH, release curcumin readily in human plasma, and show anticancer activity. Copyright © 2011 Wiley-Liss, Inc.
Collision count in rugby union: A comparison of micro-technology and video analysis methods.
Reardon, Cillian; Tobin, Daniel P; Tierney, Peter; Delahunt, Eamonn
2017-10-01
The aim of our study was to determine if there is a role for manipulation of g force thresholds acquired via micro-technology for accurately detecting collisions in rugby union. In total, 36 players were recruited from an elite Guinness Pro12 rugby union team. Player movement profiles and collisions were acquired via individual global positioning system (GPS) micro-technology units. Players were assigned to a sub-category of positions in order to determine positional collision demands. The coding of collisions by micro-technology at g force thresholds between 2 and 5.5 g (0.5 g increments) was compared with collision coding by an expert video analyst using Bland-Altman assessments. The most appropriate g force threshold (smallest mean difference compared with video analyst coding) was lower for all forwards positions (2.5 g) than for all backs positions (3.5 g). The Bland-Altman 95% limits of agreement indicated that there may be a substantial over- or underestimation of collisions coded via GPS micro-technology when using expert video analyst coding as the reference comparator. The manipulation of the g force thresholds applied to data acquired by GPS micro-technology units based on incremental thresholds of 0.5 g does not provide a reliable tool for the accurate coding of collisions in rugby union. Future research should aim to investigate smaller g force threshold increments and determine the events that cause coding of false positives.
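The Bland-Altman comparison used here reduces to the mean difference (bias) between the two coding methods and its 95% limits of agreement. A small sketch with hypothetical per-player collision counts (the numbers are invented for illustration, not taken from the study):

import numpy as np

def bland_altman(gps_counts, video_counts):
    """Mean difference (bias) and 95% limits of agreement between
    micro-technology and video-analyst collision counts."""
    diff = np.asarray(gps_counts, dtype=float) - np.asarray(video_counts, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical collision counts for eight players at one g-force threshold.
gps   = [22, 31, 18, 40, 27, 35, 19, 24]
video = [20, 30, 21, 36, 29, 33, 22, 23]
bias, loa = bland_altman(gps, video)
print(f"bias = {bias:+.1f} collisions, 95% LoA = ({loa[0]:.1f}, {loa[1]:.1f})")

Wide limits of agreement, as reported in the study, indicate that individual GPS-derived counts may substantially over- or underestimate the video-coded reference even when the average bias is small.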
Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding
Xiao, Rui; Gao, Junbin; Bossomaier, Terry
2016-01-01
A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data compared to a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to the “residual”-based approaches using a video coder for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling due to the different characteristics of HS images in their spectral and shape domain of panchromatic imagery compared to traditional videos. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard High Efficiency Video Coding (HEVC) for HS images is proposed. An HS image presents a wealth of data where every pixel is considered a vector for different spectral bands. By quantitative comparison and analysis of pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit distribution of the known pixel vector, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as the additional reference band together with the immediate previous band when we apply the HEVC. Every spectral band of an HS image is treated like it is an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102
A Robust Model-Based Coding Technique for Ultrasound Video
NASA Technical Reports Server (NTRS)
Docef, Alen; Smith, Mark J. T.
1995-01-01
This paper introduces a new approach to coding ultrasound video, the intended application being very low bit rate coding for transmission over low cost phone lines. The method exploits both the characteristic noise and the quasi-periodic nature of the signal. Data compression ratios between 250:1 and 1000:1 are shown to be possible, which is sufficient for transmission over ISDN and conventional phone lines. Preliminary results show this approach to be promising for remote ultrasound examinations.
Video Coding and the Application Level Framing Protocol Architecture
1992-06-01
missing ADU can be sent to the decoder when and if it arrives. The need for out-of-order processing arises for two reasons. First, ADUs may be reordered...by the network. Second, an ADU which is lost and then successfully retransmitted will arrive out of order. In either case, out-of-order processing makes...the code do not allow at least some out-of-order processing, one of the strong points of the ALF architecture is eliminated.
NASA Astrophysics Data System (ADS)
Kurceren, Ragip; Modestino, James W.
1998-12-01
The use of forward error-control (FEC) coding, possibly in conjunction with ARQ techniques, has emerged as a promising approach for video transport over ATM networks for cell-loss recovery and/or bit error correction, such as might be required for wireless links. Although FEC provides cell-loss recovery capabilities, it also introduces transmission overhead which can possibly cause additional cell losses. A methodology is described to maximize the number of video sources multiplexed at a given quality of service (QoS), measured in terms of decoded cell loss probability, using interlaced FEC codes. The transport channel is modelled as a block interference channel (BIC) and the multiplexer as a single-server, deterministic-service, finite-buffer queue supporting N users. Based upon an information-theoretic characterization of the BIC and large-deviation bounds on the buffer overflow probability, the described methodology provides theoretically achievable upper limits on the number of sources multiplexed. The performance of specific coding techniques using interlaced nonbinary Reed-Solomon (RS) codes and binary rate-compatible punctured convolutional (RCPC) codes is illustrated.
Mirarchi, Ferdinando L; Cooney, Timothy E; Venkat, Arvind; Wang, David; Pope, Thaddeus M; Fant, Abra L; Terman, Stanley A; Klauer, Kevin M; Williams-Murphy, Monica; Gisondi, Michael A; Clemency, Brian; Doshi, Ankur A; Siegel, Mari; Kraemer, Mary S; Aberger, Kate; Harman, Stephanie; Ahuja, Neera; Carlson, Jestin N; Milliron, Melody L; Hart, Kristopher K; Gilbertson, Chelsey D; Wilson, Jason W; Mueller, Larissa; Brown, Lori; Gordon, Bradley D
2017-06-01
End-of-life interventions should be predicated on consensus understanding of patient wishes. Written documents are not always understood; adding a video testimonial/message (VM) might improve clarity. Goals of this study were to (1) determine baseline rates of consensus in assigning code status and resuscitation decisions in critically ill scenarios and (2) determine whether adding a VM increases consensus. We randomly assigned 2 web-based survey links to 1366 faculty and resident physicians at institutions with graduate medical education programs in emergency medicine, family practice, and internal medicine. Each survey asked for code status interpretation of stand-alone Physician Orders for Life-Sustaining Treatment (POLST) and living will (LW) documents in 9 scenarios. Respondents assigned code status and resuscitation decisions to each scenario. For 1 of 2 surveys, a VM was included to help clarify patient wishes. Response rate was 54%, and most were male emergency physicians who lacked formal advanced planning document interpretation training. Consensus was not achievable for stand-alone POLST or LW documents (68%-78% noted "DNR"). Two of 9 scenarios attained consensus for code status (97%-98% responses) and treatment decisions (96%-99%). Adding a VM significantly changed code status responses by 9% to 62% (P ≤ 0.026) in 7 of 9 scenarios with 4 achieving consensus. Resuscitation responses changed by 7% to 57% (P ≤ 0.005) with 4 of 9 achieving consensus with VMs. For most scenarios, consensus was not attained for code status and resuscitation decisions with stand-alone LW and POLST documents. Adding VMs produced significant impacts toward achieving interpretive consensus.
Facilitation and Teacher Behaviors: An Analysis of Literacy Teachers' Video-Case Discussions
ERIC Educational Resources Information Center
Arya, Poonam; Christ, Tanya; Chiu, Ming Ming
2014-01-01
This study explored how peer and professor facilitations are related to teachers' behaviors during video-case discussions. Fourteen inservice teachers produced 1,787 turns of conversation during 12 video-case discussions that were video-recorded, transcribed, coded, and analyzed with statistical discourse analysis. Professor facilitations (sharing…
Meng, Fan-Tao; Zhang, Wan-Zhong; Ma, Guang-Hui; Su, Zhi-Guo
2003-08-01
Methoxypoly(ethylene glycol)-b-poly-DL-lactide (PELA) microcapsules containing bovine hemoglobin (bHb) were prepared by a W/O/W double emulsion-solvent diffusion process. bHb solution was used as the internal aqueous phase, PELA/organic solvent as the oil phase, and polyvinyl alcohol (PVA) solution as the external aqueous phase. This W/O/W double emulsion was added into a large volume of water (solidification solution) to allow the organic solvent to diffuse into water. The optimum preparative conditions for PELA microcapsules loaded with bovine hemoglobin were investigated. It was found that the homogenization rate, the type of organic solvent, and the volume of the solidification solution influenced the activity of the encapsulated bovine hemoglobin. When the homogenization rate was lower than 9000 rpm and ethyl acetate was used as the organic solvent, there was no significant influence on the activity of hemoglobin. A high homogenization rate of 12 000 rpm decreased the P50 and Hill coefficient. Increasing the volume of the solidification solution improved the activity of the microencapsulated hemoglobin. The composition of the PELA had the most important influence on the success of encapsulation. Microcapsules fabricated from PELA with an MPEG2k block (molecular weight of the MPEG block: 2000) achieved a high entrapment efficiency of 90%, better than the PLA homopolymer and PELA with MPEG5k blocks. Hemoglobin microcapsules with native oxygen-loading activity (P50 = 26.0 mmHg, Hill coefficient = 2.4), a mean size of about 10 μm, and high entrapment efficiency (ca. 93%) were obtained under the optimum conditions.
Dynamic full-scalability conversion in scalable video coding
NASA Astrophysics Data System (ADS)
Lee, Dong Su; Bae, Tae Meon; Thang, Truong Cong; Ro, Yong Man
2007-02-01
For outstanding coding efficiency with scalability functions, SVC (Scalable Video Coding) is being standardized. SVC can support spatial, temporal and SNR scalability, and these scalabilities are useful for providing a smooth video streaming service even in a time-varying network such as a mobile environment. However, current SVC is insufficient to support dynamic video conversion with scalability, so bitrate adaptation to a fluctuating network condition is limited. In this paper, we propose dynamic full-scalability conversion methods for QoS-adaptive video streaming in SVC. To accomplish dynamic full-scalability conversion, we develop corresponding bitstream extraction, encoding and decoding schemes. At the encoder, we insert IDR NAL units periodically to solve the problems of spatial scalability conversion. At the extractor, we analyze the SVC bitstream to obtain the information that enables dynamic extraction. Real-time extraction is achieved by using this information. Finally, we develop the decoder so that it can manage the changing scalability. Experimental results verified dynamic full-scalability conversion and showed that it is necessary for time-varying network conditions.
NASA Astrophysics Data System (ADS)
Bezan, Scott; Shirani, Shahram
2006-12-01
To reliably transmit video over error-prone channels, the data should be both source and channel coded. When multiple channels are available for transmission, the problem extends to that of partitioning the data across these channels. The condition of transmission channels, however, varies with time. Therefore, the error protection added to the data at one instant of time may not be optimal at the next. In this paper, we propose a method for adaptively adding error correction code in a rate-distortion (RD) optimized manner using rate-compatible punctured convolutional codes to an MJPEG2000 constant rate-coded frame of video. We perform an analysis on the rate-distortion tradeoff of each of the coding units (tiles and packets) in each frame and adapt the error correction code assigned to the unit taking into account the bandwidth and error characteristics of the channels. This method is applied to both single and multiple time-varying channel environments. We compare our method with a basic protection method in which data is either not transmitted, transmitted with no protection, or transmitted with a fixed amount of protection. Simulation results show promising performance for our proposed method.
NASA Astrophysics Data System (ADS)
Bartolini, Franco; Pasquini, Cristina; Piva, Alessandro
2001-04-01
The recent development of video compression algorithms has allowed the diffusion of systems for the transmission of video sequences over data networks. However, transmission over error-prone mobile communication channels is still an open issue. In this paper, a system developed for the real-time transmission of H.263 video coded sequences over TETRA mobile networks is presented. TETRA is an open digital trunked radio standard defined by the European Telecommunications Standards Institute, developed for professional mobile radio users and providing full integration of voice and data services. Experimental tests demonstrate that, in spite of the low frame rate allowed by the software-only implementation of the decoder and by the low channel rate, a video compression technique complying with the H.263 standard is still preferable to a simpler but less effective frame-based compression system.
Study of efficient video compression algorithms for space shuttle applications
NASA Technical Reports Server (NTRS)
Poo, Z.
1975-01-01
Results are presented of a study on video data compression techniques applicable to space flight communication. The study is directed towards monochrome (black and white) picture communication, with special emphasis on the feasibility of hardware implementation. The primary factors for such a communication system in space flight applications are picture quality, system reliability, power consumption, and hardware weight. In terms of hardware implementation, these are directly related to hardware complexity, effectiveness of the hardware algorithm, immunity of the source code to channel noise, and data transmission rate (or transmission bandwidth). A system is recommended, and its hardware requirements are summarized. Simulations for the study were performed on the improved LIM video controller, which is computer-controlled by the META-4 CPU.
High Resolution, High Frame Rate Video Technology
NASA Technical Reports Server (NTRS)
1990-01-01
Papers and working group summaries presented at the High Resolution, High Frame Rate Video (HHV) Workshop are compiled. HHV system is intended for future use on the Space Shuttle and Space Station Freedom. The Workshop was held for the dual purpose of: (1) allowing potential scientific users to assess the utility of the proposed system for monitoring microgravity science experiments; and (2) letting technical experts from industry recommend improvements to the proposed near-term HHV system. The following topics are covered: (1) State of the art in the video system performance; (2) Development plan for the HHV system; (3) Advanced technology for image gathering, coding, and processing; (4) Data compression applied to HHV; (5) Data transmission networks; and (6) Results of the users' requirements survey conducted by NASA.
Video streaming with SHVC to HEVC transcoding
NASA Astrophysics Data System (ADS)
Gudumasu, Srinivas; He, Yuwen; Ye, Yan; Xiu, Xiaoyu
2015-09-01
This paper proposes an efficient Scalable High Efficiency Video Coding (SHVC) to High Efficiency Video Coding (HEVC) transcoder, which can reduce the transcoding complexity significantly and provide a desired trade-off between the transcoding complexity and the transcoded video quality. To reduce the transcoding complexity, some of the coding information in the SHVC bitstream, such as coding unit (CU) depth, prediction mode, merge mode, motion vector information, intra direction information and transform unit (TU) depth information, is mapped and transcoded to a single-layer HEVC bitstream. One major difficulty in transcoding arises when trying to reuse the motion information from the SHVC bitstream, since motion vectors referring to inter-layer reference (ILR) pictures cannot be reused directly in transcoding. Reusing motion information obtained from ILR pictures for those prediction units (PUs) greatly reduces the complexity of the SHVC transcoder, but a significant reduction in picture quality is observed. Pictures corresponding to the intra refresh pictures in the base layer (BL) are coded as P pictures in the enhancement layer (EL) of the SHVC bitstream, and directly reusing the intra information from the BL for transcoding does not yield good coding efficiency. To solve these problems, various transcoding technologies are proposed. The proposed technologies offer different trade-offs between transcoding speed and transcoding quality. They are implemented on the basis of the reference software SHM-6.0 and HM-14.0 for the two-layer spatial scalability configuration. Simulations show that the proposed SHVC software transcoder reduces the transcoding complexity by up to 98-99% using the low-complexity transcoding mode when compared with the cascaded re-encoding method. The transcoder performance at various bitrates with different transcoding modes is compared in terms of transcoding speed and transcoded video quality.
HEVC for high dynamic range services
NASA Astrophysics Data System (ADS)
Kim, Seung-Hwan; Zhao, Jie; Misra, Kiran; Segall, Andrew
2015-09-01
Displays capable of showing a greater range of luminance values can render content containing high dynamic range information in a way such that viewers have a more immersive experience. This paper introduces the design aspects of a high dynamic range (HDR) system, and examines the performance of the HDR processing chain in terms of compression efficiency. Specifically, it examines the relation between the recently introduced Society of Motion Picture and Television Engineers (SMPTE) ST 2084 transfer function and the High Efficiency Video Coding (HEVC) standard. SMPTE ST 2084 is designed to cover the full range of an HDR signal from 0 to 10,000 nits; however, in many situations the valid signal range of actual video may be smaller than the range supported by SMPTE ST 2084. This restricted signal range results in a restricted range of code values for the input video data and adversely impacts compression efficiency. In this paper, we propose a code value remapping method that extends the restricted-range code values into full-range code values so that existing standards such as HEVC may better compress the video content. The paper also identifies the related non-normative encoder-only changes that are required for the remapping method for a fair comparison with the anchor. Results are presented comparing the efficiency of the current approach versus the proposed remapping method for HM-16.2.
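For concreteness, the SMPTE ST 2084 (PQ) mapping from absolute luminance to a normalized code value, and the kind of restricted-to-full-range remapping discussed above, can be sketched as follows. The PQ constants are those fixed by the standard; the remapping function itself is a simplified linear stretch used only as a stand-in for the method proposed in the paper.

import numpy as np

# SMPTE ST 2084 (PQ) constants -- these values are fixed by the standard.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_inverse_eotf(luminance_nits):
    """Map absolute luminance (0..10000 nits) to a normalized PQ code value."""
    y = np.clip(luminance_nits / 10000.0, 0.0, 1.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

def remap_to_full_range(code, lo, hi):
    """Pre-encoding remap: stretch the restricted code-value range [lo, hi]
    actually used by the content to the full [0, 1] range.  The decoder
    needs (lo, hi) to invert the mapping; this linear stretch is an
    illustrative assumption, not the paper's exact mapping function."""
    return np.clip((code - lo) / (hi - lo), 0.0, 1.0)

# Content graded only up to 1000 nits occupies roughly 0..0.75 in PQ terms.
peak_code = pq_inverse_eotf(1000.0)               # ~0.75
samples = pq_inverse_eotf(np.array([0.01, 100.0, 500.0, 1000.0]))
print(np.round(samples, 3))                        # restricted-range code values
print(np.round(remap_to_full_range(samples, 0.0, peak_code), 3))  # stretched values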
NASA Astrophysics Data System (ADS)
Chan, Chia-Hsin; Tu, Chun-Chuan; Tsai, Wen-Jiin
2017-01-01
High efficiency video coding (HEVC) not only improves the coding efficiency drastically compared to the well-known H.264/AVC but also introduces coding tools for parallel processing, one of which is tiles. Tile partitioning is allowed to be arbitrary in HEVC, but how to decide tile boundaries remains an open issue. An adaptive tile boundary (ATB) method is proposed to select a better tile partitioning to improve load balancing (ATB-LoadB) and coding efficiency (ATB-Gain) with a unified scheme. Experimental results show that, compared to ordinary uniform-space partitioning, the proposed ATB can save up to 17.65% of encoding times in parallel encoding scenarios and can reduce up to 0.8% of total bit rates for coding efficiency.
NASA Astrophysics Data System (ADS)
Sanchez, Gustavo; Marcon, César; Agostini, Luciano Volcan
2018-01-01
The 3D-high efficiency video coding has introduced tools to obtain higher efficiency in 3-D video coding, and most of them are related to the depth maps coding. Among these tools, the depth modeling mode-1 (DMM-1) focuses on better encoding edges regions of depth maps. The large memory required for storing all wedgelet patterns is one of the bottlenecks in the DMM-1 hardware design of both encoder and decoder since many patterns must be stored. Three algorithms to reduce the DMM-1 memory requirements and a hardware design targeting the most efficient among these algorithms are presented. Experimental results demonstrate that the proposed solutions surpass related works reducing up to 78.8% of the wedgelet memory, without degrading the encoding efficiency. Synthesis results demonstrate that the proposed algorithm reduces almost 75% of the power dissipation when compared to the standard approach.
Adaptive Modulation Approach for Robust MPEG-4 AAC Encoded Audio Transmission
2011-11-01
as shown in Table 1. Table 1 specifies the perceptual interpretation of the ODG. Subjective Difference Grade (SDG) = Grade(Signal under test)... SDG using a human hearing and cognitive model [8], [9]. The freely available PEAQ basic model, "PQevalAudio," is used in this paper, which is available as... [Table 1, recoverable fragment: Impairment | ITU-R five-grade impairment scale | SDG/PEAQ-ODG score -- Imperceptible | 5.00 | 0.00; Perceptible, but not annoying | 4.00 | ...]
NASA Astrophysics Data System (ADS)
Zhao, Haiwu; Wang, Guozhong; Hou, Gang
2005-07-01
AVS is a new digital audio-video coding standard established by China. AVS will be used in digital TV broadcasting and next-generation optical disks. AVS adopted many digital audio-video coding techniques developed by Chinese companies and universities in recent years; it has very low complexity compared to H.264, and AVS will charge a very low royalty fee through a one-step license including all AVS tools. So AVS is a good and competitive candidate for Chinese DTV and next-generation optical disks. In addition, the Chinese government has published a plan for satellite TV signals direct to home (DTH), and a telecommunication satellite named SINO 2 will be launched in 2006. AVS is also one of the most promising candidates for the audio-video coding standard for satellite signal transmission.
Multicore-based 3D-DWT video encoder
NASA Astrophysics Data System (ADS)
Galiano, Vicente; López-Granado, Otoniel; Malumbres, Manuel P.; Migallón, Hector
2013-12-01
Three-dimensional wavelet transform (3D-DWT) encoders are good candidates for applications like professional video editing, video surveillance, multi-spectral satellite imaging, etc. where a frame must be reconstructed as quickly as possible. In this paper, we present a new 3D-DWT video encoder based on a fast run-length coding engine. Furthermore, we present several multicore optimizations to speed-up the 3D-DWT computation. An exhaustive evaluation of the proposed encoder (3D-GOP-RL) has been performed, and we have compared the evaluation results with other video encoders in terms of rate/distortion (R/D), coding/decoding delay, and memory consumption. Results show that the proposed encoder obtains good R/D results for high-resolution video sequences with nearly in-place computation using only the memory needed to store a group of pictures. After applying the multicore optimization strategies over the 3D DWT, the proposed encoder is able to compress a full high-definition video sequence in real-time.
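A toy sketch of the two core ingredients, the temporal stage of the 3D-DWT over a group of pictures and a run-length pass over quantized coefficients, follows. The real encoder also applies the spatial 2-D DWT and the multicore optimizations; frame sizes, GOP length and quantization step are arbitrary choices for the example.

import numpy as np

def temporal_haar(gop):
    """One level of the temporal part of the 3-D DWT over a group of
    pictures (orthonormal Haar, applied pairwise along the time axis)."""
    even, odd = gop[0::2].astype(float), gop[1::2].astype(float)
    low = (even + odd) / np.sqrt(2.0)
    high = (even - odd) / np.sqrt(2.0)
    return low, high

def run_length(symbols):
    """Toy run-length engine: (value, run) pairs over quantized coefficients."""
    runs, prev, count = [], symbols[0], 1
    for s in symbols[1:]:
        if s == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = s, 1
    runs.append((prev, count))
    return runs

rng = np.random.default_rng(0)
base = rng.integers(0, 256, size=(16, 16))
gop = np.stack([base + rng.integers(-2, 3, size=(16, 16)) for _ in range(8)])

low, high = temporal_haar(gop)                        # low: 4 frames, high: 4 frames
quantized = np.round(high / 32).astype(int).ravel()   # mostly zeros for near-static content
print(run_length(quantized.tolist())[:5])             # long zero runs compress well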
Subjective evaluation of H.265/HEVC based dynamic adaptive video streaming over HTTP (HEVC-DASH)
NASA Astrophysics Data System (ADS)
Irondi, Iheanyi; Wang, Qi; Grecos, Christos
2015-02-01
The Dynamic Adaptive Streaming over HTTP (DASH) standard is becoming increasingly popular for real-time adaptive HTTP streaming of internet video in response to unstable network conditions. Integration of DASH streaming techniques with the new H.265/HEVC video coding standard is a promising area of research. The performance of HEVC-DASH systems has previously been evaluated by a few researchers using objective metrics; however, subjective evaluation would provide a better measure of the user's Quality of Experience (QoE) and the overall performance of the system. This paper presents a subjective evaluation of an HEVC-DASH system implemented in a hardware testbed. Previous studies in this area have focused on the current H.264/AVC (Advanced Video Coding) or H.264/SVC (Scalable Video Coding) codecs, and moreover there has been no established standard test procedure for the subjective evaluation of DASH adaptive streaming. In this paper, we define a test plan for HEVC-DASH with a carefully justified data set, employing longer video sequences that are sufficient to demonstrate the bitrate switching operations in response to various network condition patterns. We evaluate the end user's real-time QoE online by investigating the perceived impact of delay, different packet loss rates, fluctuating bandwidth, and the perceived quality of using different DASH video stream segment sizes on a video streaming session using different video sequences. The Mean Opinion Score (MOS) results give an insight into the performance of the system and the expectations of the users. The results from this study show the impact of different network impairments and different video segments on users' QoE, and further analysis and study may help in optimizing system performance.
NASA Astrophysics Data System (ADS)
Rundle, J.; Rundle, P.; Donnellan, A.; Li, P.
2003-12-01
We consider the problem of the complex dynamics of earthquake fault systems, and whether numerical simulations can be used to define an ensemble forecasting technology similar to that used in weather and climate research. To effectively carry out such a program, we need 1) a topologically realistic model to simulate the fault system; 2) data sets to constrain the model parameters through a systematic program of data assimilation; 3) a computational technology making use of modern paradigms of high-performance and parallel computing systems; and 4) software to visualize and analyze the results. In particular, we focus attention on a new version of our code Virtual California (version 2001) in which we model all of the major strike-slip faults extending throughout California, from the Mexico-California border to the Mendocino Triple Junction. We use the historic data set of earthquakes larger than magnitude M > 6 to define the frictional properties of all 654 fault segments (degrees of freedom) in the model. Previous versions of Virtual California had used only 215 fault segments to model the strike-slip faults in southern California. To compute the dynamics and the associated surface deformation, we use message passing as implemented in the MPICH standard distribution on a small Beowulf cluster consisting of 10 CPUs. We are also planning to run the code on significantly larger machines so that we can begin to examine much finer spatial scales of resolution and assess the scaling properties of the code. We present results of simulations both as static images and as MPEG movies, so that the dynamical aspects of the computation can be assessed by the viewer. We also compute a variety of statistics from the simulations, including magnitude-frequency relations, and compare these with data from real fault systems.
Perceptual video quality assessment in H.264 video coding standard using objective modeling.
Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu
2014-01-01
Since the use of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute a perceptual video quality metric based on a no-reference method. Because of the subtle differences between the original video and the encoded video, the quality of the encoded picture is degraded; this quality difference is introduced by encoding processes such as intra and inter prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective mapping of these artifacts into a subjective quality estimate is proposed. The proposed model calculates the objective quality metric using subjective impairments (blockiness, blur, and jerkiness) rather than the bitrate-only calculation defined in the ITU-T G.1070 model. The accuracy of the proposed perceptual video quality metric is compared against popular full-reference objective methods as defined by VQEG.
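As a rough illustration of how blockiness, blur, and jerkiness impairments can be folded into a single no-reference score, the sketch below computes crude versions of the three features and combines them linearly. The feature definitions and the weights are illustrative assumptions and do not reproduce the paper's fitted model.

```python
# Hedged sketch of a no-reference score built from blockiness, blur and jerkiness.
import numpy as np

def blockiness(frame, block=8):
    """Mean absolute luminance jump across 8-pixel block boundaries."""
    cols = np.abs(np.diff(frame.astype(float), axis=1))
    return cols[:, block - 1::block].mean()

def blur(frame):
    """Inverse of mean gradient energy: blurrier frames score higher."""
    gy, gx = np.gradient(frame.astype(float))
    return 1.0 / (np.sqrt(gx**2 + gy**2).mean() + 1e-6)

def jerkiness(prev, curr):
    """Mean absolute difference between consecutive frames."""
    return np.abs(curr.astype(float) - prev.astype(float)).mean()

def nr_quality(frames, w=(0.5, 0.3, 0.2)):
    """Combine the three impairments into a single score (higher = better)."""
    b = np.mean([blockiness(f) for f in frames])
    s = np.mean([blur(f) for f in frames])
    j = np.mean([jerkiness(a, c) for a, c in zip(frames, frames[1:])])
    return -(w[0] * b + w[1] * s + w[2] * j)

frames = [np.random.randint(0, 256, (144, 176)) for _ in range(10)]
print("NR quality score:", nr_quality(frames))
```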
Clack, Lauren; Scotoni, Manuela; Wolfensberger, Aline; Sax, Hugo
2017-01-01
Healthcare workers' hands are the foremost means of pathogen transmission in healthcare, but detailed hand trajectories have been insufficiently researched so far. We developed and applied a new method to systematically document hand-to-surface exposures (HSE) to delineate true hand transmission pathways in real-life healthcare settings. A head-mounted camera and commercial coding software were used to capture ten active care episodes by eight nurses and two physicians and code HSE type and duration using a hierarchical coding scheme. We identified HSE sequences of particular relevance to infectious risks for patients based on the WHO 'Five Moments for Hand Hygiene'. The study took place in a trauma intensive care unit in a 900-bed university hospital in Switzerland. Overall, the ten videos totaled 296.5 min and featured eight nurses and two physicians. A total of 4222 HSE were identified (1 HSE every 4.2 s), which concerned bare (79%) and gloved (21%) hands. The HSE inside the patient zone (n = 1775; 42%) included mobile objects (33%), immobile surfaces (5%), and patient intact skin (4%), while HSE outside the patient zone (n = 1953; 46%) included HCW's own body (10%), mobile objects (28%), and immobile surfaces (8%). A further 494 (12%) events involved patient critical sites. Sequential analysis revealed 291 HSE transitions from outside to inside patient zone, i.e. "colonization events", and 217 from any surface to critical sites, i.e. "infection events". Hand hygiene occurred 97 times, 14 (5% adherence) times at colonization events and three (1% adherence) times at infection events. On average, hand rubbing lasted 13 ± 9 s. The abundance of HSE underscores the central role of hands in the spread of potential pathogens while hand hygiene occurred rarely at potential colonization and infection events. Our approach produced a valid video and coding instrument for in-depth analysis of hand trajectories during active patient care that may help to design more efficient prevention schemes.
NASA Technical Reports Server (NTRS)
2003-01-01
The Structure of Flameballs at Low Lewis Numbers (SOFBALL) experiments aboard the space shuttle in 1997 produced a series of stunningly successful burns. This sequence was taken during STS-94, July 12, 1997, MET:10/08:18 (approximate). It was thought these extremely dim flameballs (1/20 the power of a kitchen match) could last up to 200 seconds -- in fact, they can last for at least 500 seconds. This has ramifications for fuel-spray design in combustion engines, as well as fire safety in space. The SOFBALL principal investigator was Paul Ronney, University of Southern California, Los Angeles. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17 1997). Advanced combustion experiments will be a part of investigations planned for the International Space Station. (925KB, 9-second MPEG spanning 10 minutes, screen 320 x 240 pixels; downlinked video, higher quality not available) A still JPG composite of this movie is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300186.html.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroefl, Ch.; Gruber, M.; Plank, J., E-mail: sekretariat@bauchemie.ch.tum.de
2012-11-15
UHPC is fluidized particularly well when a blend of MPEG- and APEG-type PCEs is applied. Here, the mechanism for this behavior was investigated. Testing individual cement and micro silica pastes revealed that the MPEG-PCE disperses cement better than silica, whereas the APEG-PCE fluidizes silica particularly well. This behavior is explained by preferential adsorption of APEG-PCE on silica, while MPEG-PCEs exhibit a more balanced affinity to both cement and silica. Adsorption data obtained from individual cement and micro silica pastes were compared with those found for the fully formulated UHPC containing a cement/silica blend. In the UHPC formulation, both PCEs still exhibit preferential and selective adsorption similar to what was observed for individual cement and silica pastes. Preferential adsorption of PCEs is explained by their different stereochemistry, whereby the carboxylate groups have to match the steric position of calcium ions/atoms situated at the surfaces of cement hydrates or silica.
Jones, Mathew W; Mantovani, Giuseppe; Blindauer, Claudia A; Ryan, Sinead M; Wang, Xuexuan; Brayden, David J; Haddleton, David M
2012-05-02
Direct polymer conjugation at peptide tyrosine residues is described. In this study Tyr residues of both leucine enkephalin and salmon calcitonin (sCT) were targeted using appropriate diazonium salt-terminated linear monomethoxy poly(ethylene glycol)s (mPEGs) and poly(mPEG) methacrylate prepared by atom transfer radical polymerization. Judicious choice of the reaction conditions-pH, stoichiometry, and chemical structure of diazonium salt-led to a high degree of site-specificity in the conjugation reaction, even in the presence of competitive peptide amino acid targets such as histidine, lysines, and N-terminal amine. In vitro studies showed that conjugation of mPEG(2000) to sCT did not affect the peptide's ability to increase intracellular cAMP induced in T47D human breast cancer cells bearing sCT receptors. Preliminary in vivo investigation showed preserved ability to reduce [Ca(2+)] plasma levels by mPEG(2000)-sCT conjugate in rat animal models. © 2012 American Chemical Society
Collaborative Movie Annotation
NASA Astrophysics Data System (ADS)
Zad, Damon Daylamani; Agius, Harry
In this paper, we focus on metadata for self-created movies like those found on YouTube and Google Video, the duration of which are increasing in line with falling upload restrictions. While simple tags may have been sufficient for most purposes for traditionally very short video footage that contains a relatively small amount of semantic content, this is not the case for movies of longer duration which embody more intricate semantics. Creating metadata is a time-consuming process that takes a great deal of individual effort; however, this effort can be greatly reduced by harnessing the power of Web 2.0 communities to create, update and maintain it. Consequently, we consider the annotation of movies within Web 2.0 environments, such that users create and share that metadata collaboratively and propose an architecture for collaborative movie annotation. This architecture arises from the results of an empirical experiment where metadata creation tools, YouTube and an MPEG-7 modelling tool, were used by users to create movie metadata. The next section discusses related work in the areas of collaborative retrieval and tagging. Then, we describe the experiments that were undertaken on a sample of 50 users. Next, the results are presented which provide some insight into how users interact with existing tools and systems for annotating movies. Based on these results, the paper then develops an architecture for collaborative movie annotation.
Digital cinema system using JPEG2000 movie of 8-million pixel resolution
NASA Astrophysics Data System (ADS)
Fujii, Tatsuya; Nomura, Mitsuru; Shirai, Daisuke; Yamaguchi, Takahiro; Fujii, Tetsuro; Ono, Sadayasu
2003-05-01
We have developed a prototype digital cinema system that can store, transmit and display extra-high-quality movies of 8-million pixel resolution, using the JPEG2000 coding algorithm. The image quality is 4 times better than HDTV in resolution and enables us to replace conventional films with digital cinema archives. Using wide-area optical gigabit IP networks, cinema contents are distributed and played back as a video-on-demand (VoD) system. The system consists of three main devices: a video server, a real-time JPEG2000 decoder, and a large-venue LCD projector. All digital movie data are compressed by JPEG2000 and stored in advance. The coded streams of 300~500 Mbps can be continuously transmitted from the PC server using TCP/IP. The decoder can perform real-time decompression at 24/48 frames per second, using 120 parallel JPEG2000 processing elements. The received streams are expanded into 4.5 Gbps raw video signals. The prototype LCD projector uses 3 pieces of 3840×2048-pixel reflective LCD panels (D-ILA) to show RGB 30-bit color movies fed by the decoder. The brightness exceeds 3000 ANSI lumens for a 300-inch screen. The refresh rate is set to 96 Hz to thoroughly eliminate flicker, while preserving compatibility with cinema movies of 24 frames per second.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilbur, Daniel Scott
This research is a collaborative effort between the research groups of the PIs, Dr. D. Scott Wilbur in the Department of Radiation Oncology at the University of Washington (UW) and Matthew O'Hara at the Pacific Northwest National Laboratory (PNNL). In this report only those studies conducted at UW and the budget information from UW will be reported. A separate progress and financial report will be provided by PNNL. This final report outlines the experiments (Tasks) conducted and results obtained at UW from July 1, 2013 through June 30, 2016 (2-year project with 1-year no-cost extension). The report divides the information on the experiments and results obtained into the 5 specific objectives of the research efforts and the Tasks within those objectives. This format is used so that it is easy to see what has been accomplished in each area. A brief summary of the major findings from the studies is provided below. Summary of Major Findings from Research/Training Activities at UW: Anion and cation exchange columns did not provide adequate 211At capture and/or extraction results under the conditions studied to warrant further evaluation; PEG-Merrifield resins containing mPEG350, mPEG750, mPEG2000 and mPEG5000 were synthesized and evaluated; All of the mPEG resins with different sized mPEG moieties conjugated gave similar 211At capture (>95%) from 8M HCl solutions and release with conc. NH4OH (~50-80%), but very low quantities were released when NaOH was used as an eluent; Capture and release of 211At when loading [211At]astatate appeared to be similar to that of [211At]astatide on PEG columns, but further studies need to be conducted to confirm that; Capture of 211At on PEG columns was lower (e.g., 80-90%) from solutions of 8M HNO3, but higher capture rates (e.g., 99%) can be obtained when 10M HNO3 is mixed with an equal quantity of 8M HCl; Addition of reductants to the 211At solutions did not appear to change the percent capture, but may have an effect on the % extracted; There was some indication that the PEG-Merrifield resins could be saturated (perhaps with Bi), resulting in lower capture percentages, but more studies need to be done to confirm that; A target dissolution chamber, designed and built at PNNL, works well with syringe pumps so it can be used in an automated system; Preliminary semi-automated 211At isolation studies have been conducted with full-scale target dissolution, and 211At isolation using a PEG column on the Hamilton automated system gave low overall recoveries, but HNO3 was used (rather than HCl) for loading the 211At and flow rates were not optimized; Results obtained using PEG columns are high enough to warrant further development of a fully automated system; Results obtained also indicate that additional studies are warranted to evaluate other types of columns for 211At separation from bismuth, which allow use of HNO3/HCl mixtures for loading and NaOH for eluting 211At. Such a column could greatly simplify the overall isolation process and make it easier to automate.
Silva, Adny H; Lima, Enio; Mansilla, Marcelo Vasquez; Zysler, Roberto D; Troiani, Horacio; Pisciotti, Mary Luz Mojica; Locatelli, Claudriana; Benech, Juan C; Oddone, Natalia; Zoldan, Vinícius C; Winter, Evelyn; Pasa, André A; Creczynski-Pasa, Tânia B
2016-05-01
Superparamagnetic iron oxide nanoparticles (SPIONS) were synthesized by thermal decomposition of an organometallic precursor at high temperature and coated with a bi-layer composed of oleic acid and methoxy-polyethylene glycol-phospholipid. The formulations were named SPION-PEG350 and SPION-PEG2000. Transmission electron microscopy, X-ray diffraction and magnetic measurements show that the SPIONs are near-spherical, well-crystalline, and have high saturation magnetization and susceptibility. FTIR spectroscopy identifies the presence of oleic acid and of the conjugated mPEG for each sample. In vitro biocompatibility of SPIONS was investigated using three cell lines; up to 100 μg/ml, SPION-PEG350 showed no toxicity, while SPION-PEG2000 showed no signs of toxicity even up to 200 μg/ml. The uptake of SPIONS was detected using magnetization measurement, confocal and atomic force microscopy. SPION-PEG2000 presented the highest internalization capacity, which should be correlated with the mPEG chain size. The in vivo results suggested that SPION-PEG2000 administration in mice triggered liver and kidney injury. The potential use of superparamagnetic iron oxide nanoparticles (SPIONS) in the clinical setting has been studied by many researchers. The authors synthesized two types of SPIONS here and investigated the physical properties and biological compatibility. The findings should provide more data on the design of SPIONS for clinical application in the future. Copyright © 2016 Elsevier Inc. All rights reserved.
MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.
Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño
2013-01-01
In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet. This is probably due to the fact that RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficient vector as a header has been the most important challenge of RNC. Moreover, due to employing the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficient vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficient matrix generation method such that there is no linear dependency in the generated coefficient matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficient matrix using a small number of simple arithmetic operations. In this regard, peers sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms the RNC which uses the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay, and initial startup delay.
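For readers unfamiliar with the baseline that MATIN improves on, the following sketch shows conventional random network coding with a coefficient-vector header and Gauss-Jordan decoding. It works over the prime field GF(257) purely for readability (practical RNC normally uses GF(2^8)); all function names and block sizes are illustrative assumptions.

```python
# Hedged sketch of baseline random network coding (the scheme MATIN builds on).
import random

P = 257  # illustrative prime field modulus; real systems usually use GF(2^8)

def encode(blocks):
    """Mix n source blocks with random coefficients; return (header, payload)."""
    coeffs = [random.randrange(P) for _ in blocks]
    payload = [sum(c * b[i] for c, b in zip(coeffs, blocks)) % P
               for i in range(len(blocks[0]))]
    return coeffs, payload

def gauss_jordan_solve(A, B, p=P):
    """Solve A X = B over GF(p); fails if coefficient vectors are dependent."""
    A, B, n = [r[:] for r in A], [r[:] for r in B], len(A)
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r][col] % p)
        A[col], A[pivot], B[col], B[pivot] = A[pivot], A[col], B[pivot], B[col]
        inv = pow(A[col][col], -1, p)
        A[col] = [v * inv % p for v in A[col]]
        B[col] = [v * inv % p for v in B[col]]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(a - f * c) % p for a, c in zip(A[r], A[col])]
                B[r] = [(b - f * c) % p for b, c in zip(B[r], B[col])]
    return B

blocks = [[random.randrange(P) for _ in range(8)] for _ in range(4)]
packets = [encode(blocks) for _ in range(4)]
decoded = gauss_jordan_solve([h for h, _ in packets], [p for _, p in packets])
print("recovered:", decoded == blocks)
```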
Comparison of H.265/HEVC encoders
NASA Astrophysics Data System (ADS)
Trochimiuk, Maciej
2016-09-01
H.265/HEVC is the state-of-the-art video compression standard, which allows a bitrate reduction of up to 50% compared with its predecessor, H.264/AVC, while maintaining equal perceptual video quality. The gain in coding efficiency was achieved by increasing the number of available intra- and inter-frame prediction features and by improvements to existing ones, such as entropy encoding and filtering. Nevertheless, to achieve real-time performance of the encoder, simplifications in the algorithm are inevitable. Some features and coding modes must be skipped to reduce the time needed to evaluate modes forwarded to rate-distortion optimisation. Thus, the potential acceleration of the encoding process comes at the expense of coding efficiency. In this paper, the trade-off between video quality and encoding speed of various H.265/HEVC encoders is discussed.
ERIC Educational Resources Information Center
Arya, Poonam; Christ, Tanya; Chiu, Ming
2015-01-01
This study examined how characteristics of Collaborative Peer Video Analysis (CPVA) events are related to teachers' pedagogical outcomes. Data included 39 transcribed literacy video events, in which 14 in-service teachers engaged in discussions of their video clips. Emergent coding and Statistical Discourse Analysis were used to analyze the data.…
Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks
NASA Astrophysics Data System (ADS)
Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.
2011-01-01
In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered. One that minimizes the average video distortion of the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.
VLSI design of lossless frame recompression using multi-orientation prediction
NASA Astrophysics Data System (ADS)
Lee, Yu-Hsuan; You, Yi-Lun; Chen, Yi-Guo
2016-01-01
The pursuit of high-end visual quality drives demand for higher display resolutions and higher frame rates. Hence, many powerful coding tools are aggregated in emerging video coding standards to improve coding efficiency. This also leaves video coding standards with two design challenges: heavy computation and tremendous memory bandwidth. The first issue can be properly solved by a careful hardware architecture design with advanced semiconductor processes. Nevertheless, the second one becomes a critical design bottleneck for a modern video coding system. In this article, a lossless frame recompression technique using multi-orientation prediction is proposed to overcome this bottleneck. This work is realised as a silicon chip in a TSMC 0.18 µm CMOS process. Its encoding capability reaches full HD (1920 × 1080) at 48 fps. The chip power consumption is 17.31 mW at 100 MHz. The core area and chip area are 0.83 × 0.83 mm² and 1.20 × 1.20 mm², respectively. Experimental results demonstrate that this work exhibits an outstanding lossless compression ratio with competitive hardware performance.
NASA Astrophysics Data System (ADS)
Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong
2017-07-01
The latest high efficiency video coding (HEVC) standard significantly increases encoding complexity in order to improve coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme, chosen through offline statistics, is applied at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, average gains of 0.63 dB and 0.17 dB in BD-PSNR are observed for 18 sequences when the target complexity is around 40%.
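The complexity-to-mode mapping can be pictured as a budgeted selection over candidate prediction modes, as in the hedged sketch below. The per-mode time costs and rate-distortion benefits are placeholder numbers standing in for the offline statistics mentioned in the abstract; they are not the paper's values, and the greedy selection is only one plausible realisation.

```python
# Hedged sketch: mapping a complexity budget to a set of inter modes to evaluate.
# Candidate modes with (relative encoding time, relative RD benefit) -- illustrative.
MODES = {
    "MERGE/SKIP": (0.10, 0.40),
    "2Nx2N":      (0.25, 0.30),
    "2NxN":       (0.20, 0.12),
    "Nx2N":       (0.20, 0.12),
    "AMP":        (0.15, 0.04),
    "NxN":        (0.10, 0.02),
}

def select_modes(target_complexity):
    """Pick modes by benefit-per-cost until the complexity budget is used up."""
    ranked = sorted(MODES.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True)
    chosen, spent = [], 0.0
    for mode, (cost, _benefit) in ranked:
        if spent + cost <= target_complexity:
            chosen.append(mode)
            spent += cost
    return chosen, spent

# Example: constrain the encoder to roughly 40% of full encoding time.
modes, used = select_modes(0.40)
print("modes evaluated:", modes, "estimated complexity:", used)
```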
"Life in the Universe" Final Event Video Now Available
NASA Astrophysics Data System (ADS)
2002-02-01
ESO Video Clip 01/02 is issued on the web in conjunction with the release of a 20-min documentary video from the Final Event of the "Life in the Universe" programme. This unique event took place in November 2001 at CERN in Geneva, as part of the 2001 European Science and Technology Week, an initiative by the European Commission to raise the public awareness of science in Europe. The "Life in the Universe" programme comprised competitions in 23 European countries to identify the best projects from school students. The projects could be scientific or a piece of art, a theatrical performance, poetry or even a musical performance. The only restriction was that the final work must be based on scientific evidence. Winning teams from each country were invited to a "Final Event" at CERN on 8-11 November, 2001 to present their projects to a panel of International Experts during a special three-day event devoted to understanding the possibility of other life forms existing in our Universe. This Final Event also included a spectacular 90-min webcast from CERN with the highlights of the programme. The video describes the Final Event and the enthusiastic atmosphere when more than 200 young students and teachers from all over Europe met with some of the world's leading scientific experts of the field. The present video clip, with excerpts from the film, is available in four versions: two MPEG files and two streamer-versions of different sizes; the latter require RealPlayer software. Video Clip 01/02 may be freely reproduced. The 20-min video is available on request from ESO, for viewing in VHS and, for broadcasters, in Betacam-SP format. Please contact the ESO EPR Department for more details. Life in the Universe was jointly organised by the European Organisation for Nuclear Research (CERN), the European Space Agency (ESA) and the European Southern Observatory (ESO), in co-operation with the European Association for Astronomy Education (EAAE). Other research organisations were associated with the programme, e.g., the European Molecular Biology Laboratory (EMBL) and the European Synchrotron Radiation Facility (ESRF). Detailed information about the "Life in the Universe" programme can be found at the website http://www.lifeinuniverse.org and a webcast of this 90-min closing session in one of the large experimental halls at CERN is available on the web via that page. Most of the ESO PR Video Clips at the ESO website provide "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was: ESO PR Video Clips 08a-b/01 about The Eagle's EGGs (20 December 2001). General information is available on the web about ESO videos.
Dynamic code block size for JPEG 2000
NASA Astrophysics Data System (ADS)
Tsai, Ping-Sing; LeCornec, Yann
2008-02-01
Since the standardization of the JPEG 2000, it has found its way into many different applications such as DICOM (digital imaging and communication in medicine), satellite photography, military surveillance, digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical high quality real-time compression possible even in video mode, i.e. motion JPEG 2000. In this paper, we present a study of the compression impact using dynamic code block size instead of fixed code block size as specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also unveil the advantages of using dynamic code block sizes.
A video coding scheme based on joint spatiotemporal and adaptive prediction.
Jiang, Wenfei; Latecki, Longin Jan; Liu, Wenyu; Liang, Hui; Gorman, Ken
2009-05-01
We propose a video coding scheme that departs from traditional motion estimation/DCT frameworks and instead uses a Karhunen-Loeve Transform (KLT)/joint spatiotemporal prediction framework. In particular, a novel approach that performs joint spatial and temporal prediction simultaneously is introduced. It bypasses the complex H.26x interframe techniques and is less computationally intensive. Because of the advantages of the effective joint prediction and the image-dependent color-space transformation (KLT), the proposed approach is demonstrated experimentally to consistently lead to improved video quality, and in many cases to better compression rates and improved computational speed.
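The image-dependent color-space transform mentioned above amounts to a per-frame KLT of the RGB samples. The sketch below derives the basis from the frame's own covariance matrix and verifies perfect inversion; the prediction and entropy-coding stages of the actual scheme are omitted, and all names are illustrative.

```python
# Hedged sketch of an image-dependent (per-frame) KLT color transform.
import numpy as np

def klt_color_transform(frame_rgb):
    """Decorrelate RGB with a per-frame KLT; returns coefficients, basis, mean."""
    pixels = frame_rgb.reshape(-1, 3).astype(float)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels - mean, rowvar=False)
    eigvals, basis = np.linalg.eigh(cov)   # eigenvalues in ascending order
    basis = basis[:, ::-1]                  # principal component first
    coeffs = (pixels - mean) @ basis
    return coeffs.reshape(frame_rgb.shape), basis, mean

def inverse_klt(coeffs, basis, mean):
    """Invert the transform using the orthogonality of the KLT basis."""
    pixels = coeffs.reshape(-1, 3) @ basis.T + mean
    return pixels.reshape(coeffs.shape)

frame = np.random.randint(0, 256, size=(32, 32, 3))
coeffs, basis, mean = klt_color_transform(frame)
restored = inverse_klt(coeffs, basis, mean)
print("max reconstruction error:", np.abs(restored - frame).max())
```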
Priority-based methods for reducing the impact of packet loss on HEVC encoded video streams
NASA Astrophysics Data System (ADS)
Nightingale, James; Wang, Qi; Grecos, Christos
2013-02-01
The rapid growth in the use of video streaming over IP networks has outstripped the rate at which new network infrastructure has been deployed. These bandwidth-hungry applications now comprise a significant part of all Internet traffic and present major challenges for network service providers. The situation is more acute in mobile networks where the available bandwidth is often limited. Work towards the standardisation of High Efficiency Video Coding (HEVC), the next-generation video coding scheme, is currently on track for completion in 2013. HEVC offers the prospect of a 50% improvement in compression over the current H.264 Advanced Video Coding standard (H.264/AVC) for the same quality. However, there has been very little published research on HEVC streaming or the challenges of delivering HEVC streams in resource-constrained network environments. In this paper we consider the problem of adapting an HEVC encoded video stream to meet the bandwidth limitation in a mobile network environment. Video sequences were encoded using the Test Model under Consideration (TMuC HM6) for HEVC. Network abstraction layer (NAL) units were packetized, on a one NAL unit per RTP packet basis, and transmitted over a realistic hybrid wired/wireless testbed configured with dynamically changing network path conditions and multiple independent network paths from the streamer to the client. Two different schemes for the prioritisation of RTP packets, based on the NAL units they contain, have been implemented and empirically compared using a range of video sequences, encoder configurations, bandwidths and network topologies. In the first prioritisation method, the importance of an RTP packet was determined by the type of picture and the temporal switching point information carried in the NAL unit header. Packets containing parameter set NAL units and video coding layer (VCL) NAL units of the instantaneous decoder refresh (IDR) and clean random access (CRA) pictures were given the highest priority, followed by NAL units containing pictures used as reference pictures from which others can be predicted. The second method assigned a priority to each NAL unit based on the rate-distortion cost of the VCL coding units contained in the NAL unit; the sum of the rate-distortion costs of the coding units contained in a NAL unit was used as the priority weighting. The preliminary results of extensive experiments have shown that the prioritisation schemes offered an improvement in PSNR, when comparing original and decoded received streams, over uncontrolled packet loss. The first method consistently delivered a significant average improvement of 0.97 dB over the uncontrolled scenario, while the second method provided a measurable, but less consistent, improvement across the range of testing conditions and encoder configurations.
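A minimal sketch of the first prioritisation method is given below: each RTP packet inherits a priority from the type of NAL unit it carries, with parameter sets and IDR/CRA pictures ranked highest, reference pictures next, and non-reference pictures last. The numeric priority levels and the toy packet records are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of NAL-type-based RTP packet prioritisation.
PRIORITY = {
    "VPS": 0, "SPS": 0, "PPS": 0,   # parameter sets: highest priority
    "IDR": 0, "CRA": 0,             # random-access pictures
    "REF": 1,                       # pictures used as references
    "NON_REF": 2,                   # lowest priority, dropped first
}

def packet_priority(nal_unit_type, used_as_reference):
    """Map a NAL unit to a priority level (0 = most important)."""
    if nal_unit_type in PRIORITY:
        return PRIORITY[nal_unit_type]
    return PRIORITY["REF" if used_as_reference else "NON_REF"]

def drop_order(packets):
    """Return packets sorted so the least important would be dropped first."""
    return sorted(packets, key=lambda p: packet_priority(p["type"], p["ref"]),
                  reverse=True)

stream = [{"type": "SPS", "ref": True}, {"type": "TRAIL", "ref": False},
          {"type": "IDR", "ref": True}, {"type": "TRAIL", "ref": True}]
print([p["type"] for p in drop_order(stream)])
```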
DOT National Transportation Integrated Search
2012-10-01
In this report we present a transportation video coding and wireless transmission system specifically tailored to automated vehicle tracking applications. By taking into account the video characteristics and the lossy nature of the wireless channe...
Region-of-interest determination and bit-rate conversion for H.264 video transcoding
NASA Astrophysics Data System (ADS)
Huang, Shu-Fen; Chen, Mei-Juan; Tai, Kuang-Han; Li, Mian-Shiuan
2013-12-01
This paper presents a video bit-rate transcoder for the baseline profile of the H.264/AVC standard, designed to fit the available channel bandwidth of the client when transmitting video bit-streams over communication channels. To maintain visual quality for low bit-rate video efficiently, this study analyzes the decoded information in the transcoder and proposes a Bayesian-theorem-based region-of-interest (ROI) determination algorithm. In addition, a curve-fitting scheme is employed to derive models for video bit-rate conversion. The transcoded video conforms to the target bit-rate by re-quantization according to the proposed models. After integrating the ROI detection method and the bit-rate transcoding models, the ROI-based transcoder allocates more coding bits to ROI regions and reduces the complexity of the re-encoding procedure for non-ROI regions. Hence, it not only maintains coding quality but also improves the efficiency of video transcoding for low target bit-rates, making real-time transcoding more practical. Experimental results show that the proposed framework achieves significantly better visual quality.
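As a toy illustration of a Bayes-rule ROI decision driven by decoded information, the sketch below classifies a macroblock from its motion-vector magnitude using two Gaussian likelihoods and a prior. The distributions and the prior are invented for the example and are not the paper's trained model, which may use additional decoded features.

```python
# Hedged sketch of a Bayes-rule ROI/non-ROI decision on a decoded feature.
import math

def gaussian(x, mean, std):
    """Gaussian likelihood used as an illustrative class-conditional model."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def is_roi(mv_magnitude, prior_roi=0.3):
    """Posterior test: P(ROI | feature) > P(non-ROI | feature)."""
    p_roi = gaussian(mv_magnitude, mean=6.0, std=3.0) * prior_roi
    p_bg = gaussian(mv_magnitude, mean=1.0, std=1.5) * (1.0 - prior_roi)
    return p_roi > p_bg

for mv in (0.5, 2.0, 5.0, 9.0):
    print(f"|MV| = {mv}: {'ROI' if is_roi(mv) else 'non-ROI'}")
```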
Method and system for efficient video compression with low-complexity encoder
NASA Technical Reports Server (NTRS)
Chen, Jun (Inventor); He, Dake (Inventor); Sheinin, Vadim (Inventor); Jagmohan, Ashish (Inventor); Lu, Ligang (Inventor)
2012-01-01
Disclosed are a method and system for video compression, wherein the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a video decoder, wherein the method for encoding includes the steps of converting a source frame into a space-frequency representation; estimating conditional statistics of at least one vector of space-frequency coefficients; estimating encoding rates based on said conditional statistics; and applying Slepian-Wolf codes with the computed encoding rates. The preferred method for decoding includes the steps of: generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector; and performing Slepian-Wolf decoding of at least one source frequency vector based on the generated side information, the Slepian-Wolf code bits, and the encoder statistics.
Pre-Exposure Prophylaxis YouTube Videos: Content Evaluation
Basch, Corey; Basch, Charles; Kernan, William
2018-01-01
Background Antiretroviral (ARV) medicines reduce the risk of transmitting the HIV virus and are recommended as daily pre-exposure prophylaxis (PrEP) in combination with safer sex practices for HIV-negative individuals at a high risk for infection, but are underused in HIV prevention. Previous literature suggests that YouTube is extensively used to share health information. While pre-exposure prophylaxis (PrEP) is a novel and promising approach to HIV prevention, there is limited understanding of YouTube videos as a source of information on PrEP. Objective The objective of this study was to describe the sources, characteristics, and content of the most widely viewed PrEP YouTube videos published up to October 1, 2016. Methods The keywords “pre-exposure prophylaxis” and “Truvada” were used to find 217 videos with a view count >100. Videos were coded for source, view count, length, number of comments, and selected aspects of content. Videos were also assessed for the most likely target audience. Results The total cumulative number of views was >2.3 million, however, a single Centers for Disease Control and Prevention video accounted for >1.2 million of the total cumulative views. A great majority (181/217, 83.4%) of the videos promoted the use of PrEP, whereas 60.8% (132/217) identified the specific target audience. In contrast, only 35.9% (78/217) of the videos mentioned how to obtain PrEP, whereas less than one third addressed the costs, side effects, and safety aspects relating to PrEP. Medical and academic institutions were the sources of the largest number of videos (66/217, 30.4%), followed by consumers (63/217, 29.0%), community-based organizations (CBO; 48/217, 22.1%), and media (40/217, 18.4%). Videos uploaded by the media sources were more likely to discuss the cost of PrEP (P<.001), whereas the use of PrEP was less likely to be promoted in videos uploaded by individual consumers (P=.002) and more likely to be promoted in videos originated by CBOs (P=.009). The most common target audience for the videos was gay and bisexual men. Conclusions YouTube videos can be used to share reliable PrEP information with individuals. Further research is needed to identify the best practices for using this medium to promote and increase PrEP uptake. PMID:29467119
Tsapatsoulis, Nicolas; Loizou, Christos; Pattichis, Constantinos
2007-01-01
Efficient medical video transmission over 3G wireless is of great importance for fast diagnosis and on-site medical staff training purposes. In this paper we present a region-of-interest based ultrasound video compression study which shows that a significant reduction in the bit rate required for transmission can be achieved without altering the design of existing video codecs. Simple preprocessing of the original videos to define visually and clinically important areas is the only requirement.
Laminar Jet Diffusion Flame Burning
NASA Technical Reports Server (NTRS)
2003-01-01
Study of the downlink data from the Laminar Soot Processes (LSP) experiment quickly resulted in discovery of a new mechanism of flame extinction caused by radiation of soot. Scientists found that the flames emit soot sooner than expected. These findings have a direct impact on spacecraft fire safety, as well as on the theories predicting the formation of soot -- which is a major factor as a pollutant and in the spread of unwanted fires. This sequence, using propane fuel, was taken during STS-94, July 4, 1997, MET:2/05:30 (approximate). LSP investigated fundamental questions regarding soot, a solid byproduct of the combustion of hydrocarbon fuels. The experiment was performed using a laminar jet diffusion flame, which is created by simply flowing fuel -- like ethylene or propane -- through a nozzle and igniting it, much like a butane cigarette lighter. The LSP principal investigator was Gerard Faeth, University of Michigan, Ann Arbor. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17 1997). LSP results led to a reflight for extended investigations on the STS-107 research mission in January 2003. Advanced combustion experiments will be a part of investigations planned for the International Space Station. (983KB, 9-second MPEG, screen 320 x 240 pixels; downlinked video, higher quality not available) A still JPG composite of this movie is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300184.html.
Hayat, Umar; Lee, Peter J W; Lopez, Rocio; Vargo, John J; Rizk, Maged K
2016-11-01
Unsatisfactory bowel preparation has been reported in up to 33% of screening colonoscopies. Patients' lack of understanding about how a good bowel preparation can be achieved is one of the major causes. Patient education has been explored as a possible intervention to improve this important endpoint and has yielded mixed results. We compared the proportion of satisfactory bowel preparations and adenoma detection rates between patients who viewed and did not view an educational video on colonoscopy. An educational video on colonoscopy, accessible via the Internet, was issued to all patients with planned procedures between 2010 and 2014. Viewing status of the video was verified through a unique code linked to each patient's medical record. Excellent, good, or adequate bowel preparations were defined as "satisfactory," whereas fair, poor, or inadequate bowel preparations were defined as "unsatisfactory." A total of 2530 patients undergoing their first outpatient screening colonoscopy were included; 1251 patients viewed the educational video and 1279 patients did not see the video. Multivariate analysis revealed higher rates of satisfactory bowel preparation in the educational video group (92.3% [95% confidence interval [CI], 84.8-96.3] vs 87.4% [95% CI, 76.4-93.7], P <.001). Need for a repeat colonoscopy within 3 years was also higher in patients who did not see the video (6.6% [95% CI, 2.8-14.7] vs 3.3% [95% CI 1.3-7.8], P <.001). Patient-centered educational video improves bowel preparation quality and may reduce the need for an earlier repeat procedure in patients undergoing screening colonoscopy. Copyright © 2016 Elsevier Inc. All rights reserved.
Gabbett, Tim J
2013-08-01
The physical demands of rugby league, rugby union, and American football are significantly increased through the large number of collisions players are required to perform during match play. Because of the labor-intensive nature of coding collisions from video recordings, manufacturers of wearable microsensor (e.g., global positioning system [GPS]) units have refined the technology to automatically detect collisions, with several sport scientists attempting to use these microsensors to quantify the physical demands of collision sports. However, a question remains over the validity of these microtechnology units to quantify the contact demands of collision sports. Indeed, recent evidence has shown significant differences in the number of "impacts" recorded by microtechnology units (GPSports) and the actual number of collisions coded from video. However, a separate study investigated the validity of a different microtechnology unit (minimaxX; Catapult Sports) that included GPS and triaxial accelerometers, and also a gyroscope and magnetometer, to quantify collisions. Collisions detected by the minimaxX unit were compared with video-based coding of the actual events. No significant differences were detected in the number of mild, moderate, and heavy collisions detected via the minimaxX units and those coded from video recordings of the actual event. Furthermore, a strong correlation (r = 0.96, p < 0.01) was observed between collisions recorded via the minimaxX units and those coded from video recordings of the event. These findings demonstrate that only one commercially available and wearable microtechnology unit (minimaxX) can be considered capable of offering a valid method of quantifying the contact loads that typically occur in collision sports. Until such validation research is completed, sport scientists should be circumspect of the ability of other units to perform similar functions.
Feasibility of video codec algorithms for software-only playback
NASA Astrophysics Data System (ADS)
Rodriguez, Arturo A.; Morse, Ken
1994-05-01
Software-only video codecs can provide good playback performance in desktop computers with a 486 or 68040 CPU running at 33 MHz without special hardware assistance. Typically, playback of compressed video can be categorized into three tasks: the actual decoding of the video stream, color conversion, and the transfer of decoded video data from system RAM to video RAM. By current standards, good playback performance is the decoding and display of video streams of 320 by 240 (or larger) compressed frames at 15 (or greater) frames per second. Software-only video codecs have evolved by modifying and tailoring existing compression methodologies to suit video playback in desktop computers. In this paper we examine the characteristics used to evaluate software-only video codec algorithms, namely: image fidelity (i.e., image quality), bandwidth (i.e., compression), ease of decoding (i.e., playback performance), memory consumption, compression-to-decompression asymmetry, scalability, and delay. We discuss the tradeoffs among these variables and the compromises that can be made to achieve low numerical complexity for software-only playback. Frame-differencing approaches are described since software-only video codecs typically employ them to enhance playback performance. To complement other papers that appear in this session of the Proceedings, we review methods derived from binary pattern image coding since these methods are amenable to software-only playback. In particular, we introduce a novel approach called pixel distribution image coding.
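Frame differencing in this context typically reduces to conditional block replenishment: only blocks that changed beyond a threshold are re-encoded and re-displayed, which keeps software-only decoding and blitting cheap. The sketch below shows that idea; the block size and threshold are illustrative assumptions rather than values from the paper.

```python
# Hedged sketch of conditional block replenishment (frame differencing).
import numpy as np

def changed_blocks(prev, curr, block=16, thresh=6.0):
    """Yield (row, col) of blocks whose mean absolute change exceeds thresh."""
    h, w = curr.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            diff = np.abs(curr[r:r+block, c:c+block].astype(float)
                          - prev[r:r+block, c:c+block].astype(float)).mean()
            if diff > thresh:
                yield r, c

prev = np.random.randint(0, 256, size=(240, 320), dtype=np.uint8)
curr = prev.copy()
curr[64:96, 128:176] = np.random.randint(0, 256, size=(32, 48), dtype=np.uint8)
updates = list(changed_blocks(prev, curr))
print(len(updates), "of", (240 // 16) * (320 // 16), "blocks need updating")
```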
NASA Technical Reports Server (NTRS)
2004-01-01
Ever wonder whether a still shot from a home video could serve as a "picture perfect" photograph worthy of being framed and proudly displayed on the mantle? Wonder no more. A critical imaging code used to enhance video footage taken from spaceborne imaging instruments is now available within a portable photography tool capable of producing an optimized, high-resolution image from multiple video frames.
Quality Scalability Aware Watermarking for Visual Content.
Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet-domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against the quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality-scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality-scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
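A minimal sketch of quantization-guided blind embedding is shown below: one watermark bit is placed in each coefficient of a one-level Haar approximation band by snapping the coefficient into a quantizer cell of matching parity, and extraction only needs the quantization step. This illustrates the general quantization-based approach; it does not model the nested distortion-robustness atoms or the binary-tree guidance of the actual algorithm, and the step size is an assumption.

```python
# Hedged sketch of QIM-style blind watermarking on a Haar approximation band.
import numpy as np

def haar_ll(img):
    """One-level Haar approximation (LL) band of a grayscale image."""
    a = img.astype(float)
    return (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 2.0

def embed(ll, bits, step=16.0):
    """Embed one bit per coefficient by snapping to even/odd quantizer cells."""
    q = np.floor(ll / step)
    parity = (q.astype(int) & 1).ravel()[:len(bits)]
    adjust = (parity != np.asarray(bits)).astype(float)
    q_flat = q.ravel()
    q_flat[:len(bits)] += adjust            # move to a cell with matching parity
    return (q_flat.reshape(ll.shape) + 0.5) * step

def extract(ll_marked, n_bits, step=16.0):
    """Blind extraction: read the parity of each coefficient's quantizer cell."""
    q = np.floor(ll_marked / step).astype(int)
    return (q.ravel()[:n_bits] & 1).tolist()

img = np.random.randint(0, 256, size=(64, 64))
bits = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed(haar_ll(img), bits)
print("recovered:", extract(marked, len(bits)) == bits)
```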
The Impact of Video Review on Supervisory Conferencing
ERIC Educational Resources Information Center
Baecher, Laura; McCormack, Bede
2015-01-01
This study investigated how video-based observation may alter the nature of post-observation talk between supervisors and teacher candidates. Audio-recorded post-observation conversations were coded using a conversation analysis framework and interpreted through the lens of interactional sociology. Findings suggest that video-based observations…
Combined Wavelet Video Coding and Error Control for Internet Streaming and Multicast
NASA Astrophysics Data System (ADS)
Chu, Tianli; Xiong, Zixiang
2003-12-01
This paper proposes an integrated approach to Internet video streaming and multicast (e.g., receiver-driven layered multicast (RLM) by McCanne) based on combined wavelet video coding and error control. We design a packetized wavelet video (PWV) coder to facilitate its integration with error control. The PWV coder produces packetized layered bitstreams that are independent among layers while being embedded within each layer. Thus, a lost packet only renders the following packets in the same layer useless. Based on the PWV coder, we search for a multilayered error-control strategy that optimally trades off source and channel coding for each layer under a given transmission rate to mitigate the effects of packet loss. While both the PWV coder and the error-control strategy are new—the former incorporates embedded wavelet video coding and packetization and the latter extends the single-layered approach for RLM by Chou et al.—the main distinction of this paper lies in the seamless integration of the two parts. Theoretical analysis shows a gain of up to 1 dB on a channel with 20% packet loss using our combined approach over separate designs of the source coder and the error-control mechanism. This is also substantiated by our simulations with a gain of up to 0.6 dB. In addition, our simulations show a gain of up to 2.2 dB over previous results reported by Chou et al.
Zika Virus on YouTube: An Analysis of English-language Video Content by Source
2017-01-01
Objectives The purpose of this study was to describe the source, length, number of views, and content of the most widely viewed Zika virus (ZIKV)-related YouTube videos. We hypothesized that ZIKV-related videos uploaded by different sources contained different content. Methods The 100 most viewed English ZIKV-related videos were manually coded and analyzed statistically. Results Among the 100 videos, there were 43 consumer-generated videos, 38 Internet-based news videos, 15 TV-based news videos, and 4 professional videos. Internet news sources captured over two-thirds of the total of 8 894 505 views. Compared with consumer-generated videos, Internet-based news videos were more likely to mention the impact of ZIKV on babies (odds ratio [OR], 6.25; 95% confidence interval [CI], 1.64 to 23.76), the number of cases in Latin America (OR, 5.63; 95% CI, 1.47 to 21.52); and ZIKV in Africa (OR, 2.56; 95% CI, 1.04 to 6.31). Compared with consumer-generated videos, TV-based news videos were more likely to express anxiety or fear of catching ZIKV (OR, 6.67; 95% CI, 1.36 to 32.70); to highlight fear of ZIKV among members of the public (OR, 7.45; 95% CI, 1.20 to 46.16); and to discuss avoiding pregnancy (OR, 3.88; 95% CI, 1.13 to 13.25). Conclusions Public health agencies should establish a larger presence on YouTube to reach more people with evidence-based information about ZIKV. PMID:28372356
Inactivation of prion proteins via covalent grafting with methoxypoly(ethylene glycol).
Scott, Mark D
2006-01-01
Transmissible spongiform encephalopathies (TSE) such as bovine spongiform encephalopathy (BSE) and Creutzfeldt-Jakob disease (CJD), as well as other diseases mediated by proteinaceous infectious particles (prions), have emerged as a significant concern in transfusion medicine. This concern derives both from the disease-causing potential of prion-contaminated blood products and from the tremendous impact of the active deferral of current and potential blood donors due to their extended stays in BSE-prevalent countries (e.g., the United Kingdom). To date, there are no effective means by which infectious prion proteins can be inactivated in cellular and acellular blood products. Based on current work on the covalent grafting of methoxypoly(ethylene glycol) [mPEG] to proteins, viruses, and anuclear and nucleated cells, it is hypothesized that the conversion of the normal PrP protein to its mutant conformation can be prevented by the covalent grafting of mPEG to the mutant protein. Inactivation of infective protein particles (prions) in both cellular blood products and cell-free solutions (e.g., clotting factors) could be of medical/commercial value. It is hypothesized that, consequent to the covalent modification of donor-derived prions with mPEG, the requisite nucleation of the normal and mutant PrP proteins is inhibited due to the increased solubility of the modified mutant PrP, and that the conformational conversion arising from the mutant PrP is prevented due to obscuration of protein charge by the heavily hydrated and neutral mPEG polymers, as well as by direct steric hindrance of the interaction due to the highly mobile polymer graft.
A hardware-oriented concurrent TZ search algorithm for High-Efficiency Video Coding
NASA Astrophysics Data System (ADS)
Doan, Nghia; Kim, Tae Sung; Rhee, Chae Eun; Lee, Hyuk-Jae
2017-12-01
High-Efficiency Video Coding (HEVC) is the latest video coding standard, in which the compression performance is double that of its predecessor, the H.264/AVC standard, while the video quality remains unchanged. In HEVC, the test zone (TZ) search algorithm is widely used for integer motion estimation because it effectively finds a good-quality motion vector with a relatively small amount of computation. However, the complex computation structure of the TZ search algorithm makes it difficult to implement in hardware. This paper proposes a new integer motion estimation algorithm designed for hardware execution by modifying the conventional TZ search to allow parallel motion estimation of all prediction unit (PU) partitions. The algorithm consists of the three phases of zonal, raster, and refinement searches. At the beginning of each phase, the algorithm obtains the search points required by the original TZ search for all PU partitions in a coding unit (CU). Then, all redundant search points are removed prior to the estimation of the motion costs, and the best search points are then selected for all PUs. Compared to the conventional TZ search algorithm, experimental results show that the proposed algorithm significantly decreases the Bjøntegaard Delta bitrate (BD-BR) by 0.84%, and it also reduces the computational complexity by 54.54%.
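The core idea of the concurrent search, gathering the candidate points of every PU in a CU and evaluating each unique position only once, can be sketched as follows. The diamond pattern, the stride set, and the cost function are illustrative stand-ins for the real zonal/raster/refinement phases of the proposed algorithm.

```python
# Hedged sketch of search-point deduplication across PU partitions.
def diamond_points(center, stride):
    """Diamond pattern around a center at a given stride (illustrative)."""
    cx, cy = center
    return {(cx + stride, cy), (cx - stride, cy),
            (cx, cy + stride), (cx, cy - stride), (cx, cy)}

def zonal_phase(pu_predictors, strides=(1, 2, 4, 8)):
    """Collect the zonal search points of every PU and drop duplicates."""
    points = set()
    for pred in pu_predictors:
        for s in strides:
            points |= diamond_points(pred, s)
    return points

def evaluate(points, cost):
    """Compute the (illustrative) matching cost once per unique point."""
    return min(points, key=cost)

# Example: three PU partitions whose motion-vector predictors overlap.
predictors = [(0, 0), (1, 0), (0, 0)]
unique_points = zonal_phase(predictors)
best = evaluate(unique_points, cost=lambda p: p[0] ** 2 + p[1] ** 2)
print(len(unique_points), "unique points evaluated; best =", best)
```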
Side-information-dependent correlation channel estimation in hash-based distributed video coding.
Deligiannis, Nikos; Barbarien, Joeri; Jacobs, Marc; Munteanu, Adrian; Skodras, Athanassios; Schelkens, Peter
2012-04-01
In the context of low-cost video encoding, distributed video coding (DVC) has recently emerged as a potential candidate for uplink-oriented applications. This paper builds on a concept of correlation channel (CC) modeling, which expresses the correlation noise as being statistically dependent on the side information (SI). Compared with classical side-information-independent (SII) noise modeling adopted in current DVC solutions, it is theoretically proven that side-information-dependent (SID) modeling improves the Wyner-Ziv coding performance. Anchored in this finding, this paper proposes a novel algorithm for online estimation of the SID CC parameters based on already decoded information. The proposed algorithm enables bit-plane-by-bit-plane successive refinement of the channel estimation leading to progressively improved accuracy. Additionally, the proposed algorithm is included in a novel DVC architecture that employs a competitive hash-based motion estimation technique to generate high-quality SI at the decoder. Experimental results corroborate our theoretical gains and validate the accuracy of the channel estimation algorithm. The performance assessment of the proposed architecture shows remarkable and consistent coding gains over a germane group of state-of-the-art distributed and standard video codecs, even under strenuous conditions, i.e., large groups of pictures and highly irregular motion content.
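A hedged sketch of side-information-dependent noise estimation is given below: the Wyner-Ziv residual is modelled as Laplacian with a scale that depends on the side-information value, and the per-bin scale is re-estimated from already decoded samples. The binning, the synthetic data, and the maximum-likelihood estimator are illustrative; the paper's bit-plane-by-bit-plane successive refinement is not reproduced.

```python
# Hedged sketch of SID correlation-noise scale estimation from decoded data.
import numpy as np

def sid_scale_estimates(side_info, decoded_residuals, n_bins=4):
    """ML Laplacian scale (mean |residual|) for each side-information bin."""
    edges = np.quantile(side_info, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.digitize(side_info, edges[1:-1]), 0, n_bins - 1)
    scales = np.array([np.abs(decoded_residuals[bins == b]).mean()
                       for b in range(n_bins)])
    return scales, edges

# Synthetic data: noise level grows with the side-information magnitude.
rng = np.random.default_rng(1)
y = rng.uniform(0, 255, size=5000)               # side information
residual = rng.laplace(scale=1.0 + y / 64.0)     # SI-dependent correlation noise
scales, edges = sid_scale_estimates(y, residual)
print("estimated Laplacian scale per SI bin:", np.round(scales, 2))
```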
Distributed Coding/Decoding Complexity in Video Sensor Networks
Cordeiro, Paulo J.; Assunção, Pedro
2012-01-01
Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large-scale environments, which include video coding, transmission, and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972
A robust low-rate coding scheme for packet video
NASA Technical Reports Server (NTRS)
Chen, Y. C.; Sayood, Khalid; Nelson, D. J.; Arikan, E. (Editor)
1991-01-01
Due to rapid advances in image processing and networking, video promises to be an important part of telecommunication systems. Although video has so far been transmitted mainly over circuit-switched networks, packet-switched networks are likely to dominate the communication world in the near future. Asynchronous transfer mode (ATM) techniques in broadband ISDN can provide a flexible, independent and high-performance environment for video communication. In this work, the network simulator was used only as a channel. Mixture block coding with progressive transmission (MBCPT) has been investigated for use over packet networks and has been found to provide a high compression rate with good visual performance, robustness to packet loss, tractable integration with network mechanisms, and simplicity of parallel implementation.
Analysis of visual quality improvements provided by known tools for HDR content
NASA Astrophysics Data System (ADS)
Kim, Jaehwan; Alshina, Elena; Lee, JongSeok; Park, Youngo; Choi, Kwang Pyo
2016-09-01
In this paper, the visual quality of different solutions for high dynamic range (HDR) compression is analyzed using MPEG test contents. We also simulate a method for efficient HDR compression based on statistical properties of the signal. The method is compliant with the HEVC specification and is also easily compatible with alternative methods that might require changes to the specification. It was subjectively tested on commercial TVs and compared with alternative solutions for HDR coding. Subjective visual quality tests were performed on SUHD TVs (Samsung JS9500, maximum luminance up to 1000 nits). The statistics-based solution shows improvement in both objective performance and visual quality compared to other HDR solutions, while remaining compatible with the HEVC specification.
Synthesis and Characterization of a Poly(ethylene glycol)-Poly(simvastatin) Diblock Copolymer
Asafo-Adjei, Theodora A.; Dziubla, Thomas D.; Puleo, David A.
2014-01-01
Biodegradable polyesters are commonly used as drug delivery vehicles, but their role is typically passive, and encapsulation approaches have limited drug payload. An alternative drug delivery method is to polymerize the active agent or its precursor into a degradable polymer. The prodrug simvastatin contains a lactone ring that lends itself to ring-opening polymerization (ROP). Consequently, simvastatin polymerization was initiated with 5 kDa monomethyl ether poly(ethylene glycol) (mPEG) and catalyzed via stannous octoate. Melt condensation reactions produced a 9.5 kDa copolymer with a polydispersity index of 1.1 at 150 °C up to a 75 kDa copolymer with an index of 6.9 at 250 °C. Kinetic analysis revealed first-order propagation rates. Infrared spectroscopy of the copolymer showed carboxylic and methyl ether stretches unique to simvastatin and mPEG, respectively. Slow degradation was demonstrated in neutral and alkaline conditions. Lastly, simvastatin, simvastatin-incorporated molecules, and mPEG were identified as the degradation products released. The present results show the potential of using ROP to polymerize lactone-containing drugs such as simvastatin. PMID:25431653
NASA Astrophysics Data System (ADS)
Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake
2003-07-01
4DVideo is creating a general-purpose capability for capturing and analyzing kinematic data from video sequences in near real-time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") is designed to capture and manipulate customized AVI files that can contain a variety of synchronized data streams -- including audio, video, centroid locations -- and signals acquired from more traditional sources (such as accelerometers and strain gauges). The code includes simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancillary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two and three dimensions -- with or without lens calibrations. Trajectory data can be processed within the main application or exported to a spreadsheet, where they can be processed or passed along to a more sophisticated, stand-alone data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope, or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end user in capturing and treating image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology, which effectively eliminates the file size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats. The latter is particularly useful for high-speed applications where the raw images are often captured and stored by the camera before being downloaded. Provision has been made to synchronize data acquired from any combination of these video sources using audio and visual "tags." Additional "front-ends," designed for digital cameras, are anticipated.
ERIC Educational Resources Information Center
Oggins, Jean; Sammis, Jeffrey
2012-01-01
In this study, 438 players of the online video game, World of Warcraft, completed a survey about video game addiction and answered an open-ended question about behaviors they considered characteristic of video game addiction. Responses were coded and correlated with players' self-reports of being addicted to games and scores on a modified video…
Depiction of Health Effects of Electronic Cigarettes on YouTube
Merianos, Ashley L.; Gittens, Olivia E.; Mahabee-Gittens, E. Melinda
2016-01-01
Background: This study was conducted to assess the quantity, quality, and reach of e-cigarette health effects YouTube videos, and to quantify the description of positive and negative e-cigarette health effects and promotional content in each video. Method: Searches for videos were conducted in 2015 using the YouTube search engine, and the top 20 search results by relevance and view count were identified. Videos were classified into educational/medical/news, advertising/marketing, and personal/testimonial categories. A coding sheet was used to assess the presence or absence of negative and positive health effects, and promotional content. Results: Of the 320 videos retrieved, only 55 unique videos were included. The largest share of videos (46.9%) was educational/medical/news, 29.7% were personal/testimonial, and 23.4% were advertising/marketing. The three most common negative health effects discussed were nicotine, e-cigarettes not being FDA regulated, and known and unknown health consequences related to e-cigarette use. The top positive health effects discussed were that e-cigarettes can help individuals quit smoking, that e-cigarettes are healthier than smoking, and that e-cigarettes involve no smoke or secondhand smoke exposure. Conclusions: It is critical to monitor YouTube health effects content and develop appropriate messages to inform consumers about the risks associated with use while mitigating misleading information presented. PMID:28217030
Portrayal of Alcohol Intoxication on YouTube
Primack, Brian A.; Colditz, Jason B.; Pang, Kevin C.; Jackson, Kristina M.
2015-01-01
Background: We aimed to characterize the content of leading YouTube videos related to alcohol intoxication and to examine factors associated with alcohol intoxication in videos that were assessed positively by viewers. Methods: We systematically captured the 70 most relevant and popular videos on YouTube related to alcohol intoxication. We employed an iterative codebook development process, which resulted in 42 codes in 6 categories: video characteristics, character socio-demographics, alcohol depiction, degree of alcohol use, characteristics associated with alcohol, and consequences of alcohol. Results: There were a total of 333,246,875 views for all videos combined. While 89% of videos involved males, only 49% involved females. The videos had a median of 1,646 (IQR 300-22,969) “like” designations and 33 (IQR 14-1,261) “dislike” designations each. Liquor was most frequently represented, followed by beer and then wine/champagne. Nearly one-half (44%) of videos contained a brand reference. Humor was juxtaposed with alcohol use in 79% of videos, and motor vehicle use was present in 24%. There were significantly more likes per dislike, indicating more positive sentiment, when there was representation of liquor (29.1 vs. 11.4, p = .008), brand references (32.1 vs. 19.2, p = .04), and/or physical attractiveness (67.5 vs. 17.8, p < .001). Conclusions: Internet videos depicting alcohol intoxication are heavily viewed. Nearly half of these videos involve a brand-name reference. While these videos commonly juxtapose alcohol intoxication with characteristics such as humor and attractiveness, they infrequently depict negative clinical outcomes. The popularity of this site may provide an opportunity for public health intervention. PMID:25703135
Comparison of three coding strategies for a low cost structure light scanner
NASA Astrophysics Data System (ADS)
Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming
2014-12-01
Coded structured light is widely used for 3D scanning, and different coding strategies are adopted to suit different goals. In this paper, three coding strategies are compared, and one of them is selected to implement a low-cost structured light scanner costing under €100. To reach this goal, the cheapest possible projector and video camera must be used, which leads to problems for light coding: a very cheap projector cannot generate complex intensity patterns, and even if it could, a very cheap camera could not capture them. Based on Gray code, three different strategies are implemented and compared, called phase-shift, line-shift, and bit-shift, respectively. The bit-shift Gray code is the contribution of this paper, in which a simple, stable light pattern is used to generate dense (mean point distance < 0.4 mm) and accurate (mean error < 0.1 mm) results. Full algorithm details and examples are presented in the paper.
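For reference, a minimal sketch of plain (non-shifted) Gray-code pattern generation and decoding is given below. It shows only the shared Gray-code backbone, not the bit-shift variant contributed by the paper; the projector resolution and the upstream thresholding step are assumptions.

```python
import numpy as np

def gray_code_patterns(width=1024, height=768):
    """Column-coded Gray-code stripe patterns for a structured-light projector:
    one binary image per bit plane (most significant bit first)."""
    n_bits = int(np.ceil(np.log2(width)))
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                                  # binary-reflected Gray code
    planes = [((gray >> b) & 1).astype(np.uint8) * 255 for b in range(n_bits - 1, -1, -1)]
    return [np.tile(p, (height, 1)) for p in planes]           # assumed 1024x768 projector

def decode_column(bits):
    """Recover the projector column from a pixel's thresholded bit sequence
    (MSB first) by converting the Gray code back to binary."""
    g = 0
    for b in bits:
        g = (g << 1) | int(b)
    mask = g >> 1
    while mask:                                                # Gray-to-binary prefix XOR
        g ^= mask
        mask >>= 1
    return g
```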
Progressive Dictionary Learning with Hierarchical Predictive Structure for Scalable Video Coding.
Dai, Wenrui; Shen, Yangmei; Xiong, Hongkai; Jiang, Xiaoqian; Zou, Junni; Taubman, David
2017-04-12
Dictionary learning has emerged as a promising alternative to the conventional hybrid coding framework. However, the rigid structure of sequential training and prediction degrades its performance in scalable video coding. This paper proposes a progressive dictionary learning framework with a hierarchical predictive structure for scalable video coding, especially in the low-bitrate region. For pyramidal layers, sparse representation based on a spatio-temporal dictionary is adopted to improve the coding efficiency of enhancement layers (ELs) with a guarantee of reconstruction performance. The overcomplete dictionary is trained to adaptively capture local structures along motion trajectories as well as exploit the correlations between neighboring layers of resolutions. Furthermore, progressive dictionary learning is developed to enable scalability in the temporal domain and restrict error propagation in a closed-loop predictor. Under the hierarchical predictive structure, online learning is leveraged to guarantee the training and prediction performance with an improved convergence rate. To align with the state-of-the-art scalable extension of H.264/AVC and the latest HEVC, standardized codec cores are utilized to encode the base and enhancement layers. Experimental results show that the proposed method outperforms the latest SHVC and HEVC simulcast over extensive test sequences with various resolutions.
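The sketch below shows the generic patch-dictionary building block that such frameworks rest on: learning an overcomplete dictionary from image patches and sparse-coding them with OMP. It uses scikit-learn for brevity and does not reproduce the paper's progressive, hierarchical predictive structure; the patch size, dictionary size, and sparsity are assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def learn_and_code(frame, n_atoms=64, sparsity=4):
    """Learn an overcomplete 8x8-patch dictionary from one frame and sparse-code
    its patches with orthogonal matching pursuit (OMP)."""
    patches = extract_patches_2d(frame.astype(np.float64), (8, 8),
                                 max_patches=2000, random_state=0)
    X = patches.reshape(len(patches), -1)
    X -= X.mean(axis=1, keepdims=True)            # remove the patch DC, coded separately
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=sparsity,
                                       random_state=0)
    codes = dico.fit(X).transform(X)              # sparse coefficients per patch
    recon = codes @ dico.components_              # sparse reconstruction of the patches
    return dico.components_, codes, recon
```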
Using QR Codes to Differentiate Learning for Gifted and Talented Students
ERIC Educational Resources Information Center
Siegle, Del
2015-01-01
QR codes are two-dimensional square patterns that are capable of coding information ranging from web addresses to links to YouTube videos. The codes save typing time and eliminate errors from entering addresses incorrectly. These codes make learning with technology easier for students and motivationally engage them in new ways.
Exclusively visual analysis of classroom group interactions
NASA Astrophysics Data System (ADS)
Tucker, Laura; Scherr, Rachel E.; Zickler, Todd; Mazur, Eric
2016-12-01
Large-scale audiovisual data that measure group learning are time consuming to collect and analyze. As an initial step towards scaling qualitative classroom observation, we qualitatively coded classroom video using an established coding scheme with and without its audio cues. We find that interrater reliability is as high when using visual data only—without audio—as when using both visual and audio data to code. Also, interrater reliability is high when comparing use of visual and audio data to visual-only data. We see a small bias to code interactions as group discussion when visual and audio data are used compared with video-only data. This work establishes that meaningful educational observation can be made through visual information alone. Further, it suggests that after initial work to create a coding scheme and validate it in each environment, computer-automated visual coding could drastically increase the breadth of qualitative studies and allow for meaningful educational analysis on a far greater scale.
Magnetic field exposure and behavioral monitoring system.
Thomas, A W; Drost, D J; Prato, F S
2001-09-01
To maximize the availability and usefulness of a small magnetic field exposure laboratory, we designed a magnetic field exposure system that has been used to test human subjects, caged or confined animals, and cell cultures. The system consists of three orthogonal pairs of square coils: 2 m side with 1 m separation, 1.75 m side with 0.875 m separation, and 1.5 m side with 0.75 m separation. Each coil consisted of ten turns of insulated 8-gauge stranded copper conductor. Each pair was driven by a constant-current amplifier via a digital-to-analog (D/A) converter. A 9-pole, zero-gain active Bessel low-pass filter (1 kHz corner frequency) before the amplifier input attenuated the high frequencies generated by the D/A conversion. The magnetic field was monitored with a 3D fluxgate magnetometer (0-3 kHz, ±1 mT) through an analog-to-digital converter. Behavioral monitoring used two monochrome video cameras (viewing the coil center vertically and horizontally), both of which could be video recorded and digitally encoded in real time to CD-ROM in Moving Picture Experts Group (MPEG) format. Human postural sway (standing balance) was monitored with a 3D forceplate mounted on the floor, connected to an analog-to-digital converter. Lighting was provided by 12 offset overhead dimmable fluorescent track lights and monitored using a digitally connected spectroradiometer. The dc resistance and inductance of each coil pair connected in series were: 1.5 m coil, 0.27 Ω and 1.2 mH; 1.75 m coil, 0.32 Ω and 1.4 mH; 2 m coil, 0.38 Ω and 1.6 mH. The frequency response of the 1.5 m coil set was 500 Hz at ±463 µT and 1 kHz at ±232 µT, with a 150 µs rise time from -200 µT(pk) to +200 µT(pk) (square wave), limited by the maximum voltage (±146 V) of the amplifier (Bessel filter bypassed). Copyright 2001 Wiley-Liss, Inc.
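As a rough stand-in for the low-pass filtering stage described above, the following sketch designs a 9th-order analog Bessel low-pass filter with a 1 kHz corner in SciPy and prints its attenuation at a few frequencies; it models only the transfer function, not the actual active-filter circuit, and the probe frequencies are arbitrary.

```python
import numpy as np
from scipy import signal

# 9th-order analog low-pass Bessel filter, -3 dB at the 1 kHz corner (norm='mag').
fc_hz = 1000.0
b, a = signal.bessel(9, 2 * np.pi * fc_hz, btype='low', analog=True, norm='mag')

# Attenuation at a few frequencies of interest.
w = 2 * np.pi * np.array([60.0, 500.0, 1000.0, 5000.0])
_, h = signal.freqs(b, a, worN=w)
for f, gain in zip(w / (2 * np.pi), 20 * np.log10(np.abs(h))):
    print(f"{f:7.1f} Hz: {gain:6.2f} dB")
```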
Compression of computer generated phase-shifting hologram sequence using AVC and HEVC
NASA Astrophysics Data System (ADS)
Xing, Yafei; Pesquet-Popescu, Béatrice; Dufaux, Frederic
2013-09-01
With the capability of achieving twice the compression ratio of Advanced Video Coding (AVC) at similar reconstruction quality, High Efficiency Video Coding (HEVC) is expected to become the new leading video coding technique. In order to reduce the storage and transmission burden of digital holograms, in this paper we propose to use HEVC for compressing phase-shifting digital hologram sequences (PSDHS). By simulating phase-shifting digital holography (PSDH) interferometry, interference patterns between illuminated three-dimensional (3D) virtual objects and the stepwise phase-shifted reference wave are generated as digital holograms. The hologram sequences are obtained by the movement of the virtual objects and compressed by AVC and HEVC. The experimental results show that AVC and HEVC are efficient at compressing PSDHS, with HEVC giving the better performance. Good compression rate and reconstruction quality can be obtained at bitrates above 15000 kbps.
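For context, the standard four-step phase-shifting relation that underlies PSDH can be sketched as follows (illustrative NumPy, not the authors' simulation code): four interferograms with reference phase shifts of k·π/2 are simulated, and the complex object wave is recovered from them with the usual four-step formula.

```python
import numpy as np

def psdh_interferograms(obj_wave, ref_amp=1.0):
    """Simulate the four phase-shifting interferograms I_k = |O + R e^{i k pi/2}|^2."""
    return [np.abs(obj_wave + ref_amp * np.exp(1j * k * np.pi / 2)) ** 2 for k in range(4)]

def psdh_reconstruct(I, ref_amp=1.0):
    """Standard four-step recovery of the complex object wave at the hologram plane:
    O = [(I0 - I2) + i (I1 - I3)] / (4 |R|)."""
    I0, I1, I2, I3 = I
    return ((I0 - I2) + 1j * (I1 - I3)) / (4 * ref_amp)
```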
Cranwell, Jo; Britton, John; Bains, Manpreet
2017-02-01
The purpose of the present study is to describe the portrayal of alcohol content in popular YouTube music videos. We used inductive thematic analysis to explore the lyrics and visual imagery in 49 UK Top 40 songs and music videos previously found to contain alcohol content and watched by many British adolescents aged between 11 and 18 years, and to examine whether branded content contravened alcohol industry advertising codes of practice. The analysis generated three themes. First, alcohol content was associated with sexualised imagery or lyrics and the objectification of women. Second, alcohol was associated with image, lifestyle and sociability. Finally, some videos showed alcohol overtly encouraging excessive drinking and drunkenness, including those containing branding, with no negative consequences to the drinker. Our results suggest that YouTube music videos promote positive associations with alcohol use. Further, several alcohol companies adopt marketing strategies in the video medium that are entirely inconsistent with their own or others' agreed advertising codes of practice. We conclude that, as a harm reduction measure, policies should change to prevent adolescent exposure to the positive promotion of alcohol and alcohol branding in music videos.
Chung, Kuo-Liang; Huang, Chi-Chao; Hsu, Tsu-Chun
2017-09-04
In this paper, we propose a novel adaptive chroma subsampling-binding and luma-guided (ASBLG) chroma reconstruction method for screen content images (SCIs). After receiving the decoded luma and subsampled chroma image from the decoder, a fast winner-first voting strategy is proposed to identify the chroma subsampling scheme that was used prior to compression. Then, the decoded luma image is subsampled in the same way as the identified scheme was applied to the chroma image, so that an accurate correlation can be established between the subsampled decoded luma image and the decoded subsampled chroma image. Accordingly, an adaptive sliding-window-based and luma-guided chroma reconstruction method is proposed. The related computational complexity analysis is also provided. Two quality metrics, the color peak signal-to-noise ratio (CPSNR) of the reconstructed chroma images and SCIs and the gradient-based structure similarity index (CGSS) of the reconstructed SCIs, are used to evaluate quality performance. Based on 26 typical test SCIs and 6 JCT-VC test screen content video sequences (SCVs), several experiments show that on average the CPSNR gains of all the reconstructed UV images by 4:2:0(A)-ASBLG, SCIs by 4:2:0(MPEG-B)-ASBLG, and SCVs by 4:2:0(A)-ASBLG are 2.1 dB, 1.87 dB, and 1.87 dB, respectively, when compared with the other combinations. Specifically, in terms of CPSNR and CGSS, CSBILINEAR-ASBLG for the test SCIs and CSBICUBIC-ASBLG for the test SCVs outperform the existing state-of-the-art comparative combinations, where CSBILINEAR and CSBICUBIC denote the luma-aware chroma subsampling schemes by Wang et al.
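The following sketch illustrates the general idea of luma-guided chroma reconstruction with a sliding window: each full-resolution chroma sample is interpolated from nearby decoded chroma samples, weighted by how similar their co-located luma values are to the target pixel's luma. It is a generic illustration rather than the ASBLG algorithm; the window size, Gaussian weighting, and co-location convention are assumptions.

```python
import numpy as np

def luma_guided_upsample(sub_chroma, luma, win=2, sigma=8.0):
    """Upsample 4:2:0 chroma (half resolution) to full resolution, weighting each
    neighboring chroma candidate by the similarity of its co-located luma to the
    target pixel's luma (simple Gaussian weighting on the luma difference)."""
    H, W = luma.shape
    out = np.empty((H, W), dtype=np.float64)
    for y in range(H):
        for x in range(W):
            cy, cx = y // 2, x // 2               # nearest subsampled chroma position
            ys = range(max(cy - win, 0), min(cy + win + 1, sub_chroma.shape[0]))
            xs = range(max(cx - win, 0), min(cx + win + 1, sub_chroma.shape[1]))
            num = den = 0.0
            for j in ys:
                for i in xs:
                    # luma co-located with the chroma sample (top-left of its 2x2 block)
                    d = float(luma[y, x]) - float(luma[2 * j, 2 * i])
                    w = np.exp(-(d * d) / (2 * sigma ** 2))
                    num += w * sub_chroma[j, i]
                    den += w
            out[y, x] = num / den
    return out
```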
Student learning outcomes associated with video vs. paper cases in a public health dentistry course.
Chi, Donald L; Pickrell, Jacqueline E; Riedy, Christine A
2014-01-01
Educational technologies such as video cases can improve health professions student learning outcomes, but few studies in dentistry have evaluated video-based technologies. The goal of this study was to compare outcomes associated with video and paper cases used in an introductory public health dentistry course. This was a retrospective cohort study with a historical control group. Based on dual coding theory, the authors tested the hypotheses that dental students who received a video case (n=37) would report better affective, cognitive, and overall learning outcomes than students who received a paper case (n=75). One-way ANOVA was used to test the hypotheses across ten cognitive, two affective, and one general assessment measures (α=0.05). Students in the video group reported a significantly higher overall mean effectiveness score than students in the paper group (4.2 and 3.3, respectively; p<0.001). Video cases were also associated with significantly higher mean scores across the remaining twelve measures and were effective in helping students achieve cognitive (e.g., facilitating good discussions, identifying public health problems, realizing how health disparities might impact their future role as dentists) and affective (e.g., empathizing with vulnerable individuals, appreciating how health disparities impact real people) goals. Compared to paper cases, video cases significantly improved cognitive, affective, and overall learning outcomes for dental students.
Development of a Video-Based Evaluation Tool in Rett Syndrome
ERIC Educational Resources Information Center
Fyfe, S.; Downs, J.; McIlroy, O.; Burford, B.; Lister, J.; Reilly, S.; Laurvick, C. L.; Philippe, C.; Msall, M.; Kaufmann, W. E.; Ellaway, C.; Leonard, H.
2007-01-01
This paper describes the development of a video-based evaluation tool for use in Rett syndrome (RTT). Components include a parent-report checklist, and video filming and coding protocols that contain items on eating, drinking, communication, hand function and movements, personal care and mobility. Ninety-seven of the 169 families who initially…
Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos
NASA Astrophysics Data System (ADS)
Xu, Dawen; Wang, Rangding
2015-05-01
A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.
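The substitution principle can be illustrated with a toy example: payload bits are embedded by choosing between two equal-length, interchangeable codewords, so the stream length is unchanged and the bits can be read back by inspecting which member of each pair appears. The pairs, names, and data layout below are hypothetical and do not reflect the actual CABAC syntax elements handled in the paper.

```python
# Hypothetical pairs of interchangeable, equal-length bin-strings.
PAIRS = [("010", "011"), ("0010", "0011")]
CARRIER = {s: (i, b) for i, pair in enumerate(PAIRS) for b, s in enumerate(pair)}

def embed(binstrings, bits):
    """Replace each carrier bin-string so that the chosen pair member encodes the
    next payload bit; non-carrier strings pass through untouched."""
    bits = list(bits)
    out = []
    for s in binstrings:
        if s in CARRIER and bits:
            pair_idx, _ = CARRIER[s]
            out.append(PAIRS[pair_idx][bits.pop(0)])
        else:
            out.append(s)
    return out

def extract(binstrings):
    """Recover payload bits from the pair member observed for each carrier."""
    return [CARRIER[s][1] for s in binstrings if s in CARRIER]

# Example: embed two bits; lengths (and thus file size) are unchanged.
stream = ["1", "010", "00", "0011", "111"]
marked = embed(stream, [1, 0])
print(marked, extract(marked))   # payload [1, 0] is recoverable
```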
[Development of a video image system for wireless capsule endoscopes based on DSP].
Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua
2008-02-01
A video image recorder to record video for wireless capsule endoscopes was designed. A TMS320C6211 DSP from Texas Instruments is the core processor of the system. Images are periodically acquired from a Composite Video Baseband Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data are transported from a high-speed first-in first-out (FIFO) buffer to the digital signal processor (DSP) under the control of a Complex Programmable Logic Device (CPLD). The JPEG algorithm is adopted for image coding, and the compressed data in the DSP are stored to a CompactFlash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast discrete cosine transform (DCT) algorithm and a fast coefficient quantization algorithm are used to increase the operating speed of the DSP and reduce the code size. At the same time, proper addresses are assigned to memories of different speeds, and the memory structure is optimized. In addition, the system makes extensive use of Enhanced Direct Memory Access (EDMA) to transport and process image data, resulting in stable and high performance.
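The DCT-plus-quantization step at the heart of the JPEG coding mentioned above can be sketched as follows (Python/NumPy for clarity rather than the fixed-point DSP implementation); the quantization table is the standard JPEG luminance table.

```python
import numpy as np
from scipy.fftpack import dct, idct

# Standard JPEG luminance quantization table (Annex K).
Q_LUMA = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=np.float64)

def dct2(block):
    """Separable 2-D DCT-II (orthonormal), applied per 8x8 block."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def encode_block(block, q=Q_LUMA):
    """Level-shift, transform, and quantize one 8x8 block."""
    return np.round(dct2(block.astype(np.float64) - 128.0) / q).astype(np.int32)

def decode_block(coeffs, q=Q_LUMA):
    """Dequantize, inverse-transform, and level-shift back to pixel values."""
    rec = idct(idct(coeffs * q, axis=0, norm='ortho'), axis=1, norm='ortho') + 128.0
    return np.clip(np.round(rec), 0, 255).astype(np.uint8)
```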
Tactile Cueing for Target Acquisition and Identification
2005-09-01
method of coding tactile information, and the method of presenting elevation information were studied. Results: Subjects were divided into video-game-experienced... (VGP) subjects and non-video-game-experienced (NVGP) subjects. VGPs showed a significantly lower target acquisition time with the 12... that video game players performed better with the highest level of tactile resolution, while non-video-game players performed better with a simpler pattern and a lower resolution display.
ERIC Educational Resources Information Center
King, Keith; Laake, Rebecca A.; Bernard, Amy
2006-01-01
This study examined the sexual messages depicted in music videos aired on MTV, MTV2, BET, and GAC from August 2, 2004 to August 15, 2004. One-hour segments of music videos were taped daily for two weeks. Depictions of sexual attire and sexual behavior were analyzed via a four-page coding sheet (interrater-reliability = 0.93). Results indicated…
Analysis of view synthesis prediction architectures in modern coding standards
NASA Astrophysics Data System (ADS)
Tian, Dong; Zou, Feng; Lee, Chris; Vetro, Anthony; Sun, Huifang
2013-09-01
Depth-based 3D formats are currently being developed as extensions to both the AVC and HEVC standards. The availability of depth information facilitates the generation of intermediate views for advanced 3D applications and displays, and also enables more efficient coding of the multiview input data through view synthesis prediction techniques. This paper outlines several approaches that have been explored to realize view synthesis prediction in modern video coding standards such as AVC and HEVC. The benefits and drawbacks of various architectures are analyzed in terms of performance, complexity, and other design considerations. It is concluded that block-based VSP for multiview video signals provides attractive coding gains with complexity comparable to traditional motion/disparity compensation.
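A much-simplified sketch of the underlying view synthesis operation is given below: a reference view is forward-warped to a horizontally displaced camera using per-pixel depth, with nearer pixels overwriting farther ones, and unfilled holes would normally be inpainted or handled by the prediction logic. The 8-bit inverse-depth convention, camera parameters, and disparity sign convention are assumptions; this is not a specific standardized VSP design.

```python
import numpy as np

def synthesize_view(ref, depth, focal, baseline, z_near, z_far):
    """Forward-warp a reference view to a horizontally shifted camera using per-pixel
    depth: disparity = focal * baseline / Z (1-D horizontal parallax only)."""
    H, W = ref.shape[:2]
    # 8-bit inverse-depth map converted to metric depth Z (common DIBR convention).
    z = 1.0 / (depth / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    disparity = np.round(focal * baseline / z).astype(np.int64)
    out = np.zeros_like(ref)
    filled = np.zeros((H, W), dtype=bool)
    # Process far pixels first so that nearer pixels, written last, win occlusions.
    order = np.argsort(z, axis=None)[::-1]
    ys, xs = np.unravel_index(order, (H, W))
    for y, x in zip(ys, xs):
        xt = x - disparity[y, x]                   # assumed target camera to the right
        if 0 <= xt < W:
            out[y, xt] = ref[y, x]
            filled[y, xt] = True
    return out, filled                             # holes (filled == False) need filling
```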
47 CFR 0.457 - Records not routinely available for public inspection.
Code of Federal Regulations, 2012 CFR
2012-10-01
... inspection. (5) Section 1905 of the federal criminal code, the Trade Secrets Act, 18 U.S.C. 1905, prohibits... inspection, 5 U.S.C. 552(b)(4) and 18 U.S.C. 1905. (1) The materials listed in this paragraph have been... video programming distributors. (v) The rates, terms and conditions in any agreement between a U.S...