Science.gov

Sample records for advanced video coding

  1. Multiview-video-plus-depth coding based on the advanced video coding standard.

    PubMed

    Hannuksela, Miska M; Rusanovskyy, Dmytro; Su, Wenyi; Chen, Lulu; Li, Ri; Aflaki, Payman; Lan, Deyan; Joachimiak, Michal; Li, Houqiang; Gabbouj, Moncef

    2013-09-01

This paper presents a multiview-video-plus-depth coding scheme that is compatible with the advanced video coding (H.264/AVC) standard and its multiview video coding (MVC) extension. The scheme introduces several encoding and in-loop coding tools for depth and texture video coding, such as depth-based texture motion vector prediction, depth-range-based weighted prediction, joint inter-view depth filtering, and gradual view refresh. The presented coding scheme was submitted to the 3D video coding (3DV) call for proposals (CfP) of the Moving Picture Experts Group standardization committee. When measured with commonly used objective metrics against the MVC anchor, the proposed scheme provides an average bitrate reduction of 26% and 35% for the 3DV CfP test scenarios with two and three views, respectively. A similar bitrate reduction is observed in an analysis of the results of the subjective tests on the 3DV CfP submissions. PMID:23797252

  2. Film grain noise modeling in advanced video coding

    NASA Astrophysics Data System (ADS)

    Oh, Byung Tae; Kuo, C.-C. Jay; Sun, Shijun; Lei, Shawmin

    2007-01-01

A new technique for film grain noise extraction, modeling, and synthesis is proposed and applied to the coding of high-definition video in this work. Film grain noise is viewed as part of the artistic presentation by people in the movie industry. On one hand, since film grain noise can enhance the natural appearance of pictures in high-definition video, it should be preserved in high-fidelity video processing systems. On the other hand, coding video with film grain noise is expensive. It is therefore desirable to extract the film grain noise from the input video as a pre-processing step at the encoder, and to re-synthesize it and add it back to the decoded video as a post-processing step at the decoder. Under this framework, the coding gain of the denoised video is higher, while the quality of the final reconstructed video is still well preserved. Following this idea, we present a method to remove film grain noise from image/video without distorting the original content. In addition, we describe a parametric model containing a small set of parameters to represent the extracted film grain noise. The proposed model generates film grain noise that is close to the real one in terms of power spectral density and cross-channel spectral correlation. Experimental results demonstrate the efficiency of the proposed scheme.
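
The synthesis side of such a scheme amounts to shaping white noise toward a target power spectral density and adding it back after decoding. A minimal sketch, assuming a separable FIR shaping filter as the parametric model (the paper's actual model, parameters, and filter are not specified in this abstract):

```python
import numpy as np

def synthesize_grain(shape, psd_taps, strength, rng):
    """Generate film-grain-like noise: white Gaussian noise shaped by a
    small separable FIR filter whose taps stand in for the transmitted
    spectral parameters (an illustrative model, not the paper's)."""
    white = rng.standard_normal(shape)
    taps = np.asarray(psd_taps, dtype=float)
    taps = taps / np.sqrt(np.sum(taps ** 2))          # unit-energy filter
    # separable 2-D filtering: rows, then columns
    shaped = np.apply_along_axis(lambda r: np.convolve(r, taps, mode="same"), 1, white)
    shaped = np.apply_along_axis(lambda c: np.convolve(c, taps, mode="same"), 0, shaped)
    return strength * shaped

rng = np.random.default_rng(0)
grain = synthesize_grain((64, 64), psd_taps=[1.0, 0.5, 0.25], strength=4.0, rng=rng)
denoised = np.full((64, 64), 128.0)                   # stand-in for a decoded, denoised frame
reconstructed = np.clip(denoised + grain, 0, 255)     # decoder post-processing step
```

Only the few filter taps and the strength would need to be transmitted, which is what makes the parametric approach cheap compared with coding the grain itself.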

  3. Performance and Complexity Co-evaluation of the Advanced Video Coding Standard for Cost-Effective Multimedia Communications

    NASA Astrophysics Data System (ADS)

    Saponara, Sergio; Denolf, Kristof; Lafruit, Gauthier; Blanch, Carolina; Bormans, Jan

    2004-12-01

The advanced video codec (AVC) standard, recently defined by a joint video team (JVT) of ITU-T and ISO/IEC, is introduced in this paper together with a co-evaluation of its performance and complexity. While the basic framework is similar to the motion-compensated hybrid scheme of previous video coding standards, additional tools improve the compression efficiency at the expense of an increased implementation cost. As a first step to bridge the gap between the algorithmic design of a complex multimedia system and its cost-effective realization, a high-level co-evaluation approach is proposed and applied to a real-life AVC design. An exhaustive analysis of the codec's compression efficiency versus complexity (memory and computational costs) design space is carried out at the early algorithmic design phase. If all new coding features are used, the improved AVC compression efficiency (up to 50% compared to previous video coding technology) comes with a complexity increase of a factor of 2 for the decoder and of more than one order of magnitude for the encoder. This represents a challenge for resource-constrained multimedia systems such as wireless devices or high-volume consumer electronics. The analysis also highlights an important property of the AVC framework that allows complexity reduction at the system level: when the new coding features are combined, the implementation complexity accumulates, while the global compression efficiency saturates. Thus, a proper selection of AVC tools maintains the same performance as the most complex configuration while considerably reducing complexity. The reported results provide inputs to assist the profile definition in the standard, highlight the AVC bottlenecks, and select optimal trade-offs between algorithmic performance and complexity.

  4. Video coding with dynamic background

    NASA Astrophysics Data System (ADS)

    Paul, Manoranjan; Lin, Weisi; Lau, Chiew Tong; Lee, Bu-Sung

    2013-12-01

Motion estimation (ME) and motion compensation (MC) using variable block sizes, sub-pixel search, and multiple reference frames (MRFs) are the major reasons for the improved coding performance of the H.264 video coding standard over other contemporary coding standards. The concept of MRFs is suitable for repetitive motion, uncovered background, non-integer pixel displacement, lighting change, etc. However, the index codes required for the reference frames, the computational time of ME and MC, and the memory buffer for coded frames limit the number of reference frames used in practical applications. In typical video sequences, the previous frame is used as the reference frame in 68-92% of cases. In this article, we propose a new video coding method using a reference frame [i.e., the most common frame in scene (McFIS)] generated by dynamic background modeling. McFIS is more effective in terms of rate-distortion and computational time performance than MRF-based techniques. It also has an inherent capability for scene change detection (SCD), which we exploit for adaptive group of pictures (GOP) size determination; as a result, we integrate SCD (for GOP determination) with reference frame generation. The experimental results show that the proposed coding scheme outperforms H.264 coding with five reference frames and two relevant state-of-the-art algorithms by 0.5-2.0 dB with less computational time.
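
A McFIS-style reference frame can be approximated with a simple per-pixel background model. A minimal sketch using a running average (the paper's dynamic background model is more elaborate; `alpha` and the toy frames here are illustrative assumptions):

```python
import numpy as np

def build_mcfis(frames, alpha=0.05):
    """Approximate a McFIS-like reference frame with a per-pixel running
    average: static background dominates, transient foreground decays
    away (a simple stand-in for the paper's background modeling)."""
    bg = frames[0].astype(float)
    for f in frames[1:]:
        bg = (1.0 - alpha) * bg + alpha * f.astype(float)
    return bg

# synthetic sequence: static background at 100 with one bright moving row
frames = []
for t in range(20):
    f = np.full((16, 16), 100.0)
    f[t % 16, :] = 250.0          # moving foreground
    frames.append(f)
frame_bg = build_mcfis(frames)     # stays close to the static background
```

Used as an extra reference frame, such a background picture captures uncovered-background regions that the immediately previous frame cannot predict.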

  5. Huffman coding in advanced audio coding standard

    NASA Astrophysics Data System (ADS)

    Brzuchalski, Grzegorz

    2012-05-01

This article presents several hardware architectures of an Advanced Audio Coding (AAC) Huffman noiseless encoder, their optimisations, and a working implementation. Much attention has been paid to optimising the demand on hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible within this standard. The Huffman encoder, together with the whole audio-video system, has been implemented on FPGA devices.
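
The Huffman construction underlying such an encoder can be sketched in a few lines. Note that AAC uses fixed, pre-defined codebooks rather than trees built per stream, so this textbook build is only illustrative of the coding principle:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table by repeatedly merging the two
    lowest-weight subtrees (textbook construction; AAC itself selects
    among fixed standard codebooks instead)."""
    freq = Counter(symbols)
    if len(freq) == 1:                           # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tiebreak, merged])
        tiebreak += 1
    return heap[0][2]

table = huffman_code("abracadabra")
bitstream = "".join(table[s] for s in "abracadabra")   # 23 bits for this input
```

The hardware question the paper addresses is orthogonal: how to store and search such tables with minimal memory.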

  6. Segmentation-based video coding

    SciTech Connect

    Lades, M.; Wong, Yiu-fai; Li, Qi

    1995-10-01

Low bit rate video coding is gaining attention through a current wave of consumer-oriented multimedia applications that aim, e.g., at video conferencing over telephone lines or at wireless communication. In this work we describe a new segmentation-based approach to video coding, which belongs to a class of paradigms that appears very promising among the various proposed methods. Our method uses a nonlinear measure of local variance to identify the smooth areas in an image in a more indicative and robust fashion: first, the local minima in the variance image are identified. These minima then serve as seeds for the segmentation of the image with a watershed algorithm, and regions and their contours are extracted. Motion compensation is used to predict the change of regions between previous frames and the current frame, and the error signal is then quantized. To reduce the number of regions and contours, we use the motion information to assist the segmentation process and to merge regions, resulting in a further reduction in bit rate. Our scheme has been tested and good results have been obtained.
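
The seed-finding step can be sketched directly: compute a local-variance image and keep the pixels that are minima of their neighbourhood. This is a simplified stand-in (the paper's variance measure is nonlinear, and the watershed flooding itself is omitted here):

```python
import numpy as np

def local_variance(img, k=1):
    """Variance of each interior (2k+1)x(2k+1) window; borders stay 0."""
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(k, h - k):
        for x in range(k, w - k):
            out[y, x] = img[y - k:y + k + 1, x - k:x + k + 1].var()
    return out

def variance_minima(var):
    """Pixels whose local variance equals the minimum over their 3x3
    neighbourhood; these are the smooth-area seeds handed to the
    watershed segmentation (flooding step not shown)."""
    h, w = var.shape
    return [(y, x) for y in range(2, h - 2) for x in range(2, w - 2)
            if var[y, x] <= var[y - 1:y + 2, x - 1:x + 2].min()]

# two flat regions separated by a vertical step edge
img = np.full((12, 12), 50.0)
img[:, 6:] = 200.0
seeds = variance_minima(local_variance(img))
```

Seeds land only inside the flat regions, never on the step edge, which is exactly the behaviour that makes them robust starting points for region growing.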

  7. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

Table look-up plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operations result in many table memory accesses and hence high power consumption. To reduce the memory accesses, and thus the power consumption, of current methods, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper is the introduction of an index search technique that reduces the memory accesses required for table look-up. Specifically, our scheme uses index search to reduce the searching and matching operations for code_word by exploiting the internal relationship among the number of leading zeros in code_prefix, the value of code_suffix, and code_length, thus saving table look-up power. The experimental results show that the proposed index-search-based table look-up algorithm lowers memory access consumption by about 60% compared with a sequential-search table look-up scheme, thereby saving considerable power in CAVLD for H.264/AVC.
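
The index-search idea can be illustrated with Exp-Golomb codes, which have a similar leading-zeros/suffix structure to the CAVLD codewords: the zero count and the suffix together index the decoded value arithmetically, so no row-by-row codeword matching (and no per-row table access) is needed. A sketch, with Exp-Golomb standing in for the paper's actual tables:

```python
def decode_exp_golomb_indexed(bits):
    """Decode one Exp-Golomb codeword by direct indexing: count the
    leading zeros (n), read the n-bit suffix, and compute the value
    arithmetically instead of sequentially matching table entries."""
    n = 0
    while n < len(bits) and bits[n] == "0":
        n += 1                                # length of the zero run
    suffix = bits[n + 1:n + 1 + n]            # skip the terminating '1'
    value = (1 << n) - 1 + (int(suffix, 2) if suffix else 0)
    return value, n + 1 + n                   # (decoded value, bits consumed)

value, used = decode_exp_golomb_indexed("0011010")   # n=2, suffix '10'
```

One leading-zero count plus one arithmetic step replaces what a sequential scheme does with a memory access per candidate codeword, which is where the reported access savings come from.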

  8. Error resiliency of distributed video coding in wireless video communication

    NASA Astrophysics Data System (ADS)

    Ye, Shuiming; Ouaret, Mourad; Dufaux, Frederic; Ansorge, Michael; Ebrahimi, Touradj

    2008-08-01

    Distributed Video Coding (DVC) is a new paradigm in video coding, based on the Slepian-Wolf and Wyner-Ziv theorems. DVC offers a number of potential advantages: flexible partitioning of the complexity between the encoder and decoder, robustness to channel errors due to intrinsic joint source-channel coding, codec independent scalability, and multi-view coding without communications between the cameras. In this paper, we evaluate the performance of DVC in an error-prone wireless communication environment. We also present a hybrid spatial and temporal error concealment approach for DVC. Finally, we perform a comparison with a state-of-the-art AVC/H.264 video coding scheme in the presence of transmission errors.

  9. Layered Wyner-Ziv video coding.

    PubMed

    Xu, Qian; Xiong, Zixiang

    2006-12-01

    Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks. PMID:17153952
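
The nested scalar quantization at the heart of such a coder can be sketched as transmitting only a coset index and resolving the ambiguity with side information at the decoder. A minimal sketch (step size, nesting ratio, and the small candidate search are illustrative assumptions; the paper couples this with LDPC-based Slepian-Wolf coding):

```python
def wz_encode(x, step, m):
    """Nested scalar quantization: quantize x with step size `step`, but
    transmit only the coset index (quantizer index mod m), which needs
    log2(m) bits instead of the full index."""
    return round(x / step) % m

def wz_decode(coset, side_info, step, m):
    """Recover the full index by picking the coset member closest to the
    decoder's side information, then reconstruct."""
    base = round(side_info / step)
    candidates = [((base - coset) // m + d) * m + coset for d in (-1, 0, 1)]
    best = min(candidates, key=lambda q: abs(q * step - side_info))
    return best * step

coset = wz_encode(10.2, step=1.0, m=4)             # quantizer index 10 -> coset 2
xhat = wz_decode(coset, side_info=9.7, step=1.0, m=4)
```

Decoding succeeds as long as the side information lands within half a coset spacing of the source, which is the correlation assumption Wyner-Ziv coding rests on.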

  10. Advanced medical video services through context-aware medical networks.

    PubMed

    Doukas, Charalampos N; Maglogiannis, Ilias; Pliakas, Thomas

    2007-01-01

The aim of this paper is to present a framework for advanced medical video delivery services based on network and patient-state awareness. Under this scope, a context-aware medical networking platform is described. The developed platform enables appropriate medical video coding and transmission according to both (a) network availability and/or quality and (b) patient status, thus optimizing network performance and telediagnosis. An evaluation platform has been developed based on scalable H.264 coding of medical videos. Results of video transmission over a WiMax network demonstrate the effectiveness and efficiency of the platform in delivering appropriate video content. PMID:18002643

  11. A robust coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Y. C.; Sayood, Khalid; Nelson, D. J.

    1991-01-01

    We present a layered packet video coding algorithm based on a progressive transmission scheme. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.

  12. A robust coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung; Sayood, Khalid; Nelson, Don J.

    1992-01-01

    A layered packet video coding algorithm based on a progressive transmission scheme is presented. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.

  13. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    PubMed Central

    Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance; however, it also brings extremely high computational complexity. This paper presents work on improving the coding tree to further reduce encoding time. A novel low-complexity coding tree mechanism is proposed for fast HEVC coding unit (CU) encoding. First, the paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP), and content change (CC). Second, a CU coding tree probability model is proposed for modeling and predicting the CU distribution. Finally, a CU coding tree probability update is proposed, aiming to address the probabilistic model distortion caused by CC. Experimental results show that the proposed low-complexity CU coding tree mechanism significantly reduces encoding time: by 27% for lossy coding and by 42% for visually lossless and lossless coding. The proposed mechanism thus improves coding performance under various application conditions. PMID:26999741
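
The flavor of such a probability model can be sketched as a per-(depth, QP) split-success counter that gates the expensive split search. This is a toy stand-in for the paper's model; the threshold, the QP bucketing, and the update rule are all assumptions for illustration:

```python
class CUSplitModel:
    """Toy CU-split probability model: per (depth, QP bucket) it tracks
    how often splitting won the rate-distortion comparison, and skips the
    costly split evaluation when the estimated probability is low.  The
    update step stands in for the paper's content-change-driven refresh."""

    def __init__(self, threshold=0.1):
        self.counts = {}                      # (depth, qp_bucket) -> [wins, total]
        self.threshold = threshold

    def should_try_split(self, depth, qp):
        wins, total = self.counts.get((depth, qp // 6), [1, 2])   # weak prior 0.5
        return wins / total >= self.threshold

    def update(self, depth, qp, split_won):
        key = (depth, qp // 6)
        wins, total = self.counts.get(key, [1, 2])
        self.counts[key] = [wins + (1 if split_won else 0), total + 1]

model = CUSplitModel()
for _ in range(50):
    model.update(depth=2, qp=32, split_won=False)   # splits never pay off here
skip = not model.should_try_split(depth=2, qp=32)   # deep splits now skipped
```

Skipping the split search at depths where splits rarely win is what converts the statistical model into encoding-time savings.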

  14. Geometric prediction structure for multiview video coding

    NASA Astrophysics Data System (ADS)

    Lee, Seok; Wey, Ho-Cheon; Park, Du-Sik

    2010-02-01

One of the critical issues for a successful 3D video service is how to compress the huge amount of multi-view video data efficiently. In this paper, we describe a geometric prediction structure for multi-view video coding. By exploiting the geometric relations between camera poses, we can form prediction pairs that maximize the spatial correlation between views. To analyze the relationship of the camera poses, we define a mathematical view center and view distance in 3D space: a virtual center pose is computed from the mean rotation matrix and mean translation vector. We propose an algorithm for establishing the geometric prediction structure based on view center and view distance; using this structure, inter-view prediction is performed on the camera pairs of maximum spatial correlation. Our prediction structure also takes into account scalability in coding and transmitting the multi-view videos. Experiments were performed using the JMVC (Joint Multiview Video Coding) software on MPEG-FTV test sequences. The overall performance of the proposed prediction structure is measured in PSNR and in subjective image quality measures such as PSPNR.
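
Computing the virtual center pose from the mean rotation and translation can be sketched as follows. The SVD projection used here for the mean rotation is one standard way to average rotation matrices and is an assumption, since the abstract does not spell out the construction:

```python
import numpy as np

def view_center(rotations, translations):
    """Virtual view-center pose: the mean translation vector, plus a mean
    rotation obtained by projecting the summed rotation matrices back
    onto SO(3) via SVD (a common choice for averaging rotations)."""
    t_c = np.mean(translations, axis=0)
    u, _, vt = np.linalg.svd(np.sum(rotations, axis=0))
    r_c = u @ vt
    if np.linalg.det(r_c) < 0:                # keep a proper rotation
        u[:, -1] *= -1
        r_c = u @ vt
    return r_c, t_c

def view_distance(t, t_c):
    """Euclidean distance of a camera position from the view center
    (translation part only, for simplicity)."""
    return float(np.linalg.norm(np.asarray(t) - t_c))

# three parallel cameras spaced along the x axis
rs = [np.eye(3), np.eye(3), np.eye(3)]
ts = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])]
r_c, t_c = view_center(rs, ts)
```

Ordering views by their distance from this center is one natural way to decide which camera pairs get inter-view prediction.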

  15. Selective encryption for H.264/AVC video coding

    NASA Astrophysics Data System (ADS)

    Shi, Tuo; King, Brian; Salama, Paul

    2006-02-01

Due to the ease with which digital data can be manipulated, and due to ongoing advancements that have brought us closer to pervasive computing, the secure delivery of video and images has become a challenging problem. Despite the advantages and opportunities that digital video provides, illegal copying and distribution as well as plagiarism of digital audio, images, and video are still ongoing. In this paper we describe two techniques for securing H.264 coded video streams. The first technique, SEH264Algorithm1, groups the data into the following blocks: (1) a block that contains the sequence parameter set and the picture parameter set; (2) a block containing a compressed intra coded frame; (3) a block containing the slice header of a P slice, all the headers of the macroblocks within the same P slice, and all the luma and chroma DC coefficients belonging to all the macroblocks within the same slice; (4) a block containing all the AC coefficients; and (5) a block containing all the motion vectors. The first three are encrypted, whereas the last two are not. The second method, SEH264Algorithm2, relies on the use of multiple slices per coded frame. The algorithm searches the compressed video sequence for start codes (0x000001) and then encrypts the next N bits of data.
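
The start-code search of the second method can be sketched directly. In this sketch, XOR with a keystream stands in for a real cipher, and byte granularity replaces the N-bit granularity of the actual algorithm:

```python
def encrypt_after_start_codes(stream: bytes, n: int, keystream: bytes) -> bytes:
    """Find each 0x000001 start code and XOR the following n bytes with a
    keystream.  XOR is involutive, so the same call decrypts.  (A real
    scheme would use a proper cipher and avoid start-code emulation.)"""
    data = bytearray(stream)
    i = 0
    while i <= len(data) - 3:
        if data[i:i + 3] == b"\x00\x00\x01":
            for j in range(i + 3, min(i + 3 + n, len(data))):
                data[j] ^= keystream[(j - i - 3) % len(keystream)]
            i += 3 + n                        # skip past the encrypted run
        else:
            i += 1
    return bytes(data)

stream = b"\x00\x00\x01\x65\xaa\xbb" + b"\x00\x00\x01\x41\xcc"
enc = encrypt_after_start_codes(stream, n=2, keystream=b"\xff\xff")
```

Leaving the start codes themselves in the clear preserves the stream's packet boundaries, which is the point of selective (rather than full) encryption.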

  16. Coding visual features extracted from video sequences.

    PubMed

    Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2014-05-01

    Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics. PMID:24818244

  17. Fast prediction algorithm for multiview video coding

    NASA Astrophysics Data System (ADS)

    Abdelazim, Abdelrahman; Mein, Stephen James; Varley, Martin Roy; Ait-Boudaoud, Djamel

    2013-03-01

    The H.264/multiview video coding (MVC) standard has been developed to enable efficient coding for three-dimensional and multiple viewpoint video sequences. The inter-view statistical dependencies are utilized and an inter-view prediction is employed to provide more efficient coding; however, this increases the overall encoding complexity. Motion homogeneity is exploited here to selectively enable inter-view prediction, and to reduce complexity in the motion estimation (ME) and the mode selection processes. This has been accomplished by defining situations that relate macro-blocks' motion characteristics to the mode selection and the inter-view prediction processes. When comparing the proposed algorithm to the H.264/MVC reference software and other recent work, the experimental results demonstrate a significant reduction in ME time while maintaining similar rate-distortion performance.

  18. Practical distributed video coding in packet lossy channels

    NASA Astrophysics Data System (ADS)

    Qing, Linbo; Masala, Enrico; He, Xiaohai

    2013-07-01

Improving the error resilience of video communications over packet-lossy channels is an important and challenging task. We present a framework to optimize the quality of video communications based on distributed video coding (DVC) in practical packet-lossy network scenarios. The peculiar characteristics of DVC require a number of adaptations to take full advantage of its intrinsic robustness when dealing with the data losses of typical real packet networks. This work proposes a new packetization scheme, an investigation of the best error-correcting codes to use in a noisy environment, a practical rate-allocation mechanism that minimizes decoder feedback, and an improved side-information generation and reconstruction function. Performance comparisons are presented with respect to conventional packet video communication using H.264/advanced video coding (AVC). Although the H.264/AVC rate-distortion performance in the lossless case is currently better than that of state-of-the-art DVC schemes, under practical packet-loss conditions the proposed techniques perform better than an H.264/AVC-based system, especially at high packet loss rates. The error resilience of the proposed DVC scheme is thus superior to that of H.264/AVC, especially for transmission over packet-lossy networks.

  19. Generalized parallelization methodology for video coding

    NASA Astrophysics Data System (ADS)

    Leung, Kwong-Keung; Yung, Nelson H. C.

    1998-12-01

This paper describes a generalized parallelization methodology for mapping video coding algorithms onto a multiprocessing architecture through systematic task decomposition, scheduling, and performance analysis. It exploits the data parallelism inherent in the coding process and performs task scheduling based on task data size and access locality, with the aim of hiding as much communication overhead as possible. Utilizing Petri nets and task graphs for representation and analysis, the method enables parallel video frame capturing, buffering, and encoding without extra communication overhead. The theoretical speedup analysis indicates that this method offers excellent communication hiding, resulting in system efficiency well above 90%. An H.261 video encoder has been implemented on a TMS320C80 system using this method, and its performance was measured. The theoretical and measured performances agree: the measured speedup of the H.261 encoder is 3.67 and 3.76 on four parallel processors for QCIF and 352 x 240 video, respectively, corresponding to frame rates of 30.7 frames per second (fps) and 9.25 fps, and system efficiencies of 91.8% and 94%. As it stands, this method is particularly efficient for platforms with a small number of parallel processors.
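
The reported efficiency figures follow directly from the definition of parallel efficiency as achieved speedup over processor count:

```python
def efficiency(speedup, n_processors):
    """Parallel system efficiency: achieved speedup divided by the ideal
    (linear) speedup, i.e. the processor count."""
    return speedup / n_processors

eff_qcif = efficiency(3.67, 4)     # QCIF on four processors -> 0.9175
eff_sif = efficiency(3.76, 4)      # 352x240 on four processors -> 0.94
```

This checks out against the abstract's 91.8% and 94% figures.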

  20. Scalable video coding in frequency domain

    NASA Astrophysics Data System (ADS)

    Civanlar, Mehmet R.; Puri, Atul

    1992-11-01

Scalable video coding is important in a number of applications where video needs to be decoded and displayed at a variety of resolution scales. It is more efficient than simulcasting, in which all desired resolution scales are coded independently of one another within the constraint of a fixed available bandwidth. In this paper, we focus on scalability using the frequency domain approach. We employ the framework proposed for the ongoing second phase of the Moving Picture Experts Group (MPEG-2) standard to study the performance of one such scheme and investigate improvements aimed at increasing its efficiency. Practical issues related to multiplexing the encoded data of the various resolution scales to facilitate decoding are considered. Simulations are performed to investigate the potential of a chosen frequency domain scheme, and various prospects and limitations are discussed.

  1. Multirate 3-D subband coding of video.

    PubMed

    Taubman, D; Zakhor, A

    1994-01-01

    We propose a full color video compression strategy, based on 3-D subband coding with camera pan compensation, to generate a single embedded bit stream supporting multiple decoder display formats and a wide, finely gradated range of bit rates. An experimental implementation of our algorithm produces a single bit stream, from which suitable subsets are extracted to be compatible with many decoder frame sizes and frame rates and to satisfy transmission bandwidth constraints ranging from several tens of kilobits per second to several megabits per second. Reconstructed video quality from any of these bit stream subsets is often found to exceed that obtained from an MPEG-1 implementation, operated with equivalent bit rate constraints, in both perceptual quality and mean squared error. In addition, when restricted to 2-D, the algorithm produces some of the best results available in still image compression. PMID:18291953

  2. Overview of MPEG internet video coding

    NASA Astrophysics Data System (ADS)

    Wang, R. G.; Li, G.; Park, S.; Kim, J.; Huang, T.; Jang, E. S.; Gao, W.

    2015-09-01

MPEG has produced standards that have provided the industry with the best video compression technologies. In order to address the diversified needs of the Internet, MPEG issued a Call for Proposals (CfP) for internet video coding in July 2011. It is anticipated that any patent declaration associated with the Baseline Profile of this standard will indicate that the patent owner is prepared to grant a free-of-charge license to an unrestricted number of applicants on a worldwide, non-discriminatory basis and under other reasonable terms and conditions to make, use, and sell implementations of the Baseline Profile of this standard in accordance with the ITU-T/ITU-R/ISO/IEC Common Patent Policy. Three codecs responded to the CfP: WVC, VCB, and IVC. WVC was proposed jointly by Apple, Cisco, Fraunhofer HHI, Magnum Semiconductor, Polycom, RIM, and others; it is in fact AVC Baseline. VCB was proposed by Google; it is in fact VP8. IVC was proposed by several universities (Peking University, Tsinghua University, Zhejiang University, Hanyang University, Korea Aerospace University, and others), and its coding tools were developed from scratch. In this paper, we give an overview of the coding tools in IVC and evaluate its performance by comparing it with WVC, VCB, and AVC High Profile.

  3. Why Video? How Technology Advances Method

    ERIC Educational Resources Information Center

    Downing, Martin J., Jr.

    2008-01-01

    This paper reports on the use of video to enhance qualitative research. Advances in technology have improved our ability to capture lived experiences through visual means. I reflect on my previous work with individuals living with HIV/AIDS, the results of which are described in another paper, to evaluate the effectiveness of video as a medium that…

  4. Standards-based approaches to 3D and multiview video coding

    NASA Astrophysics Data System (ADS)

    Sullivan, Gary J.

    2009-08-01

The extension of video applications to enable 3D perception, typically taken to include a stereo viewing experience, is emerging as a mass market phenomenon, as is evident from the recent prevalence of major 3D cinema title releases. For high-quality 3D video to become a commonplace user experience beyond limited cinema distribution, adoption of an interoperable coded 3D digital video format will be needed. Stereo-view video can also be studied as a special case of the more general technologies of multiview and "free-viewpoint" video systems. The history of standardization work on this topic is richer than people may typically realize. The ISO/IEC Moving Picture Experts Group (MPEG), in particular, has been developing interoperability standards specifying such coding schemes since the advent of digital video as we know it. More recently, the ITU-T Video Coding Experts Group (VCEG) has been involved as well in the Joint Video Team (JVT) work on the development of 3D features for H.264/14496-10 Advanced Video Coding, including the Multiview Video Coding (MVC) extensions. This paper surveys the prior, ongoing, and anticipated future standardization efforts on this subject to provide an overview and historical perspective on feasible approaches to 3D and multiview video coding.

  5. The emerging High Efficiency Video Coding standard (HEVC)

    NASA Astrophysics Data System (ADS)

    Raja, Gulistan; Khan, Awais

    2013-12-01

High-definition video (HDV) is becoming more popular by the day. This paper describes a performance analysis of the latest upcoming video standard, known as High Efficiency Video Coding (HEVC). HEVC is designed to fulfil all the requirements of future high-definition video. In this paper, three configurations of HEVC (intra only, low delay, and random access) are analyzed using various 480p, 720p, and 1080p high-definition test video sequences. Simulation results show the superior objective and subjective quality of HEVC.

  6. Bit allocation for joint coding of multiple video programs

    NASA Astrophysics Data System (ADS)

    Wang, Limin; Vincent, Andre

    1997-01-01

By dynamically distributing the channel capacity among video programs according to their respective scene complexities, joint coding has been shown to be more efficient than independent coding for the compression of multiple video programs. This paper examines the bit allocation issue for joint coding of multiple video programs and provides a bit allocation strategy that results in uniform picture quality among programs as well as within a program.
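
One simple allocation policy consistent with this idea distributes the channel budget in proportion to each program's scene complexity. A sketch (the paper's actual strategy, which targets uniform quality, is more refined than straight proportionality):

```python
def allocate_bits(total_bits, complexities):
    """Split a shared channel budget across programs in proportion to
    their scene complexity measures; rounding remainder goes to the
    first program so the budget is spent exactly."""
    s = sum(complexities)
    alloc = [int(total_bits * c / s) for c in complexities]
    alloc[0] += total_bits - sum(alloc)
    return alloc

# a 6 Mbit/s channel shared by three programs of increasing complexity
alloc = allocate_bits(6_000_000, [1.0, 2.0, 3.0])
```

A complex scene thus borrows bits from a simple one in the same multiplex, which is exactly what independent (fixed-rate) coding cannot do.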

  7. Depth-based coding of MVD data for 3D video extension of H.264/AVC

    NASA Astrophysics Data System (ADS)

    Rusanovskyy, Dmytro; Hannuksela, Miska M.; Su, Wenyi

    2013-06-01

This paper describes a novel approach of using depth information for advanced coding of the associated video data in Multiview Video plus Depth (MVD)-based 3D video systems. As a possible implementation of this concept, we describe two coding tools that were developed for an H.264/AVC-based 3D video codec in response to the Moving Picture Experts Group (MPEG) Call for Proposals (CfP). These tools are Depth-based Motion Vector Prediction (DMVP) and Backward View Synthesis Prediction (BVSP). Simulation results conducted under the JCT-3V/MPEG 3DV Common Test Conditions show that the tools proposed in this paper reduce the bit rate of the coded video data by 15% in average delta bit rate, which results in 13% total bit rate savings for the MVD data over state-of-the-art MVC+D coding. Moreover, the concept of depth-based video coding presented in this paper has been further developed by MPEG 3DV and JCT-3V, resulting in even higher compression efficiency: about 20% total delta bit rate reduction for coded MVD data over the reference MVC+D coding. Considering these significant gains, the proposed coding approach can be beneficial for the development of new 3D video coding standards.

  8. Chroma sampling and modulation techniques in high dynamic range video coding

    NASA Astrophysics Data System (ADS)

    Dai, Wei; Krishnan, Madhu; Topiwala, Pankaj

    2015-09-01

    High Dynamic Range and Wide Color Gamut (HDR/WCG) Video Coding is an area of intense research interest in the engineering community, for potential near-term deployment in the marketplace. HDR greatly enhances the dynamic range of video content (up to 10,000 nits), as well as broadens the chroma representation (BT.2020). The resulting content offers new challenges in its coding and transmission. The Moving Picture Experts Group (MPEG) of the International Standards Organization (ISO) is currently exploring coding efficiency and/or the functionality enhancements of the recently developed HEVC video standard for HDR and WCG content. FastVDO has developed an advanced approach to coding HDR video, based on splitting the HDR signal into a smoothed luminance (SL) signal, and an associated base signal (B). Both signals are then chroma downsampled to YFbFr 4:2:0 signals, using advanced resampling filters, and coded using the Main10 High Efficiency Video Coding (HEVC) standard, which has been developed jointly by ISO/IEC MPEG and ITU-T WP3/16 (VCEG). Our proposal offers both efficient coding, and backwards compatibility with the existing HEVC Main10 Profile. That is, an existing Main10 decoder can produce a viewable standard dynamic range video, suitable for existing screens. Subjective tests show visible improvement over the anchors. Objective tests show a sizable gain of over 25% in PSNR (RGB domain) on average, for a key set of test clips selected by the ISO/MPEG committee.
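
    The 4:2:0 chroma downsampling step in the pipeline above can be illustrated with a simple 2x2 box filter. This is a stand-in for the advanced resampling filters the proposal actually uses; the function name and filter choice here are assumptions.

```python
import numpy as np

def downsample_420(chroma):
    """Convert a full-resolution chroma plane to 4:2:0 resolution by
    2x2 box averaging: each output sample is the mean of a 2x2 block
    of input samples (half resolution horizontally and vertically)."""
    h, w = chroma.shape
    return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

plane = np.arange(16, dtype=float).reshape(4, 4)
print(downsample_420(plane))  # 2x2 plane of 2x2-block means
```

    Production encoders replace the box filter with longer, phase-correct resampling kernels to avoid chroma shift and aliasing.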

  9. An Advanced Video Sensor for Automated Docking

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Bryan, Thomas C.; Book, Michael L.; Roe, Fred (Technical Monitor)

    2001-01-01

    This paper describes the current developments in video-based sensors at the Marshall Space Flight Center. The Advanced Video Guidance Sensor is the latest in a line of video-based sensors designed for use in automated docking systems. The X-33, X-34, X-38, and X-40 are all designed to be unpiloted vehicles; such vehicles will require a sensor system that will provide adequate data for the vehicle to accomplish its mission. One of the primary tasks planned for re-usable launch vehicles is to resupply the space station. In order to approach the space station in a self-guided manner, the vehicle must have a reliable and accurate sensor system to provide relative position and attitude information between the vehicle and the space station. The Advanced Video Guidance Sensor is being designed and built to meet this requirement, as well as requirements for other vehicles docking to a variety of target spacecraft. The Advanced Video Guidance Sensor is being designed to allow range and bearing information to be measured at ranges up to 2 km. The sensor will measure 6-degree-of-freedom information (relative positions and attitudes) from approximately 40 meters all the way in to final contact (approximately 1 meter range). The sensor will have a data output rate of 20 Hz during tracking mode, and will be able to acquire a target within one half of a second. The prototype of the sensor will be near completion at the time of the conference.

  10. Coding tools investigation for next generation video coding based on HEVC

    NASA Astrophysics Data System (ADS)

    Chen, Jianle; Chen, Ying; Karczewicz, Marta; Li, Xiang; Liu, Hongbin; Zhang, Li; Zhao, Xin

    2015-09-01

    The new state-of-the-art video coding standard, H.265/HEVC, was finalized in 2013; it achieves roughly 50% bit rate savings compared to its predecessor, H.264/MPEG-4 AVC. This paper provides evidence that there is still potential for further coding efficiency improvements. A brief overview of HEVC is first given, and then our improvements to each main module of HEVC are presented. For instance, the recursive quadtree block structure is extended to support larger coding units and transform units. The motion information prediction scheme is improved by advanced temporal motion vector prediction, which inherits the motion information of each small block within a large block from a temporal reference picture. Cross-component prediction with a linear prediction model improves intra prediction, and overlapped block motion compensation improves the efficiency of inter prediction. Furthermore, coding of both intra and inter prediction residuals is improved by an adaptive multiple transform technique. Finally, in addition to the deblocking filter and SAO, an adaptive loop filter is applied to further enhance reconstructed picture quality. This paper describes the above-mentioned techniques in detail and evaluates their coding performance benefits based on the common test conditions used during HEVC development. The simulation results show that significant performance improvement over the HEVC standard can be achieved, especially for high-resolution video material.
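
    The cross-component prediction with a linear model mentioned above can be sketched as a least-squares fit on neighbouring reconstructed samples. This is illustrative only; the standardized tools derive the model parameters with different, division-free arithmetic, and the function name is hypothetical.

```python
import numpy as np

def ccp_predict(recon_luma, neighbor_luma, neighbor_chroma):
    """Fit chroma ~= a * luma + b on already-decoded neighbouring
    samples, then predict the current block's chroma from its
    reconstructed luma (the essence of cross-component prediction)."""
    a, b = np.polyfit(neighbor_luma, neighbor_chroma, 1)
    return a * np.asarray(recon_luma, dtype=float) + b

# Neighbours suggest chroma = 2 * luma + 10; predict a 1x2 luma block.
pred = ccp_predict([[4.0, 5.0]], [0, 1, 2, 3], [10, 12, 14, 16])
print(pred)  # approximately [[18. 20.]]
```

    The gain comes from chroma planes being strongly correlated with luma structure, so only a small residual remains to code.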

  11. Advances in digital video for electronic media.

    PubMed

    McAfooes, J A

    1997-01-01

    From media's early days of film strips and records to today's multimedia CD-ROMs, nurses have embraced educational tools. Today, the capabilities of these tools have created tremendous demand for information anytime, anywhere, which has led to increasing digitization of sights and sounds. Once digitized, this information can travel over information highways made up of telephone lines, fiber-optic cables, microwaves, and satellites, or it can be stored on magnetic and optical media. Technological advances have made it possible for computer users to create, store, and retrieve high-quality digital still and moving video and audio for inclusion in electronic media. Methods for digitizing include capturing and converting the information with cameras, scanners, and capture boards. Digital video compression/decompression (codec) standards vary in quality. Potential uses of digital video abound, including video on demand, videoconferencing, distance learning, telemedicine, online education, and computer-based training. Examples illustrating the differences in digital video formats will be shown during the presentation. PMID:10175444

  12. Hardware-based JPEG 2000 video coding system

    NASA Astrophysics Data System (ADS)

    Schuchter, Arthur R.; Uhl, Andreas

    2007-02-01

    In this paper, we discuss a hardware based low complexity JPEG 2000 video coding system. The hardware system is based on a software simulation system, where temporal redundancy is exploited by coding of differential frames which are arranged in an adaptive GOP structure whereby the GOP structure itself is determined by statistical analysis of differential frames. We present a hardware video coding architecture which applies this inter-frame coding system to a Digital Signal Processor (DSP). The system consists mainly of a microprocessor (ADSP-BF533 Blackfin Processor) and a JPEG 2000 chip (ADV202).

  13. An HEVC extension for spatial and quality scalable video coding

    NASA Astrophysics Data System (ADS)

    Hinz, Tobias; Helle, Philipp; Lakshman, Haricharan; Siekmann, Mischa; Stegemann, Jan; Schwarz, Heiko; Marpe, Detlev; Wiegand, Thomas

    2013-02-01

    This paper describes an extension of the upcoming High Efficiency Video Coding (HEVC) standard for supporting spatial and quality scalable video coding. Besides scalable coding tools known from scalable profiles of prior video coding standards such as H.262/MPEG-2 Video and H.264/MPEG-4 AVC, the proposed scalable HEVC extension includes new coding tools that further improve the coding efficiency of the enhancement layer. In particular, new coding modes by which base and enhancement layer signals are combined for forming an improved enhancement layer prediction signal have been added. All scalable coding tools have been integrated in a way that the low-level syntax and decoding process of HEVC remain unchanged to a large extent. Simulation results for typical application scenarios demonstrate the effectiveness of the proposed design. For spatial and quality scalable coding with two layers, bit-rate savings of about 20-30% have been measured relative to simulcasting the layers, which corresponds to a bit-rate overhead of about 5-15% relative to single-layer coding of the enhancement layer.

  14. 3D high-efficiency video coding for multi-view video and depth data.

    PubMed

    Muller, Karsten; Schwarz, Heiko; Marpe, Detlev; Bartnik, Christian; Bosse, Sebastian; Brust, Heribert; Hinz, Tobias; Lakshman, Haricharan; Merkle, Philipp; Rhee, Franz Hunn; Tech, Gerhard; Winken, Martin; Wiegand, Thomas

    2013-09-01

    This paper describes an extension of the high efficiency video coding (HEVC) standard for coding of multi-view video and depth data. In addition to the known concept of disparity-compensated prediction, inter-view motion parameter prediction and inter-view residual prediction for coding of the dependent video views are developed and integrated. Furthermore, for depth coding, new intra coding modes, modified motion compensation and motion vector coding, as well as the concept of motion parameter inheritance, are part of the HEVC extension. A novel encoder control uses view synthesis optimization, which guarantees that high-quality intermediate views can be generated from the decoded data. The bitstream format supports the extraction of partial bitstreams, so that conventional 2D video, stereo video, and the full multi-view video plus depth format can be decoded from a single bitstream. Objective and subjective results are presented, demonstrating that the proposed approach provides 50% bit rate savings in comparison with HEVC simulcast and 20% in comparison with a straightforward multi-view extension of HEVC without the newly developed coding tools. PMID:23715605

  15. Semantic-preload video model based on VOP coding

    NASA Astrophysics Data System (ADS)

    Yang, Jianping; Zhang, Jie; Chen, Xiangjun

    2013-03-01

    In recent years, to reduce the semantic gap that exists between the high-level semantics and low-level features of video when humans interpret images or video, most work has pursued video annotation downstream of the signal, i.e., attaching labels to content already in a video database. Few have focused on the alternative idea: use limited interaction and comprehensive segmentation (including optical techniques) at the front end of video capture (i.e., at the camera), together with video semantic analysis technology, domain-specific concept sets (i.e., ontologies), shooting scripts, and scene task descriptions; then apply semantic descriptions at different levels to enrich the attributes of video objects and image regions, forming a new video model based on Video Object Plane (VOP) coding. This model has potentially intelligent features, carries a large amount of metadata, and embeds intermediate-level semantic concepts into every object. This paper focuses on the latter approach and presents a framework for the new model, provisionally named the Semantic-Preloaded Video Model or Semantic-Preload Video Model (abbreviated VMoSP or SPVM). The model mainly investigates how to label video objects and image regions in real time, usually with intermediate-level semantic labels, placing the work upstream of the signal (i.e., in the video capture and production stage). The paper also analyzes the hierarchical structure of video, dividing it into nine semantic levels that apply only to the video production process, and points out that the semantic-level tagging discussed here (i.e., semantic preloading) concerns only the four middle semantic levels.

  16. Study and simulation of low rate video coding schemes

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Yun-Chung; Kipp, G.

    1992-01-01

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color mapped images, robust coding scheme for packet video, recursively indexed differential pulse code modulation, image compression technique for use on token ring networks, and joint source/channel coder design.

  17. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  18. Object-adaptive depth compensated inter prediction for depth video coding in 3D video system

    NASA Astrophysics Data System (ADS)

    Kang, Min-Koo; Lee, Jaejoon; Lim, Ilsoon; Ho, Yo-Sung

    2011-01-01

    Nowadays, the 3D video system using the MVD (multi-view video plus depth) data format is being actively studied. The system has many advantages with respect to virtual view synthesis, such as auto-stereoscopic functionality, but compression of the huge input data remains a problem. Efficient 3D data compression is therefore extremely important in the system, and the problems of low temporal consistency and low viewpoint correlation should be resolved for efficient depth video coding. In this paper, we propose an object-adaptive depth compensated inter prediction method to resolve these problems, in which the object-adaptive mean-depth difference between a current block, to be coded, and a reference block is compensated during inter prediction. In addition, unique properties of depth video are exploited to reduce the side information required to signal the decoder to conduct the same process. To evaluate the coding performance, we have implemented the proposed method in the MVC (multiview video coding) reference software, JMVC 8.2. Experimental results demonstrate that our proposed method is especially efficient for depth videos estimated by DERS (depth estimation reference software), discussed in the MPEG 3DV coding group. The coding gain was up to 11.69% bit saving, and it increased further when evaluated on synthesized views of virtual viewpoints.
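
    The mean-depth compensation idea can be sketched as follows. This is a simplified model of the compensation step, not the authors' implementation; the function name is hypothetical.

```python
import numpy as np

def depth_compensated_residual(current, reference):
    """Inter prediction with mean-depth compensation: subtract the
    mean-depth offset between the current and reference blocks before
    computing the residual, so a uniform depth shift between views or
    frames costs almost nothing to code (only the offset is signaled)."""
    offset = current.mean() - reference.mean()
    residual = current - (reference + offset)
    return residual, offset

cur = np.full((4, 4), 130.0)   # current depth block
ref = np.full((4, 4), 100.0)   # reference depth block, uniformly closer
res, off = depth_compensated_residual(cur, ref)
print(off)             # 30.0
print(abs(res).max())  # 0.0 -- the uniform shift is fully compensated
```

    The paper additionally derives the offset adaptively per object and reduces the side information needed to transmit it; the sketch only shows the core compensation.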

  19. SCTP as scalable video coding transport

    NASA Astrophysics Data System (ADS)

    Ortiz, Jordi; Graciá, Eduardo Martínez; Skarmeta, Antonio F.

    2013-12-01

    This study presents an evaluation of the Stream Transmission Control Protocol (SCTP) for the transport of the scalable video codec (SVC), proposed by MPEG as an extension to H.264/AVC. Both technologies fit together properly. On the one hand, SVC permits to split easily the bitstream into substreams carrying different video layers, each with different importance for the reconstruction of the complete video sequence at the receiver end. On the other hand, SCTP includes features, such as the multi-streaming and multi-homing capabilities, that permit to transport robustly and efficiently the SVC layers. Several transmission strategies supported on baseline SCTP and its concurrent multipath transfer (CMT) extension are compared with the classical solutions based on the Transmission Control Protocol (TCP) and the Realtime Transmission Protocol (RTP). Using ns-2 simulations, it is shown that CMT-SCTP outperforms TCP and RTP in error-prone networking environments. The comparison is established according to several performance measurements, including delay, throughput, packet loss, and peak signal-to-noise ratio of the received video.

  20. Scalable video transmission over Rayleigh fading channels using LDPC codes

    NASA Astrophysics Data System (ADS)

    Bansal, Manu; Kondi, Lisimachos P.

    2005-03-01

    In this paper, we investigate an important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining a good decoded video quality and resilience to channel impairments. Our system consists of the video codec based on 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of the constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. Cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use the product code structure consisting of a constant rate LDPC/CRC code across the rows of the `blocks' of source data and an erasure-correction systematic Reed-Solomon (RS) code as the column code. In both the schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. The rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions and both the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform the more conventional schemes such as those employing RCPC/CRC.
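
    The Lagrangian selection of source and channel coding rates can be sketched generically. The operating points below are hypothetical, not the paper's rate-distortion data.

```python
def select_rates(rd_points, lam):
    """Pick the (source rate, channel rate) operating point that
    minimizes the Lagrangian cost J = D + lambda * R, the standard
    formulation for jointly selecting source and channel coding rates."""
    return min(rd_points,
               key=lambda p: p["distortion"] + lam * (p["src_rate"] + p["chan_rate"]))

# Hypothetical operating points: spending rate on channel protection
# lowers the expected distortion over a noisy channel.
points = [
    {"src_rate": 400, "chan_rate": 100, "distortion": 60.0},
    {"src_rate": 350, "chan_rate": 200, "distortion": 45.0},
    {"src_rate": 250, "chan_rate": 200, "distortion": 48.0},
]
print(select_rates(points, lam=0.1))  # the 250/200 split wins at this lambda
```

    Sweeping lambda traces out the operational rate-distortion curve; the paper performs this optimization per packet with unequal protection across the embedded bitstream.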

  1. CCTV Video Analytics: Recent Advances and Limitations

    NASA Astrophysics Data System (ADS)

    Velastin, Sergio A.

    There has been a significant increase in the number of CCTV cameras in public and private places worldwide. The cost of monitoring these cameras manually and of reviewing recorded video is prohibitive and therefore manual systems tend to be used mainly reactively with only a small fraction of the cameras being monitored at any given time. There is a need to automate at least simple observation tasks through computer vision, a functionality that has become known popularly as "video analytics". The large size of CCTV systems and the requirement of high detection rates and low false alarms are major challenges. This paper illustrates some of the recent efforts reported in the literature, highlighting advances and pointing out important limitations.

  2. Joint-source-channel coding scheme for scalable video-coding-based digital video broadcasting, second generation satellite broadcasting system

    NASA Astrophysics Data System (ADS)

    Seo, Kwang-Deok; Chi, Won Sup; Lee, In Ki; Chang, Dae-Ig

    2010-10-01

    We propose a joint-source-channel coding (JSCC) scheme that can provide and sustain high-quality video service despite deteriorated transmission channel conditions in the second-generation digital video broadcasting (DVB-S2) satellite broadcasting service. In particular, by combining the layered characteristics of SVC (scalable video coding) video and the robust channel coding capability of the LDPC (low-density parity check) codes employed in DVB-S2, a new concept of JSCC for digital satellite broadcasting service is developed. Rain attenuation in high-frequency bands such as the Ka band is a major factor in lowering the link capacity of a satellite broadcasting service. It is therefore necessary to devise a new technology that dynamically manages the rain attenuation by adopting a JSCC scheme that can apply variable code rates for both source and channel coding. For this purpose, we develop a JSCC scheme combining SVC and LDPC, and demonstrate its performance through extensive simulations in which SVC-coded video is transmitted over various error-prone channels with AWGN (additive white Gaussian noise) patterns in the DVB-S2 broadcasting service.

  3. Coding scheme for wireless video transport with reduced frame skipping

    NASA Astrophysics Data System (ADS)

    Aramvith, Supavadee; Sun, Ming-Ting

    2000-05-01

    We investigate the scenario of using the Automatic Repeat reQuest (ARQ) retransmission scheme for two-way low bit-rate video communications over wireless Rayleigh fading channels. We show that during the retransmission of error packets, due to the reduced channel throughput, the video encoder buffer may fill up quickly and cause the TMN8 rate-control algorithm to significantly reduce the bits allocated to each video frame. This results in Peak Signal-to-Noise Ratio (PSNR) degradation and many skipped frames. To reduce the number of frames skipped, in this paper we propose a coding scheme that takes into consideration the effects of video buffer fill-up, an a priori channel model, the channel feedback information, and hybrid ARQ/FEC. The simulation results indicate that our proposed scheme encodes the video sequences with far fewer skipped frames and higher PSNR compared to H.263 TMN8.
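
    The buffer fill-up behaviour that triggers frame skipping can be illustrated with a toy leaky-bucket model. This illustrates the problem the paper mitigates, not the proposed scheme itself; the threshold and rates are hypothetical.

```python
def encode_with_skip(frame_bits, channel_rate, buffer_cap, skip_threshold=0.9):
    """Simulate a TMN8-style encoder buffer: each coded frame's bits
    enter the buffer, the channel drains it at a fixed rate per frame
    interval, and a frame is skipped whenever occupancy exceeds a
    threshold fraction of the buffer capacity."""
    buf, skipped = 0, []
    for i, bits in enumerate(frame_bits):
        if buf > skip_threshold * buffer_cap:
            skipped.append(i)              # frame skipped: nothing enters the buffer
        else:
            buf += bits                    # frame coded: its bits are buffered
        buf = max(0, buf - channel_rate)   # channel drains one frame interval
    return skipped

# Three expensive frames over a 400 bit/interval channel overflow the buffer.
print(encode_with_skip([800, 800, 800, 200, 200], 400, 1000))  # [3]
```

    When retransmissions cut the effective channel rate, the drain term shrinks and skips multiply, which is exactly the effect the proposed buffer-aware coding scheme counteracts.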

  4. EZBC video streaming with channel coding and error concealment

    NASA Astrophysics Data System (ADS)

    Bajic, Ivan V.; Woods, John W.

    2003-06-01

    In this paper we present a system for streaming video content encoded using the motion-compensated Embedded Zero Block Coder (EZBC). The system incorporates unequal loss protection in the form of multiple description FEC (MD-FEC) coding, which provides adequate protection for the embedded video bitstream when the loss process is not very bursty. The adverse effects of burst losses are reduced using a novel motion-compensated error concealment method.

  5. Control Software for Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Book, Michael L.; Bryan, Thomas C.

    2006-01-01

    Embedded software has been developed specifically for controlling an Advanced Video Guidance Sensor (AVGS). A Video Guidance Sensor is an optoelectronic system that provides guidance for automated docking of two vehicles. Such a system includes pulsed laser diodes and a video camera, the output of which is digitized. From the positions of digitized target images and known geometric relationships, the relative position and orientation of the vehicles are computed. The present software consists of two subprograms running in two processors that are parts of the AVGS. The subprogram in the first processor receives commands from an external source, checks the commands for correctness, performs commanded non-image-data-processing control functions, and sends image data processing parts of commands to the second processor. The subprogram in the second processor processes image data as commanded. Upon power-up, the software performs basic tests of functionality, then effects a transition to a standby mode. When a command is received, the software goes into one of several operational modes (e.g. acquisition or tracking). The software then returns, to the external source, the data appropriate to the command.

  6. Conditional entropy coding of DCT coefficients for video compression

    NASA Astrophysics Data System (ADS)

    Sipitca, Mihai; Gillman, David W.

    2000-04-01

    We introduce conditional Huffman encoding of DCT run-length events to improve the coding efficiency of low- and medium-bit rate video compression algorithms. We condition the Huffman code for each run-length event on a classification of the current block. We classify blocks according to coding mode and signal type, which are known to the decoder, and according to energy, which the decoder must receive as side information. Our classification schemes improve coding efficiency with little or no increased running time and some increased memory use.
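
    Conditioning an entropy code on a block class simply means maintaining one Huffman codebook per class, each trained on that class's event statistics. A minimal sketch follows; the event frequencies are illustrative, and real coders code run-length events from standardized tables.

```python
import heapq

def huffman_code(freqs):
    """Build a Huffman code {symbol: bitstring} from a
    symbol -> frequency map via the classic merge procedure."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, uid, merged))
        uid += 1
    return heap[0][2]

# One codebook per block class: the event that is frequent in a class
# gets a short code there, even if it is rare in another class.
codebooks = {
    "intra_low_energy":  huffman_code({"EOB": 50, "(0,1)": 30, "(1,1)": 20}),
    "inter_high_energy": huffman_code({"EOB": 10, "(0,1)": 45, "(1,1)": 45}),
}
print(codebooks["intra_low_energy"]["EOB"])  # "0" -- one bit in this class
```

    The decoder selects the same codebook from coding mode and signal type (already known to it) plus the energy class sent as side information, so only the energy class costs extra bits.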

  7. A Robust Model-Based Coding Technique for Ultrasound Video

    NASA Technical Reports Server (NTRS)

    Docef, Alen; Smith, Mark J. T.

    1995-01-01

    This paper introduces a new approach to coding ultrasound video, the intended application being very low bit rate coding for transmission over low cost phone lines. The method exploits both the characteristic noise and the quasi-periodic nature of the signal. Data compression ratios between 250:1 and 1000:1 are shown to be possible, which is sufficient for transmission over ISDN and conventional phone lines. Preliminary results show this approach to be promising for remote ultrasound examinations.

  8. Template based illumination compensation algorithm for multiview video coding

    NASA Astrophysics Data System (ADS)

    Li, Xiaoming; Jiang, Lianlian; Ma, Siwei; Zhao, Debin; Gao, Wen

    2010-07-01

    Recently, the multiview video coding (MVC) standard was finalized as an extension of H.264/AVC by the Joint Video Team (JVT). In the Joint Multiview Video Model (JMVM) project for the standardization, illumination compensation (IC) is adopted as a useful tool. In this paper, a novel template-based illumination compensation algorithm is proposed. The basic idea of the algorithm is that the illumination of the current block has a strong correlation with that of its adjacent template. Based on this idea, a template-based illumination compensation method is first presented, and then a template model selection strategy is devised to improve the illumination compensation performance. The experimental results show that the proposed algorithm can improve coding efficiency significantly.

  9. Next Generation Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Lee, Jimmy; Spencer, Susan; Bryan, Tom; Johnson, Jimmie; Robertson, Bryan

    2008-01-01

    The first autonomous rendezvous and docking in the history of the U.S. Space Program was successfully accomplished by Orbital Express, using the Advanced Video Guidance Sensor (AVGS) as the primary docking sensor. The United States now has a mature and flight-proven sensor technology for supporting Crew Exploration Vehicle (CEV) and Commercial Orbital Transportation Services (COTS) Automated Rendezvous and Docking (AR&D). AVGS has a proven pedigree, based on extensive ground testing and flight demonstrations. The AVGS on the Demonstration of Autonomous Rendezvous Technology (DART) mission operated successfully in "spot mode" out to 2 km. The first-generation rendezvous and docking sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on Space Shuttle flights in 1997 and 1998. Parts obsolescence issues prevent the construction of more AVGS units, and the next-generation sensor must be updated to support the CEV and COTS programs. The flight-proven AR&D sensor is being redesigned to update parts and add capabilities for CEV and COTS with the development of the Next Generation AVGS (NGAVGS) at the Marshall Space Flight Center. The obsolete imager and processor are being replaced with new radiation-tolerant parts. In addition, new capabilities might include greater sensor range, auto ranging, and real-time video output. This paper presents an approach to sensor hardware trades and the use of highly integrated laser components, and addresses the needs of future vehicles that may rendezvous and dock with the International Space Station (ISS) and other Constellation vehicles. It also discusses approaches for upgrading AVGS to address parts obsolescence, and concepts for minimizing the sensor footprint, weight, and power requirements. In addition, parts selection and test plans for the NGAVGS are addressed to provide a highly reliable, flight-qualified sensor. Expanded capabilities through innovative use of existing capabilities will also be discussed.

  10. Low complexity video coding using SMPTE VC-2

    NASA Astrophysics Data System (ADS)

    Borer, Tim

    2013-09-01

    Low complexity video coding addresses different applications from, and is complementary to, video coding for delivery to the end user. Delivery codecs, such as the MPEG/ITU standards, provide very high compression ratios, but require high complexity and high latency. Some applications, by contrast, need the opposite characteristics: low complexity and low latency at low compression ratios. This paper discusses the applications and requirements of low complexity coding and, after discussing the prior art, describes the standard VC-2 (SMPTE 2042) codec, a wavelet codec designed for low complexity and ultra-low latency. VC-2 provides a wide range of coding parameters and compression ratios, allowing it to address applications such as texture coding, lossless coding, and high dynamic range coding. In particular, this paper describes results for the low complexity coding parameters of 2- and 3-level Haar and LeGall wavelet kernels, for image regions of 4x4 and 8x8 pixels, with both luma/color-difference signals and RGB. The paper indicates the quality that may be achieved at various compression ratios and also clearly shows the benefit of coding luma and color-difference components rather than RGB.
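
    A 2-level Haar transform, one of the kernels evaluated, can be sketched with orthonormal averaging and differencing steps. VC-2 itself uses integer lifting; this is a simplified floating-point model, and the function names are ours.

```python
import numpy as np

def haar_1d(x):
    """One 1-D Haar analysis step: scaled sums (low band) followed
    by scaled differences (high band)."""
    s = 1.0 / np.sqrt(2.0)
    return np.concatenate([(x[0::2] + x[1::2]) * s, (x[0::2] - x[1::2]) * s])

def haar_2d(block, levels=2):
    """Separable 2-D Haar transform, recursing on the LL band at
    each level, as in a 2-level Haar wavelet decomposition."""
    out = block.astype(float).copy()
    n = out.shape[0]
    for _ in range(levels):
        out[:n, :n] = np.apply_along_axis(haar_1d, 1, out[:n, :n])  # rows
        out[:n, :n] = np.apply_along_axis(haar_1d, 0, out[:n, :n])  # columns
        n //= 2
    return out

flat = np.full((4, 4), 10.0)           # a featureless 4x4 region
coeffs = haar_2d(flat)
print(coeffs[0, 0])  # approximately 40.0 -- all energy lands in the DC coefficient
```

    The energy compaction shown here (a flat block collapses to a single coefficient) is what makes wavelet kernels effective even at the very low complexity settings VC-2 targets.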

  11. Advanced Video Guidance Sensor (AVGS) Development Testing

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Johnston, Albert S.; Bryan, Thomas C.; Book, Michael L.

    2004-01-01

    NASA's Marshall Space Flight Center was the driving force behind the development of the Advanced Video Guidance Sensor, an active sensor system that provides near-range sensor data as part of an automatic rendezvous and docking system. The sensor determines the relative positions and attitudes between the active sensor and the passive target at ranges up to 300 meters. The AVGS uses laser diodes to illuminate retro-reflectors in the target, a solid-state camera to detect the return from the target, and image capture electronics and a digital signal processor to convert the video information into the relative positions and attitudes. The AVGS will fly as part of the Demonstration of Autonomous Rendezvous Technologies (DART) in October 2004. This development effort has required a great deal of testing of various sorts at every phase of development. Some of the test efforts included optical characterization of performance with the intended target, thermal vacuum testing, performance tests in long range vacuum facilities, EMI/EMC tests, and performance testing in dynamic situations. The sensor has been shown to track a target at ranges of up to 300 meters, both in vacuum and ambient conditions, to survive and operate during the thermal vacuum cycling specific to the DART mission, to handle EMI well, and to perform well in dynamic situations.

  12. Layered Low-Density Generator Matrix Codes for Super High Definition Scalable Video Coding System

    NASA Astrophysics Data System (ADS)

    Tonomura, Yoshihide; Shirai, Daisuke; Nakachi, Takayuki; Fujii, Tatsuya; Kiya, Hitoshi

    In this paper, we introduce layered low-density generator matrix (Layered-LDGM) codes for super high definition (SHD) scalable video systems. The layered-LDGM codes maintain the correspondence relationship of each layer from the encoder side to the decoder side. The resulting structure supports partial decoding. Furthermore, the proposed layered-LDGM codes create highly efficient forward error correcting (FEC) data by considering the relationship between the scalable components, thereby raising the probability of restoring the important components. Simulations show that the proposed layered-LDGM codes offer better error resiliency than the existing method, which creates FEC data for each scalable component independently. By supporting partial decoding and raising the probability of restoring the base component, the proposed codes are very well suited to scalable video coding systems.
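For intuition, here is a minimal sketch of plain (non-layered) LDGM encoding, where each parity bit is the XOR of a small random subset of source bits; the layering across scalable components that the paper contributes is not modeled, and the degree and seed are arbitrary illustrative choices.

```python
import random

def ldgm_encode(source_bits, num_parity, degree=3, seed=7):
    """LDGM encoding sketch: the generator matrix is sparse, so each
    parity bit is the XOR of only `degree` randomly chosen source bits."""
    rng = random.Random(seed)
    connections = [rng.sample(range(len(source_bits)), degree)
                   for _ in range(num_parity)]
    parity = []
    for conn in connections:
        acc = 0
        for i in conn:
            acc ^= source_bits[i]   # XOR the connected source bits
        parity.append(acc)
    return parity, connections

src = [1, 0, 1, 1, 0, 0, 1, 0]
parity, conns = ldgm_encode(src, num_parity=4)
```

The sparsity is what keeps encoding (and belief-propagation decoding) cheap enough for the SHD bitrates the paper targets.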

  13. Dynamic algorithm for correlation noise estimation in distributed video coding

    NASA Astrophysics Data System (ADS)

    Thambu, Kuganeswaran; Fernando, Xavier; Guan, Ling

    2010-01-01

    Low complexity encoders at the expense of high complexity decoders are advantageous in wireless video sensor networks. Distributed video coding (DVC) achieves this complexity balance: the receiver computes side information (SI) by interpolating the key frames, and the SI is modeled as a noisy version of the input video frame. In practice, correlation noise estimation at the receiver is a complex problem, and currently the noise is estimated from the residual variance between pixels of the key frames; the estimated (fixed) variance is then used to calculate the bit-metric values. In this paper, we introduce a new variance estimation technique that relies on the bit pattern of each pixel and is dynamically calculated over the entire motion environment, which helps to calculate the soft-value information required by the decoder. Our results show that the proposed bit-based dynamic variance estimation significantly improves the peak signal to noise ratio (PSNR) performance.
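A sketch of the baseline fixed-variance estimate that the paper improves on: the per-block variance of the key-frame residual serves as a proxy for the correlation noise between the side information and the true frame. The block size and the half-residual convention are illustrative assumptions.

```python
import numpy as np

def residual_variance(key_prev, key_next, block=8):
    """Baseline correlation-noise estimate common in DVC: per-block
    variance of the residual between the two key frames, standing in
    for the noise between the interpolated side information and the
    Wyner-Ziv frame it approximates."""
    resid = (key_prev.astype(float) - key_next.astype(float)) / 2.0
    h, w = resid.shape
    var = np.empty((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            patch = resid[by * block:(by + 1) * block,
                          bx * block:(bx + 1) * block]
            var[by, bx] = patch.var()
    return var
```

The paper's contribution is to replace this fixed per-block value with a dynamic, bit-pattern-driven estimate used when forming the decoder's soft values.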

  14. A Watermarking Scheme for High Efficiency Video Coding (HEVC)

    PubMed Central

    Swati, Salahuddin; Hayat, Khizar; Shahid, Zafar

    2014-01-01

    This paper presents a high payload watermarking scheme for High Efficiency Video Coding (HEVC). HEVC is an emerging video compression standard that provides better compression performance than its predecessor, H.264/AVC. Considering that HEVC is likely to be used in a variety of applications in the future, the proposed algorithm has a high potential of utilization in applications involving broadcast and hiding of metadata. The watermark is embedded into the Quantized Transform Coefficients (QTCs) during the encoding process. Later, during the decoding process, the embedded message can be detected and extracted completely. The experimental results show that the proposed algorithm does not significantly affect the video quality, nor does it escalate the bitrate. PMID:25144455
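The abstract does not give the embedding rule, so as a hypothetical illustration of hiding bits in QTCs, the sketch below uses LSB substitution on coefficients of magnitude at least 2 (so no nonzero coefficient collapses to zero, keeping run-length coding intact). This is not the paper's scheme.

```python
def embed_bits(qtcs, bits):
    """Hypothetical sketch: hide watermark bits in the LSBs of quantized
    transform coefficients with |c| >= 2, preserving each sign."""
    out = list(qtcs)
    it = iter(bits)
    for idx, c in enumerate(out):
        if abs(c) < 2:
            continue          # skip zeros and +/-1 so no coefficient vanishes
        try:
            b = next(it)
        except StopIteration:
            break
        mag = (abs(c) & ~1) | b   # replace the LSB of the magnitude
        out[idx] = mag if c > 0 else -mag
    return out

def extract_bits(qtcs, n):
    """Read the watermark back from the same coefficient positions."""
    bits = []
    for c in qtcs:
        if abs(c) < 2:
            continue
        bits.append(abs(c) & 1)
        if len(bits) == n:
            break
    return bits
```

Because an embedded magnitude never drops below 2, the decoder can re-identify the carrier positions without side information.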

  15. A robust low-rate coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Y. C.; Sayood, Khalid; Nelson, D. J.; Arikan, E. (Editor)

    1991-01-01

    Due to the rapidly evolving field of image processing and networking, video information promises to be an important part of telecommunication systems. Although up to now video transmission has been transported mainly over circuit-switched networks, it is likely that packet-switched networks will dominate the communication world in the near future. Asynchronous transfer mode (ATM) techniques in broadband-ISDN can provide a flexible, independent and high performance environment for video communication. For this paper, the network simulator was used only as a channel. Mixture block coding with progressive transmission (MBCPT) has been investigated for use over packet networks and has been found to provide a high compression rate with good visual performance, robustness to packet loss, tractable integration with network mechanics, and simplicity in parallel implementation.

  16. Advanced Video Data-Acquisition System For Flight Research

    NASA Technical Reports Server (NTRS)

    Miller, Geoffrey; Richwine, David M.; Hass, Neal E.

    1996-01-01

    Advanced video data-acquisition system (AVDAS) developed to satisfy variety of requirements for in-flight video documentation. Requirements range from providing images for visualization of airflows around fighter airplanes at high angles of attack to obtaining safety-of-flight documentation. F/A-18 AVDAS takes advantage of very capable systems like NITE Hawk forward-looking infrared (FLIR) pod and recent video developments like miniature charge-coupled-device (CCD) color video cameras and other flight-qualified video hardware.

  17. Robust video transmission with distributed source coded auxiliary channel.

    PubMed

    Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan

    2009-12-01

    We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints. PMID:19703801

  18. Motion Information Inferring Scheme for Multi-View Video Coding

    NASA Astrophysics Data System (ADS)

    Koo, Han-Suh; Jeon, Yong-Joon; Jeon, Byeong-Moon

    This letter proposes a motion information inferring scheme for multi-view video coding, motivated by the observation that motion vectors at corresponding positions in neighboring views are quite similar. The proposed method infers the motion information from the corresponding macroblock in the neighboring view after RD optimization with the existing prediction modes. This letter presents an evaluation showing that the method significantly enhances coding efficiency, especially at high bit rates.

  19. Picturewise inter-view prediction selection for multiview video coding

    NASA Astrophysics Data System (ADS)

    Huo, Junyan; Chang, Yilin; Li, Ming; Yang, Haitao

    2010-11-01

    Inter-view prediction is introduced in multiview video coding (MVC) to exploit the inter-view correlation. Statistical analyses show that the coding gain benefited from inter-view prediction is unequal among pictures. On the basis of this observation, a picturewise inter-view prediction selection scheme is proposed. This scheme employs a novel inter-view prediction selection criterion to determine whether it is necessary to apply inter-view prediction to the current coding picture. This criterion is derived from the available coding information of the temporal reference pictures. Experimental results show that the proposed scheme can improve the performance of MVC with a comprehensive consideration of compression efficiency, computational complexity, and random access ability.

  20. Distributed Coding/Decoding Complexity in Video Sensor Networks

    PubMed Central

    Cordeiro, Paulo J.; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972

  1. Improving Intra Prediction in High-Efficiency Video Coding.

    PubMed

    Chen, Haoming; Zhang, Tao; Sun, Ming-Ting; Saxena, Ankur; Budagavi, Madhukar

    2016-08-01

    Intra prediction is an important tool in intra-frame video coding to reduce the spatial redundancy. In the current coding standard H.265/high-efficiency video coding (HEVC), a copying-based method based on the boundary (or interpolated boundary) reference pixels is used to predict each pixel in the coding block to remove the spatial redundancy. We find that the conventional copying-based method can be further improved in two cases: 1) the boundary has an inhomogeneous region and 2) the predicted pixel is so far from the boundary that its correlation with the reference pixels is relatively weak. This paper performs a theoretical analysis of the optimal weights based on a first-order Gaussian Markov model, and of the effects when the pixel values deviate from the model and the predicted pixel is far away from the reference pixels. Based on this analysis, it also proposes a novel intra prediction scheme in which smoothing the copying-based prediction derives a better prediction block. Both the theoretical analysis and the experimental results show the effectiveness of the proposed intra prediction method. An average gain of 2.3% on all intra coding can be achieved with the HEVC reference software. PMID:27249831
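A toy illustration of the distance-dependent weighting such an analysis suggests: under a first-order Gauss-Markov (AR(1)) model with correlation rho, the linear prediction of a pixel d rows from the reference shrinks the reference toward the mean by rho**d instead of copying it unchanged. The vertical-mode-only form and the rho value are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def smoothed_vertical_pred(top_row, height, rho=0.9):
    """Distance-aware vertical intra prediction sketch: under an AR(1)
    model, a pixel d rows below the boundary is predicted as the block
    mean plus the zero-mean reference attenuated by rho**d.  With
    rho = 1 this degenerates to HEVC's plain copying-based prediction."""
    mean = top_row.mean()
    pred = np.empty((height, top_row.size))
    for d in range(1, height + 1):
        w = rho ** d                      # correlation decays with distance
        pred[d - 1] = mean + w * (top_row - mean)
    return pred
```

Far from the boundary the prediction relaxes toward the mean, which is exactly the regime where the abstract notes copying performs worst.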

  2. Video coding for next-generation surveillance systems

    NASA Astrophysics Data System (ADS)

    Klasen, Lena M.; Fahlander, Olov

    1997-02-01

    Video is used as a recording medium in surveillance systems and, increasingly, by the Swedish Police Force. Methods for analyzing video using an image processing system have recently been introduced at the Swedish National Laboratory of Forensic Science, and new methods are the focus of a research project at Linkoping University, Image Coding Group. The accuracy of the results of these forensic investigations often depends on the quality of the video recordings, and one of the major problems when analyzing videos from crime scenes is the poor quality of the recordings. Enhancing poor image quality might add manipulative or subjective effects and does not seem to be the right way of getting reliable analysis results. The surveillance systems in use today are mainly based on video techniques, VHS or S-VHS, and the weakest link is the video cassette recorder (VCR). Multiplexers that select one of many camera outputs for recording are another problem, as they often filter the video signal, and recording is limited to only one of the available cameras connected to the VCR. A way to get around the problem of poor recording is to simultaneously record all camera outputs digitally. It is also very important to build such a system bearing in mind that image processing analysis methods become more important as a complement to the human eye. Using one or more cameras gives a large amount of data, and the need for data compression is more than obvious. Crime scenes often involve persons or moving objects, and the available coding techniques are more or less useful. Our goal is to propose a possible system, being the best compromise with respect to what needs to be recorded, movements in the recorded scene, loss of information, resolution, etc., to secure the efficient recording of the crime and enable forensic analysis. The preventive effect of having a well-functioning surveillance system and well-established image analysis methods is not to be neglected. Aspects of

  3. Enhanced view random access ability for multiview video coding

    NASA Astrophysics Data System (ADS)

    Elmesloul Nasri, Seif Allah; Khelil, Khaled; Doghmane, Noureddine

    2016-03-01

    Apart from the efficient compression, reducing the complexity of the view random access is one of the most important requirements that should be considered in multiview video coding. In order to obtain an efficient compression, both temporal and inter-view correlations are exploited in the multiview video coding schemes, introducing higher complexity in the temporal and view random access. We propose an inter-view prediction structure that aims to lower the cost of randomly accessing any picture at any position and instant, with respect to the multiview reference model JMVM and other recent relevant works. The proposed scheme is mainly based on the use of two base views (I-views) in the structure with selected positions instead of a single reference view as in the standard structures. This will, therefore, provide a direct inter-view prediction for all the remaining views and will ensure a low-delay view random access ability while maintaining a very competitive bit-rate performance with a similar video quality measured in peak signal-to-noise ratio. In addition to a new evaluation method of the random access ability, the obtained results show a significant improvement in the view random accessibility with respect to other reported works.

  4. Block-based embedded color image and video coding

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Pearlman, William A.; Islam, Asad

    2004-01-01

    Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparably to the best-known still grayscale image coders such as EZW, SPIHT, and JPEG2000. In this paper, we first propose Color-SPECK (CSPECK), a natural extension of SPECK to handle color still images in the YUV 4:2:0 format. Extensions to other YUV formats are also possible. PSNR results indicate that CSPECK is among the best known color coders, while the perceptual quality of reconstruction is superior to that of SPIHT and JPEG2000. We then propose a moving picture based coding system called Motion-SPECK with CSPECK as the core algorithm in an intra-based setting. Specifically, we demonstrate two modes of operation of Motion-SPECK, namely the constant-rate mode, where every frame is coded at the same bit-rate, and the constant-distortion mode, where we ensure the same quality for each frame. Results on well-known CIF sequences indicate that Motion-SPECK performs comparably to Motion-JPEG2000, while the visual quality of the sequence is in general superior. Both CSPECK and Motion-SPECK automatically inherit all the desirable features of SPECK such as embeddedness, low computational complexity, highly efficient performance, fast decoding and low dynamic memory requirements. The intended applications of Motion-SPECK would be high-end and emerging video applications such as High Quality Digital Video Recording Systems, Internet Video, Medical Imaging, etc.

  5. Complexity control for high-efficiency video coding by coding layers complexity allocations

    NASA Astrophysics Data System (ADS)

    Fang, Jiunn-Tsair; Liang, Kai-Wen; Chen, Zong-Yi; Hsieh, Wei; Chang, Pao-Chi

    2016-03-01

    The latest video compression standard, high-efficiency video coding (HEVC), provides quad-tree structures of coding units (CUs) and four coding tree depths to facilitate coding efficiency. The HEVC encoder considerably increases the computational complexity to levels inappropriate for video applications of power-constrained devices. This work, therefore, proposes a complexity control method for the low-delay P-frame configuration of the HEVC encoder. The complexity control mechanism is among the group of pictures layer, frame layer, and CU layer, and each coding layer provides a distinct method for complexity allocation. Furthermore, the steps in the prediction unit encoding procedure are reordered. By allocating the complexity to each coding layer of HEVC, the proposed method can simultaneously satisfy the entire complexity constraint (ECC) for entire sequence encoding and the instant complexity constraint (ICC) for each frame during real-time encoding. Experimental results showed that as the target complexity under both the ECC and ICC was reduced to 80% and 60%, respectively, the decrease in the average Bjøntegaard delta peak signal-to-noise ratio was ~0.1 dB with an increase of 1.9% in the Bjøntegaard delta rate, and the complexity control error was ~4.3% under the ECC and 4.3% under the ICC.
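A toy sketch (not the paper's allocation rules) of splitting a GOP-level complexity budget across frames while respecting a per-frame cap, with unused budget carried forward so the whole-sequence constraint is still met; the 1.2x cap standing in for the instant constraint is an arbitrary assumption.

```python
def allocate_frame_budgets(total_budget, num_frames, weights=None):
    """Hierarchical complexity-allocation sketch: the GOP budget is split
    across frames in proportion to per-frame weights, each frame is
    clipped to a per-frame cap (a stand-in for the ICC), and any unspent
    share is carried forward so the sequence total (ECC) is respected."""
    weights = weights or [1.0] * num_frames
    wsum = sum(weights)
    cap = total_budget / num_frames * 1.2   # illustrative instant cap
    budgets, carry = [], 0.0
    for w in weights:
        share = total_budget * w / wsum + carry
        spent = min(share, cap)             # never exceed the instant cap
        carry = share - spent               # defer the surplus
        budgets.append(spent)
    return budgets
```

With equal weights every frame simply receives an equal share; with skewed weights the cap redistributes the excess to later frames instead of violating the per-frame limit.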

  6. The H.264/MPEG-4 AVC video coding standard and its deployment status

    NASA Astrophysics Data System (ADS)

    Sullivan, Gary J.

    2005-07-01

    The new video coding standard known as H.264/MPEG-4 Advanced Video Coding (AVC), now in its fourth version, has demonstrated significant achievements in terms of coding efficiency, robustness to a variety of network channels and conditions, and breadth of application. The recent fidelity range extensions have further improved compression quality and further broadened the range of applications, and the recent corrigenda have excised the inevitable errata of the initially-approved versions of the specification. Patent licensing programs have begun, the standard has been adopted into a variety of application specifications, and products suitable for widespread deployment have begun to appear. New work toward the near-term development of scalable video coding (SVC) extensions is also under way. This paper does not attempt to review the details of the H.264/MPEG-4 AVC technical design, as that subject has been covered already in a number of publications. Instead, it covers only the high-level design characteristics and focuses more on the recent developments in the standardization community and the deployment status of the specification.

  7. Fast coding unit selection method for high efficiency video coding intra prediction

    NASA Astrophysics Data System (ADS)

    Xiong, Jian

    2013-07-01

    The high efficiency video coding (HEVC) video coding standard under development can achieve higher compression performance than previous standards, such as MPEG-4, H.263, and H.264/AVC. To improve coding performance, a quad-tree coding structure and a robust rate-distortion (RD) optimization technique is used to select an optimum coding mode. Since the RD costs of all possible coding modes are computed to decide an optimum mode, high computational complexity is induced in the encoder. A fast learning-based coding unit (CU) size selection method is presented for HEVC intra prediction. The proposed algorithm is based on theoretical analysis that shows the non-normalized histogram of oriented gradient (n-HOG) can be used to help select CU size. A codebook is constructed offline by clustering n-HOGs of training sequences for each CU size. The optimum size is determined by comparing the n-HOG of the current CU with the learned codebooks. Experimental results show that the CU size selection scheme speeds up intra coding significantly with negligible loss of peak signal-to-noise ratio.
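A minimal sketch of the n-HOG idea: accumulate gradient magnitudes into orientation bins without normalization, then pick the CU size whose learned codeword lies nearest. The bin count, distance metric, and the tiny codebooks are illustrative assumptions; the paper clusters n-HOGs of training sequences offline.

```python
import numpy as np

def n_hog(block, bins=8):
    """Non-normalized histogram of oriented gradients of a block:
    gradient magnitudes are accumulated per orientation bin, so flat
    blocks yield near-zero histograms and textured blocks large ones."""
    gy, gx = np.gradient(block.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)              # orientation in [0, pi)
    idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    hist = np.zeros(bins)
    for i, m in zip(idx.ravel(), mag.ravel()):
        hist[i] += m                                     # no normalization
    return hist

def select_cu_size(block, codebooks):
    """Pick the CU size whose nearest codeword (hypothetical offline-
    learned n-HOG centroids) best matches the block's n-HOG."""
    h = n_hog(block)
    best_size, best_d = None, float('inf')
    for size, words in codebooks.items():
        d = min(np.linalg.norm(h - w) for w in words)
        if d < best_d:
            best_size, best_d = size, d
    return best_size
```

Because the histogram is left unnormalized, overall gradient energy (and not just orientation) influences the size decision, which is the feature the paper exploits.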

  8. Advanced Video Analysis Needs for Human Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Campbell, Paul D.

    1994-01-01

    Evaluators of human task performance in space missions make use of video as a primary source of data. Extraction of relevant human performance information from video is often a labor-intensive process requiring a large amount of time on the part of the evaluator. Based on the experiences of several human performance evaluators, needs were defined for advanced tools which could aid in the analysis of video data from space missions. Such tools should increase the efficiency with which useful information is retrieved from large quantities of raw video. They should also provide the evaluator with new analytical functions which are not present in currently used methods. Video analysis tools based on the needs defined by this study would also have uses in U.S. industry and education. Evaluation of human performance from video data can be a valuable technique in many industrial and institutional settings where humans are involved in operational systems and processes.

  9. Dependent video coding using a tree representation of pixel dependencies

    NASA Astrophysics Data System (ADS)

    Amati, Luca; Valenzise, Giuseppe; Ortega, Antonio; Tubaro, Stefano

    2011-09-01

    Motion-compensated prediction induces a chain of coding dependencies between pixels in video. In principle, an optimal selection of encoding parameters (motion vectors, quantization parameters, coding modes) should take into account the whole temporal horizon of a GOP. However, in practical coding schemes, these choices are made on a frame-by-frame basis, thus with a possible loss of performance. In this paper we describe a tree-based model for pixelwise coding dependencies: each pixel in a frame is the child of a pixel in a previous reference frame. We show that some tree structures are more favorable than others from a rate-distortion perspective, e.g., because they entail a large descendance of pixels which are well predicted from a common ancestor. In those cases, a higher quality has to be assigned to pixels at the top of such trees. We promote the creation of these structures by adding a special discount term to the conventional Lagrangian cost adopted at the encoder. The proposed model can be implemented through a double-pass encoding procedure. Specifically, we devise heuristic cost functions to drive the selection of quantization parameters and of motion vectors, which can be readily implemented into a state-of-the-art H.264/AVC encoder. Our experiments demonstrate that coding efficiency is improved for video sequences with low motion, while there are no apparent gains for more complex motion. We argue that this is due to both the presence of complex encoder features not captured by the model, and to the complexity of the source to be encoded.
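A toy 1-D version of the dependency forest described above: each pixel's motion entry points at its predictor in the previous frame, and counting the descendants of each first-frame root identifies the pixels that deserve extra quality. The dict-based motion field and 1-D "frames" are illustrative simplifications.

```python
from collections import defaultdict

def count_descendants(motion, num_frames, width):
    """Build the pixelwise dependency forest and count, for each pixel
    of frame 0 (the roots), how many pixels in later frames descend
    from it through the chain of motion-compensated predictions."""
    root = {(0, x): x for x in range(width)}   # (t, x) -> frame-0 ancestor
    counts = defaultdict(int)
    for t in range(1, num_frames):
        for x in range(width):
            ref = motion[(t, x)]               # predictor pixel in frame t-1
            r = root[(t - 1, ref)]             # propagate the root label
            root[(t, x)] = r
            counts[r] += 1
    return dict(counts)
```

A root with a large count is exactly the kind of pixel the paper's discounted Lagrangian cost would encode at higher quality, since its errors propagate to many descendants.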

  10. Fast motion prediction algorithm for multiview video coding

    NASA Astrophysics Data System (ADS)

    Abdelazim, Abdelrahman; Zhang, Guang Y.; Mein, Stephen J.; Varley, Martin R.; Ait-Boudaoud, Djamel

    2011-06-01

    Multiview Video Coding (MVC) is an extension to the H.264/MPEG-4 AVC video compression standard developed with joint efforts by MPEG/VCEG to enable efficient encoding of sequences captured simultaneously from multiple cameras using a single video stream. The design is therefore aimed at exploiting inter-view dependencies in addition to reducing temporal redundancies. However, this further increases the overall encoding complexity. In this paper, the high correlation between a macroblock and its enclosed partitions is utilised to estimate motion homogeneity, and based on the result inter-view prediction is selectively enabled or disabled. Moreover, if the MVC motion prediction is divided into three layers, the first being the full and sub-pixel motion search, the second being the mode selection process, and the third being repetition of the first and second for inter-view prediction, the proposed algorithm significantly reduces the complexity in all three layers. To assess the proposed algorithm, a comprehensive set of experiments was conducted. The results show that the proposed algorithm significantly reduces the motion estimation time whilst maintaining similar Rate Distortion performance, when compared to both the H.264/MVC reference software and recently reported work.
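A minimal sketch of the homogeneity test implied by the abstract: if the motion vectors of a macroblock's partitions barely spread around their mean, the motion field is homogeneous and the costly inter-view search can be skipped. The L1 spread measure and the threshold are illustrative assumptions.

```python
def motion_is_homogeneous(partition_mvs, threshold=1.0):
    """Early-termination sketch: measure how far each partition's motion
    vector (x, y) strays from the macroblock's mean motion vector; a
    small maximum deviation suggests homogeneous motion, so inter-view
    prediction may be selectively disabled for this macroblock."""
    n = len(partition_mvs)
    mx = sum(v[0] for v in partition_mvs) / n
    my = sum(v[1] for v in partition_mvs) / n
    spread = max(abs(v[0] - mx) + abs(v[1] - my) for v in partition_mvs)
    return spread <= threshold
```

In an encoder loop this boolean would gate the third prediction layer (the repeated search over the inter-view reference), which is where most of the saved time comes from.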

  11. An Adaptive Motion Estimation Scheme for Video Coding

    PubMed Central

    Gao, Yuan; Jia, Kebin

    2014-01-01

    The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to the statistical results of motion vector (MV) distribution information. Then, an MV distribution prediction method is designed, covering both the magnitude and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is achieved with the new search patterns. Experimental results show that more than 50% of total search points are dramatically reduced compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can save up to 20.86% of the ME time while the rate-distortion performance is not compromised. PMID:24672313

  12. Orbital Express Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Howard, Ricky; Heaton, Andy; Pinson, Robin; Carrington, Connie

    2008-01-01

    In May 2007 the first US fully autonomous rendezvous and capture was successfully performed by DARPA's Orbital Express (OE) mission. Since then, the Boeing ASTRO spacecraft and the Ball Aerospace NEXTSat have performed multiple rendezvous and docking maneuvers to demonstrate the technologies needed for satellite servicing. MSFC's Advanced Video Guidance Sensor (AVGS) is a primary near-field proximity operations sensor integrated into ASTRO's Autonomous Rendezvous and Capture Sensor System (ARCSS), which provides relative state knowledge to the ASTRO GN&C system. This paper provides an overview of the AVGS sensor flying on Orbital Express, and a summary of the ground testing and on-orbit performance of the AVGS for OE. The AVGS is a laser-based system that is capable of providing range and bearing at midrange distances and full six degree-of-freedom (6DOF) knowledge at near fields. The sensor fires lasers at two different frequencies to illuminate the Long Range Targets (LRTs) and the Short Range Targets (SRTs) on NEXTSat. Subtraction of one image from the other image removes extraneous light sources and reflections from anything other than the corner cubes on the LRTs and SRTs. This feature has played a significant role for Orbital Express in poor lighting conditions. The very bright spots that remain in the subtracted image are processed by the target recognition algorithms and the inverse-perspective algorithms, to provide 3DOF or 6DOF relative state information. Although Orbital Express has configured the ASTRO ARCSS system to only use AVGS at ranges of 120 m or less, some OE scenarios have provided opportunities for AVGS to acquire and track NEXTSat at greater distances. Orbital Express scenarios to date that have utilized AVGS include a berthing operation performed by the ASTRO robotic arm, sensor checkout maneuvers performed by the ASTRO robotic arm, 10-m unmated operations, 30-m unmated operations, and Scenario 3-1 anomaly recovery. 
The AVGS performed very
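The two-wavelength background-rejection idea described above can be sketched as follows; the threshold is an arbitrary illustrative value.

```python
import numpy as np

def isolate_targets(img_on, img_off, threshold=50):
    """AVGS-style background rejection sketch: the filtered corner cubes
    return light at only one of the two laser wavelengths, so subtracting
    the 'off'-wavelength image from the 'on'-wavelength image cancels
    sunlight, glints and other common light sources, leaving bright
    spots only at the retro-reflective targets."""
    diff = img_on.astype(int) - img_off.astype(int)   # widen to avoid uint8 wrap
    return diff > threshold                           # mask of candidate spots
```

The surviving bright spots would then feed the target-recognition and inverse-perspective stages that produce the 3DOF or 6DOF relative state.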

  13. Recent Advances in Video Meteor Photometry

    NASA Technical Reports Server (NTRS)

    Swift, Wesley R.; Suggs, Robert M.; Meachem, Terry; Cooke, William J.

    2003-01-01

    One of the most common (and obvious) problems with video meteor data involves the saturation of the output signal produced by bright meteors, resulting in the elimination of such meteors from photometric determinations. It is important to realize that a "bright" meteor recorded by an intensified meteor camera is not what would be considered "bright" by a visual observer - indeed, many Generation II or III camera systems are saturated by meteors with a visual magnitude of 3, barely even noticeable to the untrained eye. As the relatively small fields of view (approx. 30°) of the camera systems capture at best modest numbers of meteors, even during storm peaks, the loss of meteors brighter than +3 renders the determination of shower population indices from video observations even more difficult. Considerable effort has been devoted by the authors to the study of the meteor camera systems employed during the Marshall Space Flight Center's Leonid ground-based campaigns, and a calibration scheme has been devised which can extend the useful dynamic range of such systems by approximately 4 magnitudes. The calibration setup involves only simple equipment, available to amateur and professional alike, and it is hoped that use of this technique will make for better meteor photometry, and move video meteor analysis beyond the realm of simple counts.

  14. 3D video coding: an overview of present and upcoming standards

    NASA Astrophysics Data System (ADS)

    Merkle, Philipp; Müller, Karsten; Wiegand, Thomas

    2010-07-01

    An overview of existing and upcoming 3D video coding standards is given. Various different 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.

  15. Selective video encryption of a distributed coded bitstream using LDPC codes

    NASA Astrophysics Data System (ADS)

    Um, Hwayoung; Delp, Edward J.

    2006-02-01

    Selective encryption is a technique used to minimize computational complexity or enable system functionality by encrypting only a portion of a compressed bitstream while still achieving reasonable security. For selective encryption to work, we need to rely not only on the beneficial effects of redundancy reduction, but also on the characteristics of the compression algorithm, which concentrate the important data representing the source in a relatively small fraction of the compressed bitstream. These important elements of the compressed data become candidates for selective encryption. In this paper, we combine encryption and distributed video source coding to determine which types of bits are most effective for selective encryption of a video sequence that has been compressed using a distributed source coding method based on LDPC codes. Instead of encrypting the entire video stream bit by bit, we encrypt only the highly sensitive bits. By combining the compression and encryption tasks and thus reducing the number of bits encrypted, we can achieve a reduction in system complexity.
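
    The core idea above — cipher only the small set of high-sensitivity positions and send the rest in the clear — can be sketched as follows. This is a toy illustration, not the paper's scheme: the `positions` list, the key, and the SHA-256-based keystream (a stand-in for a real stream cipher) are all assumptions.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream from iterated SHA-256 (stand-in for a real stream cipher)."""
    out = bytearray()
    block = key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out.extend(block)
    return bytes(out[:n])

def selective_encrypt(bitstream: bytes, sensitive: list, key: bytes) -> bytes:
    """XOR-encrypt only the bytes at 'sensitive' positions (e.g., the bytes
    carrying the most important source information); all other bytes are
    left in the clear."""
    ks = keystream(key, len(sensitive))
    out = bytearray(bitstream)
    for k, pos in zip(ks, sensitive):
        out[pos] ^= k
    return bytes(out)

# XOR is an involution: applying the same key to the same positions twice
# restores the original stream.
stream = bytes(range(32))
positions = [0, 1, 2, 3, 8, 9]          # hypothetical "highly sensitive" bytes
enc = selective_encrypt(stream, positions, b"secret")
```

Only six of the 32 bytes are ever touched by the cipher, which is the complexity saving the abstract refers to.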

  16. Improved video coding efficiency exploiting tree-based pixelwise coding dependencies

    NASA Astrophysics Data System (ADS)

    Valenzise, Giuseppe; Ortega, Antonio

    2010-01-01

    In a conventional hybrid video coding scheme, the choice of encoding parameters (motion vectors, quantization parameters, etc.) is carried out by optimizing, frame by frame, the output distortion for a given rate budget. While it is well known that motion estimation naturally induces a chain of dependencies among pixels, this is usually not explicitly exploited in the coding process to improve overall coding efficiency. Specifically, when considering a group of pictures with an IPPP... structure, each pixel of the first frame can be thought of as the root of a tree whose children are the pixels of the subsequent frames predicted by it. In this work, we demonstrate the advantages of such a representation by showing that, in some situations, the best motion vector is not the one that minimizes the energy of the prediction residual, but the one that produces a better tree structure, e.g., one that is globally more favorable from a rate-distortion perspective. In this new structure, pixels with a larger descendance are allocated extra rate to produce higher quality predictors. As a proof of concept, we verify this assertion by assigning the quantization parameter in a video sequence in such a way that pixels with a larger number of descendants are coded with higher quality. In this way we are able to improve RD performance by nearly 1 dB. Our preliminary results suggest that a deeper understanding of these temporal dependencies can potentially lead to substantial gains in coding performance.
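
    The descendant-counting idea can be illustrated on a toy 1-D IPPP chain. This is a sketch under assumed simplifications (1-D frames, per-pixel integer motion, wrap-around addressing, illustrative QP mapping), not the paper's algorithm.

```python
# Toy 1-D IPPP chain: pixel i of frame t is predicted from pixel (i - mv) of
# frame t-1. Tracing every predicted pixel back to its root in the I frame
# gives each root's descendant count, and frequently referenced roots are
# then given a lower QP (higher quality) to serve as better predictors.
def descendant_counts(motion, n_pixels):
    counts = [0] * n_pixels            # descendants of each frame-0 pixel
    root = list(range(n_pixels))       # which root each current pixel traces to
    for mv in motion:                  # one MV list per predicted P frame
        new_root = []
        for i, v in enumerate(mv):
            r = root[(i - v) % n_pixels]
            counts[r] += 1
            new_root.append(r)
        root = new_root
    return counts

def assign_qp(counts, base_qp=32, step=2, max_drop=8):
    # More descendants -> lower QP (better-quality predictor), clamped.
    return [base_qp - min(max_drop, step * c) for c in counts]

motion = [[0, 0, 1, 0], [0, 1, 1, 0]]  # two P frames, 4 pixels each
counts = descendant_counts(motion, 4)
qp = assign_qp(counts)
```

Pixel 2 is never used as a predictor, so it keeps the base QP, while the heavily referenced roots are quantized more finely.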

  17. Quantization table design revisited for image/video coding.

    PubMed

    Yang, En-Hui; Sun, Chang; Meng, Jin

    2014-11-01

    Quantization table design is revisited for image/video coding where soft decision quantization (SDQ) is considered. Unlike conventional approaches, where quantization table design is bundled with a specific encoding method, we assume optimal SDQ encoding and design a quantization table for the purpose of reconstruction. Under this assumption, we model transform coefficients across different frequencies as independently distributed random sources and apply the Shannon lower bound to approximate the rate distortion function of each source. We then show that a quantization table can be optimized in a way that the resulting distortion complies with certain behavior. Guided by this new design principle, we propose an efficient statistical-model-based algorithm using the Laplacian model to design quantization tables for DCT-based image coding. When applied to standard JPEG encoding, it provides more than 1.5-dB performance gain in PSNR, with almost no extra burden on complexity. Compared with the state-of-the-art JPEG quantization table optimizer, the proposed algorithm offers an average 0.5-dB gain in PSNR with computational complexity reduced by a factor of more than 2000 when SDQ is OFF, and a 0.2-dB performance gain or more with 85% of the complexity reduced when SDQ is ON. Significant compression performance improvement is also seen when the algorithm is applied to other image coding systems proposed in the literature. PMID:25248184
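
    The idea of allocating rate across independently modeled frequency bands can be sketched with classic reverse water-filling. Note the hedge: this toy uses a Gaussian-shaped bound, whereas the paper works with a Laplacian model and the Shannon lower bound; the variances and budget are made up.

```python
import math

def reverse_water_filling(variances, total_rate, tol=1e-9):
    """Toy rate allocation across independent transform bands: band i gets
    R_i = max(0, 0.5*log2(var_i / theta)), with the water level theta found
    by bisection so that sum(R_i) == total_rate."""
    def rate(theta):
        return sum(max(0.0, 0.5 * math.log2(v / theta)) for v in variances)
    lo, hi = 1e-12, max(variances)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if rate(mid) > total_rate:     # too many bits spent -> raise the level
            lo = mid
        else:
            hi = mid
    theta = (lo + hi) / 2
    return [max(0.0, 0.5 * math.log2(v / theta)) for v in variances], theta

# Four DCT bands with decreasing variance, 4 bits total per sample vector.
rates, theta = reverse_water_filling([16.0, 4.0, 1.0, 0.25], total_rate=4.0)
```

High-variance (low-frequency) bands receive more rate, and the weakest band is cut off entirely — the same qualitative behavior a distortion-targeted quantization table encodes.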

  18. Zebra: An advanced PWR lattice code

    SciTech Connect

    Cao, L.; Wu, H.; Zheng, Y.

    2012-07-01

    This paper presents an overview of ZEBRA, an advanced PWR lattice code developed at the NECP laboratory of Xi'an Jiaotong University. The multi-group cross-section library is generated from the ENDF/B-VII library by NJOY, and the 361-group SHEM structure is employed. The resonance calculation module is based on the subgroup method. The transport solver is the self-developed Auto-MOC code, which is based on the Method of Characteristics and customization of the AutoCAD software. The whole code is organized in a modular software structure. Numerical results obtained during validation demonstrate that the code has good precision and high efficiency. (authors)

  19. Self-derivation of motion estimation techniques to improve video coding efficiency

    NASA Astrophysics Data System (ADS)

    Chiu, Yi-jen; Xu, Lidong; Zhang, Wenhao; Jiang, Hong

    2010-08-01

    This paper presents techniques to self-derive motion vectors (MVs) at the video decoder side to improve the coding efficiency of B pictures. With the MV information derived at the decoder, transmission of these MVs from the encoder to the decoder is skipped, and thus better coding efficiency can be achieved. The proposed techniques derive block-based MVs at the decoder by exploiting the temporal correlation among the available pixels in the previously decoded reference pictures. Decoder-side MV derivation is added as a coding mode candidate, which the encoder can select during mode decision to better trade off rate-distortion performance. Experiments on the ITU-T/VCEG Key Technology Area (KTA) reference software demonstrate an overall BD-rate improvement of about 7% for the hierarchical IbBbBbBbP coding structure under the common test conditions of the joint call for proposals for new video coding technology issued by the ISO/MPEG and ITU-T committees in January 2010.
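
    One common way to derive an MV at the decoder without transmitting it is bilateral matching between the two reference pictures of a B block. The sketch below is an assumed, simplified 1-D illustration of that general idea (symmetric linear motion, SAD matching); the paper's actual derivation procedure may differ.

```python
# Bilateral matching: assuming symmetric linear motion, the best mv for a
# B block minimizes the mismatch between the past reference displaced by
# +mv and the future reference displaced by -mv. Both encoder and decoder
# can run this search, so the MV itself is never transmitted.
def derive_mv(ref0, ref1, start, size, search=3):
    best_mv, best_sad = 0, float("inf")
    for mv in range(-search, search + 1):
        sad = sum(abs(ref0[start + i + mv] - ref1[start + i - mv])
                  for i in range(size))
        if sad < best_sad:
            best_mv, best_sad = mv, sad
    return best_mv

# 1-D frames: a textured background (ramp) plus an object moving +2
# samples/frame; in ref0 (t-1) it sits at 6..8, in ref1 (t+1) at 10..12.
ramp = list(range(20))
ref0 = ramp[:]; ref0[6:9] = [99, 99, 99]
ref1 = ramp[:]; ref1[10:13] = [99, 99, 99]
mv = derive_mv(ref0, ref1, start=8, size=3)   # block at 8..10 in current frame
```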

  20. Adaptive distributed video coding with correlation estimation using expectation propagation

    NASA Astrophysics Data System (ADS)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-01

    Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting complexity from the encoder to the decoder with, at least in theory, no degradation in compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC fall into two main types: pre-estimation, where estimation is performed before decoding, and on-the-fly (OTF) estimation, where the estimate is refined iteratively during decoding. As changes between frames may be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF jointly with decoding of the factor-graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity than sampling methods.

  1. Video Fact Sheets: Everyday Advanced Materials

    SciTech Connect

    2015-10-06

    What are Advanced Materials? Ames Laboratory is behind some of the best advanced materials out there. Some of those include: Lead-Free Solder, Photonic Band-Gap Crystals, Terfenol-D, Aluminum-Calcium Power Cable and Nano Particles. Some of these are in products we use every day.

  2. Video coding and transmission standards for 3D television — a survey

    NASA Astrophysics Data System (ADS)

    Buchowicz, A.

    2013-03-01

    The emerging 3D television systems require effective techniques for the transmission and storage of data representing a 3-D scene. Scene representations based on multiple video sequences or multiple views plus depth maps are especially important, since they can be processed with existing video technologies. A review of the relevant video coding and transmission techniques is presented in this paper.

  3. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    PubMed

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem as an integer linear program (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances overall system throughput compared to an existing algorithm. PMID:25276862
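
    The structure of this optimization can be illustrated with a tiny brute-force stand-in for the ILP: pick an MCS per SVC layer, respect the airtime budget and the rule that a user only benefits from a layer it can demodulate (along with all lower layers), and maximize total delivered bits. All numbers below are hypothetical; the paper solves the real problem with an ILP solver.

```python
from itertools import product

MCS_RATE = [1.0, 2.0, 4.0]       # bits/slot for MCS 0..2 (robust -> fragile)
USERS = [0, 0, 1, 2, 2]          # best MCS index each user can decode
LAYER_BITS = [8.0, 8.0]          # base layer, enhancement layer
SLOTS = 12.0                     # airtime budget for the video burst

def throughput(mcs):
    # A user receives layer l only if it can decode mcs[l] and every lower layer.
    total = 0.0
    for cap in USERS:
        for l, m in enumerate(mcs):
            if m > cap:
                break
            total += LAYER_BITS[l]
    return total

best = max(
    (m for m in product(range(len(MCS_RATE)), repeat=len(LAYER_BITS))
     if m[0] <= m[1]                                   # base layer most robust
     and sum(b / MCS_RATE[i] for b, i in zip(LAYER_BITS, m)) <= SLOTS),
    key=throughput,
)
```

Here the optimum protects the base layer with the most robust MCS (reaching all five users) and spends the remaining airtime on a faster MCS for the enhancement layer.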

  4. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    PubMed Central

    Lee, Chaewoo

    2014-01-01

    Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem as an integer linear program (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances overall system throughput compared to an existing algorithm. PMID:25276862

  5. Augmenting Advance Care Planning in Poor Prognosis Cancer with a Video Decision Aid: A Pre-Post Study

    PubMed Central

    Volandes, Angelo E.; Levin, Tomer T.; Slovin, Susan; Carvajal, Richard D.; O’Reilly, Eileen M.; Keohan, Mary Louise; Theodoulou, Maria; Dickler, Maura; Gerecitano, John F.; Morris, Michael; Epstein, Andrew S.; Naka-Blackstone, Anastazia; Walker-Corkery, Elizabeth S.; Chang, Yuchiao; Noy, Ariela

    2012-01-01

    Background We tested whether an educational video on the goals of care in advanced cancer (life-prolonging, basic or comfort care) can help patients understand these goals and impact preferences for resuscitation. Methods Survey of 80 advanced cancer patients before and after viewing the video. Outcomes included changes in goals-of-care preference and knowledge, and consistency of preferences with code status. Results Before viewing the video, 10 patients (13%) preferred life-prolonging care; 24 (30%) basic care; and 29 (36%) comfort care; 17 (21%) were unsure. Preferences did not change after the video: 9 (11%) chose life-prolonging care; 28 (35%) basic care; 29 (36%) comfort care; and, 14 (18%) were unsure (p=0.28). Compared to baseline, after the video presentation more patients did not want CPR (71 vs 61%, p=0.03) or ventilation (79 vs 67%, p=0.008). Knowledge about goals of care and likelihood of resuscitation increased post-video (p<.001). Of the patients who did not want CPR or ventilation after the video augmentation, only 4 (5%) had a documented DNR order in the medical record (kappa statistic −0.01; 95% CI −0.06 – 0.04). Acceptability of the video was high. Conclusion Patients with advanced cancer did not change care preferences after viewing the video, but fewer wanted CPR or ventilation. Documented code status was inconsistent with patient preferences. Patients were more knowledgeable after the video, found the video acceptable, and would recommend it to others. Video may enable visualization of “goals of care,” enriching patient understanding of worsening health states and better informing decision-making. PMID:22252775

  6. Spatio-temporal correlation-based fast coding unit depth decision for high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Zhou, Chengtao; Zhou, Fan; Chen, Yaowu

    2013-10-01

    The exhaustive block partition search process in high efficiency video coding (HEVC) imposes a very high computational complexity on the HEVC test model encoder (HM). A fast coding unit (CU) depth algorithm that uses the spatio-temporal correlation of depth information to speed up the search process is proposed. The depth of the coding tree unit (CTU) is first predicted using the depth information of the spatio-temporally neighboring CTUs. Then, the depth information of adjacent CUs is incorporated to skip some specific depths when encoding the sub-CTU. Compared with the original HM encoder, experimental results show that the proposed algorithm saves more than 20% encoding time on average for the intra-only, low-delay, low-delay P, and random access configurations with almost the same rate-distortion performance.
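
    A minimal sketch of the prediction step, under an assumed rule (search only [min-1, max+1] of the neighbors' depths, clipped to HEVC's 0..3 range); the paper's exact decision logic may differ.

```python
# Spatio-temporal CU depth prediction: derive the depth search range for
# the current CTU from the depths used by its already-coded neighbors
# (left, upper, upper-right, co-located temporal), so unlikely depths can
# be skipped instead of exhaustively evaluated.
def predicted_depth_range(neighbour_depths, d_min=0, d_max=3):
    if not neighbour_depths:              # e.g., first CTU of the sequence
        return d_min, d_max               # fall back to the full search
    lo = max(d_min, min(neighbour_depths) - 1)
    hi = min(d_max, max(neighbour_depths) + 1)
    return lo, hi

# Homogeneous neighborhood: depth 3 (the smallest CUs) is never evaluated.
r1 = predicted_depth_range([0, 1, 1, 0])
# No neighbors available: all depths 0..3 are searched.
r2 = predicted_depth_range([])
```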

  7. Theoretical study of use of optical orthogonal codes for compressed video transmission in optical code division multiple access (OCDMA) system

    NASA Astrophysics Data System (ADS)

    Ghosh, Shila; Chatterji, B. N.

    2007-09-01

    A theoretical investigation of the performance of optical code division multiple access (OCDMA) for compressed video transmission is presented. OCDMA has many advantages over a typical synchronous protocol such as time division multiple access (TDMA). Pulsed-laser transmission of multi-channel digital video can be performed using various techniques, depending on whether the multi-channel data are to be synchronous or asynchronous. A typical form of asynchronous digital operation is wavelength division multiplexing (WDM), in which the digital data of each video source are assigned a specific and separate wavelength. Sophisticated hardware, such as accurate wavelength control of all lasers and tunable narrow-band optical filters at the receivers, is required in this case. A major disadvantage of CDMA is the reduction in per-channel data rate (relative to the speeds available from the laser itself) caused by the insertion of code addressing. Hence, optical CDMA for video transmission is meaningful when individual channel video bit rates can be significantly reduced, which can be achieved by compression of the video data. In our work, the JPEG standard is implemented for compression of the video images, and a compression ratio of about 60% is obtained without noticeable image degradation. Compared to other existing techniques, the JPEG standard achieves a higher compression ratio with a high S/N ratio. We demonstrate the auto- and cross-correlation properties of the codes, and show the implementation of bipolar Walsh coding in an OCDMA system and its use in the transmission of image/video.
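
    The correlation properties the abstract refers to are easy to verify for bipolar Walsh codes: each code auto-correlates to the code length N, and distinct codes are orthogonal (zero cross-correlation), which is what separates users in a synchronous CDMA system. A self-contained sketch using the Sylvester-Hadamard construction (codes appear in Hadamard order, not sequency order):

```python
def walsh(n):
    """Bipolar Walsh codes of length 2**n via the Sylvester construction:
    H_{2m} = [[H_m, H_m], [H_m, -H_m]]."""
    h = [[1]]
    for _ in range(n):
        h = [row * 2 for row in h] + [row + [-x for x in row] for row in h]
    return h

codes = walsh(3)                     # eight bipolar codes of length 8
dot = lambda a, b: sum(x * y for x, y in zip(a, b))
```

At the receiver, correlating the superposed signal against a user's own code recovers that user's chip energy (N per bit) while every other user contributes exactly zero.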

  8. Design and implementation of H.264 based embedded video coding technology

    NASA Astrophysics Data System (ADS)

    Mao, Jian; Liu, Jinming; Zhang, Jiemin

    2016-03-01

    In this paper, an embedded system for remote online video monitoring was designed and developed to capture and record real-time circumstances in an elevator. To improve the efficiency of video acquisition and processing, the system uses the Samsung S5PV210 chip, which integrates a graphics processing unit, as the core processor, and the video is encoded in the H.264 format for efficient storage and transmission. Based on the S5PV210 chip, hardware video coding technology was investigated, which is more efficient than software coding. Running tests proved that hardware video coding can noticeably reduce system cost and produce smoother video display. It can be widely applied in security supervision [1].

  9. A unified framework of unsupervised subjective optimized bit allocation for multiple video object coding

    NASA Astrophysics Data System (ADS)

    Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi

    2005-10-01

    MPEG-4 treats a scene as a composition of several objects, or so-called video object planes (VOPs), that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects with different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties, and psycho-visual characteristics so that the bit budget can be distributed properly among video objects to improve the perceptual quality of the compressed video. This paper provides an automatic video object priority definition method based on an object-level visual attention model and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of each video object is obtained automatically instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with the traditional verification model bit allocation and the optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of the object with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.
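
    The downstream use of such priorities can be sketched as a simple proportional split of the frame budget, with a per-object floor that keeps low-priority objects decodable. The weights, floor fraction, and rounding rule here are illustrative assumptions, not the paper's optimization framework.

```python
# Priority-driven bit allocation across video objects: the frame budget is
# split in proportion to each object's (automatically derived) attention
# priority, subject to a minimum share per object.
def allocate_bits(priorities, budget, floor_frac=0.05):
    floor = int(budget * floor_frac)            # minimum bits per object
    spare = budget - floor * len(priorities)
    total = sum(priorities)
    bits = [floor + int(spare * p / total) for p in priorities]
    bits[0] += budget - sum(bits)               # hand rounding slack to obj 0
    return bits

# Three VOPs with attention priorities 0.6 / 0.3 / 0.1 and 10 kbit budget.
bits = allocate_bits([0.6, 0.3, 0.1], budget=10000)
```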

  10. Advanced Modulation and Coding Technology Conference

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The objectives, approach, and status of all current LeRC-sponsored industry contracts and university grants are presented. The following topics are covered: (1) the LeRC Space Communications Program, and Advanced Modulation and Coding Projects; (2) the status of four contracts for development of proof-of-concept modems; (3) modulation and coding work done under three university grants, two small business innovation research contracts, and two demonstration model hardware development contracts; and (4) technology needs and opportunities for future missions.

  11. Context-adaptive binary arithmetic coding with precise probability estimation and complexity scalability for high-efficiency video coding

    NASA Astrophysics Data System (ADS)

    Karwowski, Damian; Domański, Marek

    2016-01-01

    An improved context-based adaptive binary arithmetic coding (CABAC) is presented. The idea for the improvement is to use a more accurate mechanism for estimation of symbol probabilities in the standard CABAC algorithm. The authors' proposal of such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate saving compared to the original CABAC algorithm. The application of the proposed algorithm marginally affects the complexity of HEVC video encoder, but the complexity of video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives 5% to 7.5% reduction of the decoding time while still maintaining high efficiency in the data compression.
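
    The context-tree weighting technique mentioned above builds on the Krichevsky-Trofimov (KT) estimator: after observing c0 zeros and c1 ones in a context, the probability assigned to the next '1' is (c1 + 0.5) / (c0 + c1 + 1). The sketch below shows only this sequential estimator and the ideal arithmetic-coding cost it induces, not the full tree-weighting mechanism or the paper's integration into CABAC.

```python
import math

def kt_code_length(bits):
    """Ideal code length (in bits) when coding a binary sequence with the
    sequentially updated KT probability estimate."""
    c0 = c1 = 0
    length = 0.0
    for b in bits:
        p1 = (c1 + 0.5) / (c0 + c1 + 1)
        p = p1 if b == 1 else 1.0 - p1
        length += -math.log2(p)          # ideal arithmetic-coding cost
        if b == 1:
            c1 += 1
        else:
            c0 += 1
    return length

skewed = [0] * 15 + [1]                  # strongly biased source
balanced = [0, 1] * 8                    # maximum-entropy source
```

The estimator adapts toward the empirical bias, so the skewed sequence costs far fewer bits than the balanced one of the same length — exactly the kind of precise probability tracking that yields the bitrate savings reported above.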

  12. Comparison of Video Coding Methods to Pay Attention in Anchoring Effect

    NASA Astrophysics Data System (ADS)

    Imaizumi, Keisuke; Sugiura, Akihiko

    In this study, we propose using the anchoring effect that is one of the cognitive bias as a new approach of the encoding. And we suggest technique to apply to encoding. As a result of experiments, we found that displaying High-definition image in the early part of video effects look clear than original video. In addition we noticed that the anchoring effect appear remarkably in a low rate video coding. And if changes the video rate is smoothly, the anchoring effect is shown clearness in a high average rate video.

  13. Next Generation Advanced Video Guidance Sensor Development and Test

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Bryan, Thomas C.; Lee, Jimmy; Robertson, Bryan

    2009-01-01

    The Advanced Video Guidance Sensor (AVGS) was the primary docking sensor for the Orbital Express mission. The sensor performed extremely well during the mission, and the technology has been proven on orbit in other flights too. Parts obsolescence issues prevented the construction of more AVGS units, so the next generation of sensor was designed with current parts and updated to support future programs. The Next Generation Advanced Video Guidance Sensor (NGAVGS) has been tested as a breadboard, two different brassboard units, and a prototype. The testing revealed further improvements that could be made and demonstrated capability beyond that ever demonstrated by the sensor on orbit. This paper presents some of the sensor history, parts obsolescence issues, radiation concerns, and software improvements to the NGAVGS. In addition, some of the testing and test results are presented. The NGAVGS has shown that it will meet the general requirements for any space proximity operations or docking need.

  14. DCT/DST-based transform coding for intra prediction in image/video coding.

    PubMed

    Saxena, Ankur; Fernandes, Felix C

    2013-10-01

    In this paper, we present a DCT/DST-based transform scheme that applies either the conventional DCT or the type-7 DST for all the video-coding intra-prediction modes: vertical, horizontal, and oblique. Our approach is applicable to any block-based intra-prediction scheme in a codec that applies transforms separably along the horizontal and vertical directions. Previously, Han, Saxena, and Rose showed that for the intra-predicted residuals of the horizontal and vertical modes, the DST is the optimal transform, with performance close to the KLT. Here, we prove that this is indeed the case for the other oblique modes as well. The optimal choice between DCT and DST is based on the intra-prediction mode and requires no additional signaling information or rate-distortion search. The DCT/DST scheme presented in this paper was adopted in the HEVC standardization in March 2011. Further simplifications, especially to reduce implementation complexity, which remove the mode dependency between DCT and DST and simply always use the DST for 4 × 4 intra luma blocks, were adopted in the HEVC standard in July 2012. Simulation results for the DCT/DST algorithm were obtained with the reference software of the ongoing HEVC standardization. Our results show that the DCT/DST scheme provides significant BD-rate improvement over the conventional DCT-based scheme for intra prediction in video sequences. PMID:23744679
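
    For reference, the N-point type-7 DST basis can be written down directly; its lowest basis vector starts small at the predicted boundary, matching the tendency of intra-prediction error to grow with distance from the reference samples. A sketch of the floating-point matrix and a numerical orthogonality check (HEVC itself uses an integer approximation):

```python
import math

def dst7(n):
    """Orthonormal type-7 DST matrix: row j, column i is
    (2/sqrt(2n+1)) * sin(pi*(2i+1)*(j+1)/(2n+1))."""
    scale = 2.0 / math.sqrt(2 * n + 1)
    return [[scale * math.sin(math.pi * (2 * i + 1) * (j + 1) / (2 * n + 1))
             for i in range(n)] for j in range(n)]

def gram(a):
    """A @ A^T, to check that the rows form an orthonormal set."""
    return [[sum(x * y for x, y in zip(r, s)) for s in a] for r in a]

T = dst7(4)          # the 4-point DST used for 4x4 intra luma residuals
G = gram(T)
```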

  15. Introduction to study and simulation of low rate video coding schemes

    NASA Technical Reports Server (NTRS)

    1992-01-01

    During this period, simulators for the various HDTV systems proposed to the FCC were developed. These simulators will be tested using test sequences from the MPEG committee, and the results will be extrapolated to HDTV video sequences. Currently, the simulator for the compression aspects of the Advanced Digital Television (ADTV) system is complete, while other HDTV proposals are at various stages of development. A brief overview of the ADTV system is given, and some coding results obtained using the simulator are discussed and compared to those obtained using the CCITT H.261 standard. These results are evaluated in the context of the CCSDS specifications, and some suggestions are made as to how the ADTV system could be implemented in the NASA network.

  16. Semi-fixed-length motion vector coding for H.263-based low bit rate video compression.

    PubMed

    Côté, G; Gallant, M; Kossentini, F

    1999-01-01

    We present a semi-fixed-length motion vector coding method for H.263-based low bit rate video compression. The method exploits structural constraints within the motion field. The motion vectors are encoded using semi-fixed-length codes, yielding essentially the same levels of rate-distortion performance and subjective quality achieved by H.263's Huffman-based variable length codes in a noiseless environment. However, such codes provide substantially higher error resilience in a noisy environment. PMID:18267417
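
    The error-resilience argument can be made concrete with a toy semi-fixed-length code: a fixed-length class prefix followed by a fixed number of offset bits yields codewords of constant total length, so a bit error corrupts at most one motion vector instead of desynchronizing the whole variable-length stream. The 2-bit prefix and 3 offset bits below are illustrative, not the paper's code tables.

```python
# Toy semi-fixed-length code for a non-negative MV component in [0, 31]:
# 2-bit class prefix (selects one of four ranges of 8 values) followed by
# a fixed 3-bit offset within the range -> every codeword is 5 bits.
def encode_mv_component(v, offset_bits=3):
    cls, off = divmod(v, 1 << offset_bits)
    return format(cls, "02b") + format(off, f"0{offset_bits}b")

def decode_mv_component(bits, offset_bits=3):
    return (int(bits[:2], 2) << offset_bits) | int(bits[2:], 2)

word = encode_mv_component(13)           # class 1, offset 5
```

Because every codeword has the same length, a decoder always knows where the next MV starts, even after a channel error.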

  17. Tradeoff between picture resolution and quantization precision in video coding for embedded systems

    NASA Astrophysics Data System (ADS)

    Yuan, Yu; Feng, David; Zhong, Yuzhuo

    2004-01-01

    In embedded multimedia applications, improving video quality under bandwidth and storage constraints is an important problem. In this paper, we discuss the relationship among picture resolution, quantization precision, and subjective quality in video coding for embedded systems, and we propose a principle for trading off picture resolution against quantization precision. Video coding based on this tradeoff principle can achieve higher subjective quality at low bitrates and significantly reduce the burden on decoders. Experimental results on both MPEG-2 and H.264 codecs show that the tradeoff principle is valuable and feasible for embedded systems.

  18. Variable disparity-motion estimation based fast three-view video coding

    NASA Astrophysics Data System (ADS)

    Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo

    2009-02-01

    In this paper, variable disparity-motion estimation (VDME) based 3-view video coding is proposed. In the encoder, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are performed for effective and fast three-view video encoding. The proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of disparity-estimation accuracy and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm's PSNRs are 37.66 and 40.55 dB, with processing times of 0.139 and 0.124 sec/frame, respectively.

  19. Error-resilient multiple description video coding for wireless transmission over multiple iridium channels

    NASA Astrophysics Data System (ADS)

    Tyldesley, Katherine S.; Abousleman, Glen P.; Karam, Lina J.

    2003-08-01

    This paper presents an error-resilient wavelet-based multiple description video coding scheme for the transmission of video over wireless channels. The proposed video coding scheme has been implemented and successfully tested over the wireless Iridium satellite communication network. As a test bed for the developed codec, we also present an inverse multiplexing unit that combines several Iridium channels simultaneously to form an effective higher-rate channel, where the total bandwidth is directly proportional to the number of channels combined. The developed unit can be integrated into a variety of systems, such as ISR sensors, aircraft, vehicles, ships, and end-user terminals (EUTs), or can operate as a standalone device. Combining the multi-channel unit with our proposed multi-channel video codec facilitates global and on-the-move video communications without reliance on any terrestrial or airborne infrastructure whatsoever.
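
    The inverse-multiplexing idea reduces, at its simplest, to striping one bitstream round-robin across several low-rate channels and reassembling it at the receiver, so the aggregate rate grows with the channel count. This sketch omits all real-world framing, synchronization, and loss handling that an actual Iridium unit would need.

```python
def split(data: bytes, n_channels: int):
    """Stripe a bitstream round-robin across n_channels sub-channels."""
    return [data[i::n_channels] for i in range(n_channels)]

def combine(channels):
    """Reassemble the original byte order from the striped sub-channels."""
    out = bytearray()
    for i in range(max(len(c) for c in channels)):
        for c in channels:
            if i < len(c):
                out.extend(c[i:i + 1])
    return bytes(out)

payload = bytes(range(10))
chans = split(payload, 3)        # three "Iridium channels"
```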

  20. Low-Complexity Saliency Detection Algorithm for Fast Perceptual Video Coding

    PubMed Central

    Liu, Pengyu; Jia, Kebin

    2013-01-01

    A low-complexity saliency detection algorithm for perceptual video coding is proposed in which low-level encoding information is adopted as the basis of the visual perception analysis. First, the algorithm employs motion vectors (MVs) to extract the temporal saliency region through fast MV noise filtering and a translational-MV checking procedure. Second, the spatial saliency region is detected based on the optimal prediction mode distributions in I-frames and P-frames. The spatial and temporal saliency detection results are then combined to define the video region of interest (VROI). Simulation results validate that, compared with other existing algorithms, the proposed algorithm avoids a large amount of computation in the visual perception analysis and achieves fast saliency detection with better performance for videos. It can be used as part of a video standard codec at medium-to-low bit rates or combined with other algorithms in fast video coding. PMID:24489495
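
    A toy version of the temporal-saliency step: estimate the global (translational/camera) motion as the median MV, then flag blocks whose MV clearly deviates from it after discarding tiny noise vectors. The thresholds are illustrative stand-ins, not the paper's filtering procedure.

```python
def temporal_saliency(mvs, noise_thr=1, dev_thr=2):
    """mvs: one (x, y) motion vector per block. Returns indices of blocks
    whose motion deviates from the estimated global translation."""
    xs = sorted(mv[0] for mv in mvs)
    ys = sorted(mv[1] for mv in mvs)
    gx, gy = xs[len(xs) // 2], ys[len(ys) // 2]   # median ~ translational MV
    salient = []
    for i, (x, y) in enumerate(mvs):
        dev = abs(x - gx) + abs(y - gy)           # deviation from global motion
        if dev > dev_thr and abs(x) + abs(y) > noise_thr:
            salient.append(i)                     # genuinely moving block
    return salient

# Mostly static scene, one moving block (index 7), one jittery block (8).
field = [(0, 0)] * 7 + [(6, 4)] + [(1, 0)]
result = temporal_saliency(field)
```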

  1. Instantly decodable network coding for real-time scalable video broadcast over wireless networks

    NASA Astrophysics Data System (ADS)

    Karim, Mohammad S.; Sadeghi, Parastoo; Sorour, Sameh; Aboutorab, Neda

    2016-01-01

    In this paper, we study real-time scalable video broadcast over wireless networks using instantly decodable network coding (IDNC). Such real-time scalable videos have hard deadlines and impose a decoding order on the video layers. We first derive the upper bound on the probability that the individual completion times of all receivers meet the deadline. Using this probability, we design two prioritized IDNC algorithms, namely the expanding window IDNC (EW-IDNC) algorithm and the non-overlapping window IDNC (NOW-IDNC) algorithm. These algorithms provide a high level of protection to the most important video layer, the base layer, before considering the additional video layers, the enhancement layers, in coding decisions. Moreover, in these algorithms, we select an appropriate packet combination over a given number of video layers so that these video layers are decoded by the maximum number of receivers before the deadline. We formulate this packet selection problem as a two-stage maximal clique selection problem over an IDNC graph. Simulation results over a real scalable video sequence show that our proposed EW-IDNC and NOW-IDNC algorithms improve the received video quality compared to existing IDNC algorithms.
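
    The defining IDNC constraint is easy to state in code: an XOR combination of packets is instantly decodable for a receiver only if it contains at most one packet that receiver is missing, and for scalable video that packet must also respect the layer decoding order. This sketch shows only the per-receiver feasibility check, not the clique-selection optimization; the packet/layer mapping is a made-up example.

```python
def instantly_decodable(combo, has, layer_of, decoded_layers):
    """combo: set of packet ids XORed into one transmission; has: packets
    the receiver already holds; layer_of: packet id -> video layer;
    decoded_layers: index of the highest fully decoded layer (-1 if none,
    base layer = 0 must decode first)."""
    missing = [p for p in combo if p not in has]
    if len(missing) > 1:
        return False                 # receiver cannot extract anything new
    if not missing:
        return True                  # trivially decodable (nothing new)
    # The single new packet must belong to the next decodable layer or lower.
    return layer_of[missing[0]] <= decoded_layers + 1

layer_of = {1: 0, 2: 0, 3: 1, 4: 2}  # toy mapping: packets per video layer
```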

  2. Sliding-window raptor codes for efficient scalable wireless video broadcasting with unequal loss protection.

    PubMed

    Cataldi, Pasquale; Grangetto, Marco; Tillo, Tammam; Magli, Enrico; Olmo, Gabriella

    2010-06-01

    Digital fountain codes have emerged as a low-complexity alternative to Reed-Solomon codes for erasure correction. The applications of these codes are relevant especially in the field of wireless video, where low encoding and decoding complexity is crucial. In this paper, we introduce a new class of digital fountain codes based on a sliding-window approach applied to Raptor codes. These codes have several properties useful for video applications, and provide better performance than classical digital fountains. Then, we propose an application of sliding-window Raptor codes to wireless video broadcasting using scalable video coding. The rates of the base and enhancement layers, as well as the number of coded packets generated for each layer, are optimized so as to yield the best possible expected quality at the receiver side, while providing unequal loss protection to the different layers according to their importance. The proposed system has been validated in a UMTS broadcast scenario, showing that it improves the end-to-end quality and is robust to fluctuations in the packet loss rate. PMID:20215084
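
    The digital-fountain principle underlying Raptor codes is that each coded symbol is the XOR of a subset of source packets, and the decoder "peels": any received symbol covering exactly one unknown packet reveals it, which may unlock further symbols. The sketch below shows only this peeling decoder on hand-picked symbols; it is illustrative of LT/fountain decoding in general, not the paper's sliding-window Raptor construction.

```python
# Peeling decoder for an XOR-based fountain code. Each received symbol is
# (set of source indices, XOR of those source packets).

def decode(coded, n_source):
    recovered = {}
    coded = [(set(s), v) for s, v in coded]
    progress = True
    while progress and len(recovered) < n_source:
        progress = False
        for idxs, val in coded:
            live = idxs - recovered.keys()
            if len(live) != 1:          # not (yet) a degree-1 symbol
                continue
            for i in idxs & recovered.keys():
                val ^= recovered[i]     # strip already-known packets
            recovered[live.pop()] = val
            progress = True
    return [recovered.get(i) for i in range(n_source)]

def xor_of(source, idxs):
    v = 0
    for i in idxs:
        v ^= source[i]
    return v

source = [0x11, 0x22, 0x33, 0x44]
received = [(s, xor_of(source, s)) for s in ({0}, {0, 1}, {1, 2}, {1, 2, 3})]
print(decode(received, 4))   # -> [17, 34, 51, 68]
```

In a real fountain code the index sets are drawn from a degree distribution and slightly more symbols than source packets are sent; the sliding-window variant restricts each symbol's index set to a window of the source stream.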

  3. A low complexity prioritized bit-plane coding for SNR scalability in MPEG-21 scalable video coding

    NASA Astrophysics Data System (ADS)

    Peng, Wen-Hsiao; Chiang, Tihao; Hang, Hsueh-Ming

    2005-07-01

    In this paper, we propose a low complexity prioritized bit-plane coding scheme to improve the rate-distortion performance of cyclical block coding in MPEG-21 scalable video coding. Specifically, we use a block priority assignment algorithm to first transmit the symbols and blocks with potentially better rate-distortion performance. Different blocks are allowed to be coded unequally in a coding cycle. To avoid transmitting priority overhead, the encoder and the decoder refer to the same context to assign priority. Furthermore, to reduce the complexity, the priority assignment is done by a look-up-table and the coding of each block is controlled by a simple threshold comparison mechanism. Experimental results show that our prioritized bit-plane coding scheme can offer up to 0.5 dB PSNR improvement over the cyclical block coding described in the joint scalable verification model (JSVM).
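
    Bit-plane coding, the substrate of the scheme above, sends coefficients plane by plane from the most significant bit down, so truncating the stream anywhere still yields a usable approximation (SNR scalability). The sketch below shows only this decomposition and partial reconstruction; the priority assignment itself is omitted, and the 4-bit coefficients are made up.

```python
# Bit-plane decomposition: MSB plane first, so a prefix of the planes
# gives a coarse-to-fine approximation of the coefficients.

def bit_planes(coeffs, n_bits):
    """Return planes MSB-first; each plane lists that bit for every coeff."""
    return [[(c >> b) & 1 for c in coeffs] for b in range(n_bits - 1, -1, -1)]

def reconstruct(planes, n_bits):
    """Rebuild coefficients from however many planes were received."""
    coeffs = [0] * len(planes[0])
    for i, plane in enumerate(planes):
        shift = n_bits - 1 - i
        coeffs = [c | (bit << shift) for c, bit in zip(coeffs, plane)]
    return coeffs

coeffs = [13, 6, 9, 2]                     # 4-bit coefficients
planes = bit_planes(coeffs, 4)
print(reconstruct(planes[:2], 4))          # 2 planes received -> [12, 4, 8, 0]
print(reconstruct(planes, 4))              # all planes       -> [13, 6, 9, 2]
```

A prioritized scheme reorders which blocks contribute bits within each plane's coding cycle, so the bits most likely to reduce distortion arrive first.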

  4. A new coding technique of digital hologram video based on view-point MCTF

    NASA Astrophysics Data System (ADS)

    Seo, Young-Ho; Choi, Hyun-Jun; Yoo, Ji-Sang; Kim, Dong-Wook

    2006-10-01

    In this paper, we propose a new coding technique for digital hologram video using a 3D scanning method and video compression techniques. The proposed coding process consists of: capturing a digital hologram and separating it into RGB color space components; localization by segmenting the fringe pattern; frequency transform using an M×N (segment size) 2D DCT (2-Dimensional Discrete Cosine Transform) to extract redundancy; 3D scanning of the segments to form a video sequence; motion compensated temporal filtering (MCTF); and a modified video coding stage based on H.264/AVC. The compressed digital hologram was reconstructed by both a computer program and an optical system. The proposed algorithm showed better reconstruction quality at higher compression ratios than previous approaches.
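
    The segment-wise 2D DCT step from the pipeline above can be written out directly. This is a textbook O(N^4) DCT-II for an N×N segment, useful only to show the transform itself; real codecs use fast separable implementations, and the segment data here is illustrative.

```python
import math

# Direct 2-D DCT-II of an n x n block, as applied per fringe-pattern segment.

def dct2(block):
    n = len(block)
    def c(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for y in range(n):
                for x in range(n):
                    s += (block[y][x]
                          * math.cos((2 * y + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * x + 1) * v * math.pi / (2 * n)))
            out[u][v] = c(u) * c(v) * s
    return out

# A constant segment compacts into the single DC coefficient:
flat = [[8.0] * 4 for _ in range(4)]
coeffs = dct2(flat)
print(round(coeffs[0][0], 6))   # -> 32.0  (all other coefficients ~ 0)
```

This energy compaction is what makes the subsequent MCTF/H.264-style coding of the scanned segment sequence effective.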

  5. Wyner-Ziv video coding based on a new hierarchical block matching algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Rong Ke; Zhao, Hong Bo; Yue, Zhi

    2008-02-01

    Distributed video coding (DVC) is a new video coding paradigm that shifts the complexity from the encoder side to the decoder side. One particular case of DVC, the Wyner-Ziv coding scheme, encodes each video frame separately and decodes the video sequence jointly with side information. This paper presents a new Wyner-Ziv video coding scheme based on a hierarchical block matching algorithm (HBMA). In this proposed scheme, the side information is greatly refined to assist the reconstruction of the Wyner-Ziv frames. Bidirectional and forward motion estimation are combined to generate the interpolated frame from temporally adjacent key frames and so attain high-fidelity side information. During the bidirectional motion estimation, the size of the block and the search area vary at different levels of the hierarchy. In addition, the motion vectors are inherited from big blocks to small blocks by choosing the smallest mean-of-the-absolute-difference value among neighboring blocks. Preliminary experimental results show that the proposed scheme can achieve better rate-distortion performance by 0.5-1 dB compared to existing Wyner-Ziv video coding, with only a slight increase in decoding complexity.
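
    The coarse-to-fine idea behind hierarchical block matching can be sketched in two levels: a full search at half resolution seeds a small refinement search at full resolution, with the motion vector inherited (doubled) from the coarser level. This is a generic HBMA sketch under made-up frames and SAD cost, not the paper's bidirectional interpolation procedure.

```python
# Two-level hierarchical block matching with sum-of-absolute-differences.

def sad(a, b, ay, ax, by, bx, n):
    return sum(abs(a[ay + dy][ax + dx] - b[by + dy][bx + dx])
               for dy in range(n) for dx in range(n))

def full_search(cur, ref, by, bx, n, rng):
    best, best_mv = None, (0, 0)
    h, w = len(ref), len(ref[0])
    for my in range(-rng, rng + 1):
        for mx in range(-rng, rng + 1):
            ry, rx = by + my, bx + mx
            if 0 <= ry and ry + n <= h and 0 <= rx and rx + n <= w:
                cost = sad(cur, ref, by, bx, ry, rx, n)
                if best is None or cost < best:
                    best, best_mv = cost, (my, mx)
    return best_mv

def downsample(img):
    return [[img[y][x] for x in range(0, len(img[0]), 2)]
            for y in range(0, len(img), 2)]

def hbma(cur, ref, by, bx, n):
    # level 1: coarse search with half-size blocks on half-resolution frames
    mvy, mvx = full_search(downsample(cur), downsample(ref),
                           by // 2, bx // 2, n // 2, 2)
    # level 0: inherit the doubled MV, refine with a +-1 search around it
    best, best_mv = None, (2 * mvy, 2 * mvx)
    h, w = len(ref), len(ref[0])
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            my, mx = 2 * mvy + dy, 2 * mvx + dx
            ry, rx = by + my, bx + mx
            if 0 <= ry and ry + n <= h and 0 <= rx and rx + n <= w:
                cost = sad(cur, ref, by, bx, ry, rx, n)
                if best is None or cost < best:
                    best, best_mv = cost, (my, mx)
    return best_mv

# A bright 4x4 patch shifted by (-2, -2) between frames is tracked exactly:
ref = [[100 if 4 <= y < 8 and 6 <= x < 10 else 10 for x in range(16)] for y in range(16)]
cur = [[100 if 6 <= y < 10 and 8 <= x < 12 else 10 for x in range(16)] for y in range(16)]
print(hbma(cur, ref, 6, 8, 4))   # -> (-2, -2)
```

The payoff is complexity: the coarse level covers a wide range cheaply, while the fine level only touches a 3×3 neighborhood.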

  6. Real-time transmission of digital video using variable-length coding

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.

    1993-01-01

    Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
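
    The core property the abstract relies on — frequent symbols get short codewords, rare ones long — falls out of the standard Huffman construction. This is a minimal code builder over a made-up differential-video alphabet (zero differences dominate), not the flight hardware's actual tables.

```python
import heapq

# Build a Huffman code by repeatedly merging the two lightest subtrees.
# Heap entries are (weight, tiebreak, {symbol: codeword-so-far}).

def huffman_code(freqs):
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, count, merged))
        count += 1
    return heap[0][2]

# Differential video data: a zero difference is by far the most likely level.
freqs = {0: 80, 1: 10, -1: 7, 2: 3}
code = huffman_code(freqs)
assert len(code[0]) < len(code[2])   # most likely symbol -> shortest codeword
avg = sum(freqs[s] * len(code[s]) for s in freqs) / sum(freqs.values())
print(avg)                           # -> 1.3 bits/symbol vs 2.0 for fixed-length
```

The gap between the average codeword length and the fixed-length cost is exactly the compression the abstract describes; the variable output rate is what then forces the buffering and rate-smoothing machinery discussed in the paper.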

  7. VTLOGANL: A Computer Program for Coding and Analyzing Data Gathered on Video Tape.

    ERIC Educational Resources Information Center

    Hecht, Jeffrey B.; And Others

    To code and analyze research data on videotape, a methodology is needed that allows the researcher to code directly and then analyze the observed degree of intensity of the observed events. The establishment of such a methodology is the next logical step in the development of the use of video recorded data in research. The Technological…

  8. Real-time transmission of digital video using variable-length coding

    NASA Astrophysics Data System (ADS)

    Bizon, Thomas P.; Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.

    1993-03-01

    Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.

  9. Spatial resampling of IDR frames for low bitrate video coding with HEVC

    NASA Astrophysics Data System (ADS)

    Hosking, Brett; Agrafiotis, Dimitris; Bull, David; Easton, Nick

    2015-03-01

    As the demand for higher quality and higher resolution video increases, many applications fail to meet this demand due to low bandwidth restrictions. One factor contributing to this problem is the high bitrate requirement of the intra-coded Instantaneous Decoding Refresh (IDR) frames featuring in all video coding standards. Frequent coding of IDR frames is essential for error resilience in order to prevent error propagation. However, as each one consumes a huge portion of the available bitrate, the quality of subsequently coded frames is hindered by high levels of compression. This work presents a new technique, known as Spatial Resampling of IDR Frames (SRIF), and shows how it can improve rate-distortion performance by providing a higher and more consistent level of video quality at low bitrates.

  10. Experience with advanced nodal codes at YAEC

    SciTech Connect

    Cacciapouti, R.J.

    1990-01-01

    Yankee Atomic Electric Company (YAEC) has been performing reload licensing analysis since 1969. The basic pressurized water reactor (PWR) methodology involves the use of LEOPARD for cross-section generation, PDQ for radial power distributions and integral control rod worth, and SIMULATE for axial power distributions and differential control rod worth. In 1980, YAEC began performing reload licensing analysis for the Vermont Yankee boiling water reactor (BWR). The basic BWR methodology involves the use of CASMO for cross-section generation and SIMULATE for three-dimensional power distributions. In 1986, YAEC began investigating the use of CASMO-3 for cross-section generation and the advanced nodal code SIMULATE-3 for power distribution analysis. Based on the evaluation, the CASMO-3/SIMULATE-3 methodology satisfied all requirements. After careful consideration, the cost of implementing the new methodology is expected to be offset by reduced computing costs, improved engineering productivity, and fuel-cycle performance gains.

  11. Recent advances in uniportal video-assisted thoracoscopic surgery.

    PubMed

    Gonzalez-Rivas, Diego

    2015-02-01

    Thanks to the recent improvements in video-assisted thoracoscopic techniques (VATS) and anesthetic procedures, many complex lung resections can be performed while avoiding open surgery. The experience gained through VATS techniques, enhancement of the surgical instruments, improvement of high definition cameras and avoidance of intubated general anesthesia have been the greatest advances to minimize the trauma to the patient. Uniportal VATS for major resections has become a revolution in the treatment of lung pathologies since it was initially described 4 years ago. The huge number of surgical videos posted on specialized websites, live surgery events and experimental courses has contributed to the rapid learning of uniportal major thoracoscopic surgery during the last years. The future of thoracic surgery is based on the evolution of surgical procedures and anesthetic techniques to try to reduce the trauma to the patient. Further development of new technologies will probably focus on sealing devices for all vessels and fissures, refined staplers and instruments, improvements in 3D systems or wireless cameras, and robotic surgery. As thoracoscopic techniques continue to evolve exponentially, we can see the emergence of new approaches in the anesthetic and perioperative management of these patients. Advances in anesthesia include lobectomies performed without general anesthesia, by maintaining spontaneous ventilation, with minimally sedated patients. Uniportal VATS resections under spontaneous ventilation probably represent the least invasive approach for operating on lung cancer. PMID:25717231

  12. Recent advances in uniportal video-assisted thoracoscopic surgery

    PubMed Central

    2015-01-01

    Thanks to the recent improvements in video-assisted thoracoscopic techniques (VATS) and anesthetic procedures, many complex lung resections can be performed while avoiding open surgery. The experience gained through VATS techniques, enhancement of the surgical instruments, improvement of high definition cameras and avoidance of intubated general anesthesia have been the greatest advances to minimize the trauma to the patient. Uniportal VATS for major resections has become a revolution in the treatment of lung pathologies since it was initially described 4 years ago. The huge number of surgical videos posted on specialized websites, live surgery events and experimental courses has contributed to the rapid learning of uniportal major thoracoscopic surgery during the last years. The future of thoracic surgery is based on the evolution of surgical procedures and anesthetic techniques to try to reduce the trauma to the patient. Further development of new technologies will probably focus on sealing devices for all vessels and fissures, refined staplers and instruments, improvements in 3D systems or wireless cameras, and robotic surgery. As thoracoscopic techniques continue to evolve exponentially, we can see the emergence of new approaches in the anesthetic and perioperative management of these patients. Advances in anesthesia include lobectomies performed without general anesthesia, by maintaining spontaneous ventilation, with minimally sedated patients. Uniportal VATS resections under spontaneous ventilation probably represent the least invasive approach for operating on lung cancer. PMID:25717231

  13. Rendering-oriented multiview video coding based on chrominance information reconstruction

    NASA Astrophysics Data System (ADS)

    Shao, Feng; Yu, Mei; Jiang, Gangyi; Zhang, Zhaoyang

    2010-05-01

    Three-dimensional (3-D) video systems are expected to be a next-generation visual application. Since multiview video for 3-D video systems is composed of color and associated depth information, its huge requirement for data storage and transmission is an important problem. We propose a rendering-oriented multiview video coding (MVC) method based on chrominance information reconstruction that incorporates the rendering technique into the MVC process. The proposed method discards certain chrominance information to reduce bitrates, and performs reasonable bitrate allocation between color and depth videos. At the decoder, a chrominance reconstruction algorithm is presented to achieve accurate reconstruction by warping the neighboring views and colorizing the luminance-only pixels. Experimental results show that the proposed method can save nearly 20% in bitrate compared with coding that retains the chrominance information. Moreover, under a fixed bitrate budget, the proposed method can greatly improve the rendering quality.

  14. Correlation channel modeling for practical Slepian-Wolf distributed video compression system using irregular LDPC codes

    NASA Astrophysics Data System (ADS)

    Li, Li; Hu, Xiao; Zeng, Rui

    2007-11-01

    The development of practical distributed video coding schemes builds on the information-theoretic bounds established in the 1970s by Slepian and Wolf for distributed lossless coding, and by Wyner and Ziv for lossy coding with decoder side information. In distributed video compression applications, it is hard to accurately describe the non-stationary behavior of the virtual correlation channel between X and the side information Y, although it plays a very important role in overall system performance. In this paper, we implement a practical asymmetric Slepian-Wolf distributed video compression system using irregular LDPC codes. Moreover, by exploiting the dependencies of previously decoded bit planes from the video frame X and the side information Y, we present improved schemes that partition the bit planes into regions of different reliability. Our simulation results show that the improved schemes exploiting the dependencies between previously decoded bit planes achieve better overall encoding rate performance as the BER approaches zero. We also show that, compared with the BSC model, the BC channel model is more suitable for the distributed video compression scenario because of the non-stationary properties of the virtual correlation channel, and that adaptively estimating channel model parameters from previously decoded adjacent bit planes provides more accurate initial belief messages from the channel at the LDPC decoder.

  15. Error-resilient video coding performance analysis of motion JPEG2000 and MPEG-4

    NASA Astrophysics Data System (ADS)

    Dufaux, Frederic; Ebrahimi, Touradj

    2004-01-01

    The new Motion JPEG 2000 standard provides several compelling features. It is based on intra-frame wavelet coding, which makes it very well suited for wireless applications. Indeed, the state-of-the-art wavelet coding scheme achieves very high coding efficiency. In addition, Motion JPEG 2000 is very resilient to transmission errors, as frames are coded independently (intra coding). Furthermore, it requires low complexity and introduces minimal coding delay. Finally, it supports very efficient scalability. In this paper, we analyze the performance of Motion JPEG 2000 under error-prone transmission. We compare it to the well-known MPEG-4 video coding scheme in terms of coding efficiency, error resilience and complexity. We present experimental results which show that Motion JPEG 2000 outperforms MPEG-4 in the presence of transmission errors.

  16. Simulation and ground testing with the Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Johnston, Albert S.; Bryan, Thomas C.; Book, Michael L.

    2005-01-01

    The Advanced Video Guidance Sensor (AVGS), an active sensor system that provides near-range 6-degree-of-freedom sensor data, has been developed as part of an automatic rendezvous and docking system for the Demonstration of Autonomous Rendezvous Technology (DART). The sensor determines the relative positions and attitudes between the active sensor and the passive target at ranges up to 300 meters. The AVGS uses laser diodes to illuminate retro-reflectors in the target, a solid-state imager to detect the light returned from the target, and image capture electronics and a digital signal processor to convert the video information into the relative positions and attitudes. The development of the sensor, through initial prototypes, final prototypes, and three flight units, has required a great deal of testing at every phase, and the different types of testing, their effectiveness, and their results, are presented in this paper, focusing on the testing of the flight units. Testing has improved the sensor's performance.

  17. Advances in Video-Assisted Thoracic Surgery, Thoracoscopy.

    PubMed

    Case, Joseph Brad

    2016-01-01

    Video-assisted thoracic surgery (VATS) is an evolving modality in the treatment and management of a variety of pathologies affecting dogs and cats. Representative disease processes include pericardial effusion, pericardial neoplasia, cranial mediastinal neoplasia, vascular ring anomaly, pulmonary neoplasia, pulmonary blebs and bullae, spontaneous pneumothorax, and chylothorax. Several descriptive and small case reports have been published on the use of VATS in veterinary medicine. More recently, larger case series and experimental studies have revealed potential benefits and limitations not documented previously. Significant technological advances over the past 5 years have made possible a host of new applications in VATS. This article focuses on updates and cutting-edge applications in VATS. PMID:26410560

  18. Advanced hyperspectral video imaging system using Amici prism.

    PubMed

    Feng, Jiao; Fang, Xiaojing; Cao, Xun; Ma, Chenguang; Dai, Qionghai; Zhu, Hongbo; Wang, Yongjin

    2014-08-11

    In this paper, we propose an advanced hyperspectral video imaging system (AHVIS), which consists of an objective lens, an occlusion mask, a relay lens, an Amici prism and two cameras. An RGB camera is used for spatial reading and a grayscale camera is used for measuring the scene with spectral information. The objective lens collects more light energy from the observed scene and images the scene on an occlusion mask, which subsamples the image of the observed scene. Then, the subsampled image is sent to the grayscale camera through the relay lens and the Amici prism. The Amici prism, which is used to realize spectral dispersion along the optical path, reduces optical distortions and offers a direct view of the scene. The main advantages of the proposed system are improved light throughput and less optical distortion. Furthermore, the presented configuration is more compact, robust and practicable. PMID:25321019

  19. Just noticeable disparity error-based depth coding for three-dimensional video

    NASA Astrophysics Data System (ADS)

    Luo, Lei; Tian, Xiang; Chen, Yaowu

    2014-07-01

    A just noticeable disparity error (JNDE) measurement to describe the maximum tolerated error of depth maps is proposed. Any error of depth value inside the JNDE range would not cause a noticeable distortion observed by human eyes. The JNDE values are used to preprocess the original depth map in the prediction process during the depth coding and to adjust the prediction residues for further improvement of the coding quality. The proposed scheme can be incorporated in any standardized video coding algorithm based on prediction and transform. The experimental results show that the proposed method can achieve a 34% bit rate saving for depth video coding. Moreover, the perceptual quality of the synthesized view is also improved by the proposed method.
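
    The JNDE idea for depth coding can be sketched as residue preprocessing: if the prediction already lies within the just-noticeable tolerance of the original depth value, a zero residue is sent; otherwise the residue is shrunk by the tolerance. The per-pixel threshold here is a made-up constant, whereas the paper derives JNDE values from the disparity model.

```python
# Residue adjustment under a just-noticeable-disparity-error (JNDE) bound:
# any reconstruction error within +-jnd is imperceptible, so residues are
# zeroed or shrunk accordingly before coding.

def jnde_residue(orig, pred, jnd):
    diff = orig - pred
    if abs(diff) <= jnd:
        return 0                               # imperceptible: send nothing
    return diff - jnd if diff > 0 else diff + jnd

# depth values, predictions, and a (hypothetical) per-pixel JNDE of 2 levels
origs = [128, 131, 120]
preds = [127, 125, 120]
residues = [jnde_residue(o, p, 2) for o, p in zip(origs, preds)]
print(residues)   # -> [0, 4, 0]
```

Every reconstructed value (prediction plus residue) then stays within the JNDE of the original, which is where the bit-rate saving comes from without visible distortion in the synthesized view.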

  20. Investigation of perception-oriented coding techniques for video compression based on large block structures

    NASA Astrophysics Data System (ADS)

    Kaprykowsky, Hagen; Doshkov, Dimitar; Hoffmann, Christoph; Ndjiki-Nya, Patrick; Wiegand, Thomas

    2011-09-01

    Recent investigations have shown that one of the most beneficial elements for higher compression performance in highresolution video is the incorporation of larger block structures. In this work, we will address the question of how to incorporate perceptual aspects into new video coding schemes based on large block structures. This is rooted in the fact that especially high frequency regions such as textures yield high coding costs when using classical prediction modes as well as encoder control based on the mean squared error. To overcome this problem, we will investigate the incorporation of novel intra predictors based on image completion methods. Furthermore, the integration of a perceptualbased encoder control using the well-known structural similarity index will be analyzed. A major aspect of this article is the evaluation of the coding results in a quantitative (i.e. statistical analysis of changes in mode decisions) as well as qualitative (i.e. coding efficiency) manner.

  1. Investigating the structure preserving encryption of high efficiency video coding (HEVC)

    NASA Astrophysics Data System (ADS)

    Shahid, Zafar; Puech, William

    2013-02-01

    This paper presents a novel method for the real-time protection of the emerging High Efficiency Video Coding (HEVC) standard. Structure-preserving selective encryption is performed in the CABAC entropy coding module of HEVC, which is significantly different from the CABAC entropy coding of H.264/AVC. In the CABAC of HEVC, exponential Golomb coding is replaced by truncated Rice (TR) coding up to a specific value for the binarization of transform coefficients. Selective encryption is performed using the AES cipher in cipher feedback mode on a plaintext of binstrings in a context-aware manner. The encrypted bitstream has exactly the same bit-rate and is format compliant. Experimental evaluation and security analysis of the proposed algorithm are performed on several benchmark video sequences containing different combinations of motion, texture and objects.
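
    What makes the encryption "structure preserving" is that only selected suffix bits of each binarized symbol are XORed with a keystream, so every symbol keeps its bin length and the bit-rate is unchanged. The toy below illustrates that property only: the SHA-256 counter keystream stands in for the paper's AES-CFB, and the (prefix, suffix) pairs are a simplified stand-in for real TR binarization.

```python
import hashlib

# Keystream from a SHA-256 counter construction (a stand-in for AES-CFB,
# used here so the sketch needs only the standard library).
def keystream_bits(key, n):
    bits, counter = [], 0
    while len(bits) < n:
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        for byte in block:
            bits.extend((byte >> i) & 1 for i in range(8))
        counter += 1
    return bits[:n]

def encrypt_suffixes(symbols, key):
    """symbols: list of (prefix_bits, suffix_bits); only suffixes are ciphered,
    so the structure (and total length) of the bitstream is preserved."""
    total = sum(len(s) for _, s in symbols)
    ks = keystream_bits(key, total)
    out, pos = [], 0
    for prefix, suffix in symbols:
        enc = [b ^ ks[pos + i] for i, b in enumerate(suffix)]
        pos += len(suffix)
        out.append((prefix, enc))
    return out

symbols = [([1, 1, 0], [1, 0]), ([1, 0], [0, 1, 1])]   # (prefix, suffix) pairs
enc = encrypt_suffixes(symbols, b"secret")
dec = encrypt_suffixes(enc, b"secret")       # XOR stream is its own inverse
assert dec == symbols
assert all(len(e) == len(s) for (_, s), (_, e) in zip(symbols, enc))
print("bit-rate preserved, decryption OK")
```

Because prefixes (which carry the decodable structure) are left in the clear, a standard decoder can still parse the stream, while the ciphered suffixes scramble the visual content.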

  2. Experimental design and analysis of JND test on coded image/video

    NASA Astrophysics Data System (ADS)

    Lin, Joe Yuchieh; Jin, Lina; Hu, Sudeng; Katsavounidis, Ioannis; Li, Zhi; Aaron, Anne; Kuo, C.-C. Jay

    2015-09-01

    The visual Just-Noticeable-Difference (JND) metric is characterized by the minimum detectable difference between two visual stimuli. Conducting the subjective JND test is a labor-intensive task. In this work, we present a novel interactive method for performing the visual JND test on compressed image/video. JND has been used to enhance perceptual visual quality in the context of image/video compression. Given a set of coding parameters, a JND test is designed to determine the distinguishable quality level against a reference image/video, which is called the anchor. The JND metric can be used to save coding bitrates by exploiting the special characteristics of the human visual system. The proposed JND test is conducted using a binary forced choice, which is often adopted to discriminate the difference in perception in a psychophysical experiment. The assessors are asked to compare coded image/video pairs and determine whether they are of the same quality or not. A bisection procedure is designed to find the JND locations so as to reduce the required number of comparisons over a wide range of bitrates. We demonstrate the efficiency of the proposed JND test and report experimental results on the image and video JND tests.
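
    The bisection procedure can be sketched as a binary search over quality levels with a same/different oracle in place of the human assessor. The oracle, level range, and monotonicity assumption below are all hypothetical; in the actual test the comparison is answered by subjects viewing coded pairs.

```python
# Bisection search for a JND point: find the lowest quality level that is
# still indistinguishable from the anchor, assuming perception is monotone
# in the quality level.

def find_jnd(anchor, lowest, indistinguishable):
    lo, hi = lowest, anchor        # invariant: lo may differ, hi looks the same
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if indistinguishable(mid, anchor):
            hi = mid               # still looks the same: search lower
        else:
            lo = mid
    return hi

# Hypothetical ground truth: levels >= 38 are indistinguishable from level 51.
oracle = lambda level, anchor: level >= 38
print(find_jnd(51, 0, oracle))     # -> 38
```

The point of the bisection is the comparison budget: locating the JND in a range of 52 levels takes about six pairwise comparisons instead of dozens.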

  3. Motion estimation optimization tools for the emerging high efficiency video coding (HEVC)

    NASA Astrophysics Data System (ADS)

    Abdelazim, Abdelrahman; Masri, Wassim; Noaman, Bassam

    2014-02-01

    Recent developments in hardware and software have enabled a new generation of video quality. However, development in networking and digital communication is lagging behind. This prompted the establishment of the Joint Collaborative Team on Video Coding (JCT-VC), with the objective of developing a new high-performance video coding standard. A primary reason for developing the HEVC was to enable efficient processing and transmission of HD videos that normally contain large smooth areas; therefore, the HEVC utilizes larger encoding blocks than the previous standard to enable more effective encoding, while smaller blocks are still exploited to encode fast/complex areas of video more efficiently. Hence, the implementation of the encoder investigates all the possible block sizes. This and many added features of the new standard have led to a significant increase in the complexity of the encoding process. Furthermore, there is no automated process to decide when large or small blocks should be exploited. To overcome this problem, this research proposes a set of optimization tools to reduce the encoding complexity while maintaining the same quality and compression rate. The method automates this process through a set of hierarchical steps while still using the standard's refined coding tools.

  4. Comparing Simple and Advanced Video Tools as Supports for Complex Collaborative Design Processes

    ERIC Educational Resources Information Center

    Zahn, Carmen; Pea, Roy; Hesse, Friedrich W.; Rosen, Joe

    2010-01-01

    Working with digital video technologies, particularly advanced video tools with editing capabilities, offers new prospects for meaningful learning through design. However, it is also possible that the additional complexity of such tools does "not" advance learning. We compared in an experiment the design processes and learning outcomes of 24…

  5. Advanced coding and modulation schemes for TDRSS

    NASA Astrophysics Data System (ADS)

    Harrell, Linda; Kaplan, Ted; Berman, Ted; Chang, Susan

    1993-11-01

    This paper describes the performance of the Ungerboeck and pragmatic 8-Phase Shift Key (PSK) Trellis Code Modulation (TCM) coding techniques with and without a (255,223) Reed-Solomon outer code as they are used for Tracking Data and Relay Satellite System (TDRSS) S-Band and Ku-Band return services. The performance of these codes at high data rates is compared to uncoded Quadrature PSK (QPSK) and rate 1/2 convolutionally coded QPSK in the presence of Radio Frequency Interference (RFI), self-interference, and hardware distortions. This paper shows that the outer Reed-Solomon code is necessary to achieve a 10^-5 Bit Error Rate (BER) with an acceptable level of degradation in the presence of RFI. This paper also shows that the TCM codes with or without the Reed-Solomon outer code do not perform well in the presence of self-interference. In fact, the uncoded QPSK signal performs better than the TCM coded signal in the self-interference situation considered in this analysis. Finally, this paper shows that the Eb/N0 degradation due to TDRSS hardware distortions is approximately 1.3 dB with a TCM coded signal or a rate 1/2 convolutionally coded QPSK signal and is 3.2 dB with an uncoded QPSK signal.

  6. Compact all-CMOS spatiotemporal compressive sensing video camera with pixel-wise coded exposure.

    PubMed

    Zhang, Jie; Xiong, Tao; Tran, Trac; Chin, Sang; Etienne-Cummings, Ralph

    2016-04-18

    We present a low power all-CMOS implementation of temporal compressive sensing with pixel-wise coded exposure. This image sensor can increase video pixel resolution and frame rate simultaneously while reducing data readout speed. Compared to previous architectures, this system modulates pixel exposure at the individual photo-diode electronically without external optical components. Thus, the system provides a reduction in size and power compared to previous optics-based implementations. The prototype image sensor (127 × 90 pixels) can reconstruct 100 fps videos from coded images sampled at 5 fps. With a 20× reduction in readout speed, our CMOS image sensor consumes only 14 μW to provide 100 fps videos. PMID:27137331
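
    The sensing model behind pixel-wise coded exposure can be illustrated in a few lines: each pixel integrates light only during its own exposure window, so a single readout carries a coded mixture of many high-rate sub-frames. This is a toy 1-D forward model with made-up masks and sizes; the compressive-sensing reconstruction that recovers the 100 fps video is omitted.

```python
# Pixel-wise coded exposure, forward model only: one coded readout per
# exposure period, where each pixel sums exactly the sub-frames selected
# by its own 0/1 exposure mask.

def coded_exposure(frames, masks):
    """frames: T sub-frames (lists of pixel values); masks: per-pixel 0/1
    exposure pattern of length T. Returns one coded readout."""
    n = len(frames[0])
    return [sum(frames[t][p] for t in range(len(frames)) if masks[p][t])
            for p in range(n)]

frames = [[10, 0], [20, 5], [30, 7]]         # T=3 sub-frames, 2 pixels
masks = [[1, 0, 1],                          # pixel 0 exposed at t=0 and t=2
         [0, 1, 0]]                          # pixel 1 exposed only at t=1
print(coded_exposure(frames, masks))         # -> [40, 5]
```

Because the masks differ per pixel, neighboring readouts sample different time windows, and a sparsity-based solver can then disentangle the sub-frames from far fewer measurements than frames.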

  7. Region-of-interest based rate control for UAV video coding

    NASA Astrophysics Data System (ADS)

    Zhao, Chun-lei; Dai, Ming; Xiong, Jing-ying

    2016-05-01

    To meet the requirement of high-quality transmission of videos captured by unmanned aerial vehicles (UAV) with low bandwidth, a novel rate control (RC) scheme based on region-of-interest (ROI) is proposed. First, the ROI information is sent to the encoder, based on the latest High Efficiency Video Coding (HEVC) standard, to generate an ROI map. Then, using the ROI map, bit allocation methods are developed at the frame level and the large coding unit (LCU) level, to avoid the inaccurate bit allocation produced by camera movement. Finally, using a more robust R-λ model, the quantization parameter (QP) for each LCU is calculated. The experimental results show that the proposed RC method achieves a lower bitrate error and higher quality for the reconstructed video by choosing appropriate pixel weights on the HEVC platform.
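
    The LCU-level bit allocation step can be sketched as a weighted split of the frame budget, with ROI LCUs weighted higher than background. The weights and the simple proportional rule below are illustrative only; the paper couples the allocated bits with an R-λ model to derive each LCU's QP.

```python
# ROI-weighted bit allocation: split a frame's bit budget across LCUs in
# proportion to per-LCU weights (ROI LCUs get a larger share).

def allocate_bits(frame_budget, roi_map, roi_weight=3.0, bg_weight=1.0):
    weights = [roi_weight if is_roi else bg_weight for is_roi in roi_map]
    total = sum(weights)
    return [frame_budget * w / total for w in weights]

roi_map = [True, False, False, True]     # 2 ROI LCUs, 2 background LCUs
bits = allocate_bits(8000, roi_map)
print(bits)                              # -> [3000.0, 1000.0, 1000.0, 3000.0]
```

The allocation always sums to the frame budget, so the ROI quality gain is paid for by coarser quantization of the background rather than by extra bitrate.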

  8. Self-images in the video monitor coded by monkey intraparietal neurons.

    PubMed

    Iriki, A; Tanaka, M; Obayashi, S; Iwamura, Y

    2001-06-01

    When playing a video game, or using a teleoperator system, we feel our self-image projected into the video monitor as a part of or an extension of ourselves. Here we show that such a self-image is coded by bimodal (somatosensory and visual) neurons in the monkey intraparietal cortex, which have visual receptive fields (RFs) encompassing their somatosensory RFs. We earlier showed these neurons to code the schema of the hand, which can be altered in accordance with psychological modification of the body image; that is, when the monkey used a rake as a tool to extend its reach, the visual RFs of these neurons elongated along the axis of the tool, as if the monkey's self-image extended to the end of the tool. In the present experiment, we trained monkeys to recognize their image in a video monitor (despite the earlier general belief that monkeys are not capable of doing so), and demonstrated that the visual RF of these bimodal neurons was now projected onto the video screen so as to code the image of the hand as an extension of the self. Further, the coding of the imaged hand could intentionally be altered to match the image artificially modified in the monitor. PMID:11377755

  9. Advanced Video Guidance Sensor and Next Generation Autonomous Docking Sensors

    NASA Technical Reports Server (NTRS)

    Granade, Stephen R.

    2004-01-01

    In recent decades, NASA's interest in spacecraft rendezvous and proximity operations has grown. Additional instrumentation is needed to improve manned docking operations' safety, as well as to enable telerobotic operation of spacecraft or completely autonomous rendezvous and docking. To address this need, Advanced Optical Systems, Inc., Orbital Sciences Corporation, and Marshall Space Flight Center have developed the Advanced Video Guidance Sensor (AVGS) under the auspices of the Demonstration of Autonomous Rendezvous Technology (DART) program. Given a cooperative target comprising several retro-reflectors, AVGS provides six-degree-of-freedom information at ranges of up to 300 meters for the DART target. It does so by imaging the target, then performing pattern recognition on the resulting image. Longer range operation is possible through different target geometries. Now that AVGS is being readied for its test flight in 2004, the question is: what next? Modifications can be made to AVGS, including different pattern recognition algorithms and changes to the retro-reflector targets, to make it more robust and accurate. AVGS could be coupled with other space-qualified sensors, such as a laser range-and-bearing finder, that would operate at longer ranges. Different target configurations, including the use of active targets, could result in significant miniaturization over the current AVGS package. We will discuss these and other possibilities for a next-generation docking sensor or sensor suite that involve AVGS.

  10. Joint source-channel coding for wireless object-based video communications utilizing data hiding.

    PubMed

    Wang, Haohong; Tsaftaris, Sotirios A; Katsaggelos, Aggelos K

    2006-08-01

    In recent years, joint source-channel coding for multimedia communications has gained increased popularity. However, very limited work has been conducted to address the problem of joint source-channel coding for object-based video. In this paper, we propose a data hiding scheme that improves the error resilience of object-based video by adaptively embedding the shape and motion information into the texture data. Within a rate-distortion theoretical framework, the source coding, channel coding, data embedding, and decoder error concealment are jointly optimized based on knowledge of the transmission channel conditions. Our goal is to achieve the best video quality as expressed by the minimum total expected distortion. The optimization problem is solved using Lagrangian relaxation and dynamic programming. The performance of the proposed scheme is tested using simulations of a Rayleigh-fading wireless channel, and the algorithm is implemented based on the MPEG-4 verification model. Experimental results indicate that the proposed hybrid source-channel coding scheme significantly outperforms methods without data hiding or unequal error protection. PMID:16900673

  11. Parallel Processing of Distributed Video Coding to Reduce Decoding Time

    NASA Astrophysics Data System (ADS)

    Tonomura, Yoshihide; Nakachi, Takayuki; Fujii, Tatsuya; Kiya, Hitoshi

    This paper proposes a parallelized DVC framework that treats each bitplane independently to reduce decoding time. Unfortunately, simple parallelization generates inaccurate bit probabilities because additional side information is not available for the decoding of subsequent bitplanes, which degrades coding efficiency. Our solution is an effective estimation method that calculates the bit probability as accurately as possible through index assignment, without recourse to side information. Moreover, we improve the coding performance of rate-adaptive LDPC (RA-LDPC), which is used in the parallelized DVC framework, by selecting a fitting sparse matrix for each bitplane according to the syndrome rate estimates computed at the encoder side. Simulations show that our parallelization method reduces the decoding time by up to 35% and achieves a bit rate reduction of about 10%.
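
    The per-bitplane parallelism above presumes that coefficients can be split into independently processed bitplanes. A minimal sketch of that split and its inverse, with hypothetical helper names not taken from the paper:

```python
# Decompose integer coefficients into bitplanes (most-significant plane
# first) so each plane can be handed to an independent decoder worker.
def to_bitplanes(values, num_planes):
    return [[(v >> p) & 1 for v in values]
            for p in range(num_planes - 1, -1, -1)]

# Reassemble coefficients from their bitplanes in MSB-to-LSB order.
def from_bitplanes(planes):
    values = [0] * len(planes[0])
    for plane in planes:
        values = [(v << 1) | b for v, b in zip(values, plane)]
    return values

coeffs = [5, 0, 7, 2]
planes = to_bitplanes(coeffs, num_planes=3)
restored = from_bitplanes(planes)  # round-trips back to coeffs
```

    In the actual framework each plane is LDPC-decoded in parallel; the difficulty the paper addresses is estimating bit probabilities for a plane without the already-decoded higher planes.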

  12. Applications of just-noticeable depth difference model in joint multiview video plus depth coding

    NASA Astrophysics Data System (ADS)

    Liu, Chao; An, Ping; Zuo, Yifan; Zhang, Zhaoyang

    2014-10-01

    A new multiview just-noticeable-depth-difference (MJNDD) model is presented and applied to compress joint multiview video plus depth. Many video coding algorithms remove spatial, temporal, and statistical redundancies, but they are not capable of removing perceptual redundancy. Since the final receptor of video is the human eye, perceptual redundancy can be removed according to the properties of the human visual system (HVS) to gain higher compression efficiency. The traditional pixel-domain just-noticeable-distortion (JND) model quantifies perceptual redundancy through luminance contrast and spatial-temporal masking effects. Because the HVS is also very sensitive to depth information, the proposed MJNDD model combines the traditional JND model with a just-noticeable-depth-difference (JNDD) model. The texture video is divided into background and foreground areas using depth information, and different JND threshold values are assigned to these two parts. The MJNDD model is then used to encode the texture video on JMVC. When encoding the depth video, the JNDD model is applied to remove block artifacts and protect edges. VSRS 3.5 (View Synthesis Reference Software) is then used to generate the intermediate views. Experimental results show that our model can tolerate more noise and improves compression efficiency by 25.29% on average, and by up to 54.06%, compared with JMVC while maintaining subjective quality. Hence it achieves a high compression ratio at a low bit rate.
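
    The depth-guided threshold assignment can be illustrated as below. The split value and the two JND thresholds are made-up placeholders for illustration, not the paper's model:

```python
# Sketch of assigning different JND thresholds to foreground and
# background using the depth map, as described above. Larger depth
# values are taken to mean "closer to the camera"; all numbers are
# illustrative placeholders.
def jnd_map(depth, depth_split=128, fg_jnd=3.0, bg_jnd=6.0):
    return [[fg_jnd if d >= depth_split else bg_jnd for d in row]
            for row in depth]

depth = [[200, 50], [90, 210]]
jmap = jnd_map(depth)
# Foreground (close) pixels get the stricter threshold; background
# pixels may absorb larger distortion before it becomes noticeable.
```

    The encoder can then skip or coarsen residuals whose distortion stays below the per-pixel threshold, which is where the bitrate savings come from.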

  13. A video coding scheme based on joint spatiotemporal and adaptive prediction.

    PubMed

    Jiang, Wenfei; Latecki, Longin Jan; Liu, Wenyu; Liang, Hui; Gorman, Ken

    2009-05-01

    We propose a video coding scheme that departs from the traditional motion estimation/DCT framework and instead uses a Karhunen-Loeve transform (KLT)/joint spatiotemporal prediction framework. In particular, a novel approach that performs joint spatial and temporal prediction simultaneously is introduced. It bypasses the complex H.26x interframe techniques and is less computationally intensive. Owing to the effective joint prediction and the image-dependent color-space transformation (KLT), the proposed approach is demonstrated experimentally to consistently improve video quality, and in many cases to achieve better compression rates and improved computational speed. PMID:19342337
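
    The image-dependent color-space KLT can be sketched as an eigendecomposition of the frame's RGB covariance. This is a generic KLT illustration under that assumption, not the authors' implementation:

```python
import numpy as np

# Sketch of an image-dependent color-space KLT: the transform basis is
# the eigenvector matrix of the per-image RGB covariance, so the first
# channel captures the maximum-variance color direction and the
# transformed channels are decorrelated.
def klt_color_transform(pixels_rgb):
    # pixels_rgb: (N, 3) array of RGB samples from one frame
    mean = pixels_rgb.mean(axis=0)
    centered = pixels_rgb - mean
    cov = centered.T @ centered / len(pixels_rgb)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    basis = eigvecs[:, ::-1]                 # reorder: descending variance
    return centered @ basis, basis, mean

rng = np.random.default_rng(0)
rgb = rng.normal(size=(1000, 3))
coeffs, basis, mean = klt_color_transform(rgb)
# coeffs @ basis.T + mean reconstructs the original samples exactly,
# since the eigenvector basis is orthonormal.
```

    Unlike a fixed transform such as YCbCr, the basis here is recomputed per image, which is what makes the KLT optimal for decorrelation at the cost of signaling the basis.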

  14. Advanced Imaging Optics Utilizing Wavefront Coding.

    SciTech Connect

    Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen

    2015-06-01

    Image processing offers the potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost-effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material-related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined the image quality of simulated and experimental wavefront-coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limits must be placed on system noise, aperture, field of view, and bandwidth to develop a robust wavefront-coded system.
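
    A minimal sketch of the cubic-phase pupil function central to wavefront coding, assuming the common form α(x³ + y³) over a normalized circular aperture; α is an illustrative strength, not a value from the report:

```python
import numpy as np

# Cubic phase plate sketch: the pupil phase is alpha*(x^3 + y^3) over a
# normalized circular aperture. This odd phase profile makes the PSF
# nearly invariant over an extended depth of focus, to be undone later
# by digital reconstruction. alpha is an illustrative strength.
def cubic_phase_pupil(n=64, alpha=20.0):
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x)
    aperture = (X**2 + Y**2) <= 1.0       # circular pupil support
    phase = alpha * (X**3 + Y**3)         # cubic phase profile
    return aperture * np.exp(1j * phase)

pupil = cubic_phase_pupil()
# Incoherent PSF as the squared magnitude of the pupil's Fourier transform.
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
```

    Because the cubic term is odd, the pupil function satisfies P(-x, -y) = P*(x, y), which shows up in the characteristic asymmetric, defocus-tolerant PSF.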

  15. Detailed Precipitation Measurements for GV: Advances in Video-Distrometers

    NASA Astrophysics Data System (ADS)

    Schwinzerl, Martin; Lammer, Günter; Schönhuber, Michael

    2014-05-01

    The 2D-Video-Distrometer (2DVD) is an established instrument for in-situ measurement of precipitation, delivering per-particle data for solid, liquid, and mixed-phase precipitation, with over 80 successful deployments worldwide to its credit. At its core, two orthogonally oriented, vertically displaced, and precisely aligned high-speed cameras sample hydrometeors (rain, snow, hail, graupel, ice pellets, etc.) as they fall through a sampling area of approx. 100 cm². This measurement principle, i.e., obtaining two projections of each detected particle while gathering statistically significant data by sampling over a substantial measurement area, allows observables such as diameter, oblateness and shape, vertical velocity, and the contributions to the rain rate and to the cumulative rain amount to be captured and evaluated for each individual detected particle. For particles displaying rotational symmetry, horizontal velocity and (for particles exceeding a diameter of approx. 1.5 mm) canting angle can also be estimated, again on a per-hydrometeor basis. While the 2DVD has been successfully deployed during many ground validation campaigns, some inherent cost and complexity constraints have so far prevented its use for some applications and in some environments. To address these limitations, research has been conducted to develop a 1D-Video-Distrometer (1DVD), which employs only one camera system but aims to retain the capability to capture as many per-particle observables as possible. First results from our activities towards such a system with reduced complexity and deployment costs are presented, together with a comparison of data sets gathered with both the 1DVD and current-generation 2DVD systems. Current generations of the 2DVD can yield exceptionally high data rates, especially during extreme rain events such as tropical storms. 
Therefore, the software suite which accompanies each device employs

  16. Standard-Compliant Multiple Description Video Coding over Packet Loss Network

    NASA Astrophysics Data System (ADS)

    Bai, Huihui; Zhao, Yao; Zhang, Mengmeng

    2010-12-01

    An effective multiple description video coding scheme is proposed for transmission over packet-loss networks. Using priority encoding transmission, we attempt to overcome the limitations of specific scalable video codecs and apply FEC-based multiple description coding to a common video coder, such as the standard H.264. First, multiple descriptions are generated by temporal downsampling, and frames with high motion change are duplicated in each description. Then, according to the differing motion characteristics between frames, each description is divided into several messages, so that better temporal correlation is maintained within each message for better estimation when information is lost. Based on priority encoding transmission, unequal protection is assigned to each message, with the priority designed in view of the packet loss rate of the channel and the significance of the bit streams. Experimental results validate the effectiveness of the proposed scheme, which performs better than the equal-protection scheme and other state-of-the-art methods.
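
    The description-generation step above (temporal downsampling plus duplication of high-motion frames) can be sketched as follows; the helper name is hypothetical and the set of high-motion frame indices is assumed given:

```python
# Split a frame sequence into two descriptions by temporal downsampling
# (even/odd frames), duplicating frames flagged as high-motion into
# both descriptions so either description alone can reconstruct them.
def make_descriptions(frames, high_motion):
    d0 = [f for i, f in enumerate(frames) if i % 2 == 0 or i in high_motion]
    d1 = [f for i, f in enumerate(frames) if i % 2 == 1 or i in high_motion]
    return d0, d1

d0, d1 = make_descriptions(["f0", "f1", "f2", "f3"], high_motion={1})
# "f1" appears in both descriptions: if the packet carrying one
# description is lost, the other still contains the high-motion frame.
```

    Missing low-motion frames in a surviving description are then concealed by temporal interpolation from their neighbors, which works precisely because their motion is small.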

  17. An edge-based temporal error concealment for MPEG-coded video

    NASA Astrophysics Data System (ADS)

    Huang, Yu-Len; Lien, Hsiu-Yi

    2005-07-01

    When transmitted over unreliable channels, compressed video can suffer severe degradation, and various strategies are employed to maintain an acceptable quality in the decoded image sequence. Error concealment (EC) is one of the most effective approaches to diminish this quality degradation, and a number of EC algorithms have been developed to combat transmission errors in MPEG-coded video. These methods generally work well for reconstructing smooth or regular damaged macroblocks. However, for irregular or high-detail damaged macroblocks, the reconstruction may exhibit noticeable blurring or fail to match the surrounding macroblocks. This paper proposes an edge-based temporal EC model to conceal such errors. In the proposed method, both the spatial and the temporal contextual features of the compressed video are measured using an edge detector, the Sobel operator. The edge information surrounding a damaged macroblock is used to estimate the lost motion vectors with a boundary matching technique, and the estimated motion vectors are then used to reconstruct the damaged macroblock from the information in the reference frames. In comparison with traditional EC algorithms, the proposed method provides a significant improvement in both objective peak signal-to-noise ratio (PSNR) and subjective visual quality for MPEG-coded video.
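
    The edge-measurement step relies on the Sobel operator; below is a plain pure-Python version applied to a small grayscale patch (the boundary-matching and motion-vector estimation stages of the method are omitted):

```python
# Sobel gradient magnitude over a grayscale patch given as a list of
# lists. In the concealment method above, this edge map is computed on
# the pixels surrounding a damaged macroblock to guide boundary matching.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]   # borders are left at zero
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge produces a strong horizontal gradient response.
patch = [[0, 0, 255, 255]] * 4
mag = sobel_magnitude(patch)
```

    Strong responses mark edges crossing the macroblock boundary; the concealment then prefers candidate motion vectors that keep those edges continuous.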

  18. Mixture block coding with progressive transmission in packet video. Appendix 1: Item 2. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung

    1989-01-01

    Video transmission will become an important part of future multimedia communication because of dramatically increasing user demand for video and the rapid evolution of coding algorithms and VLSI technology. Video transmission will be part of the broadband integrated services digital network (B-ISDN). Asynchronous transfer mode (ATM) is a viable candidate for implementing B-ISDN due to its inherent flexibility, service independence, and high performance. Under ATM, information has to be coded into discrete cells which travel independently in the packet-switching network. A practical realization of an ATM video codec called Mixture Block Coding with Progressive Transmission (MBCPT) is presented. This variable-bit-rate coding algorithm shows how constant-quality performance can be obtained according to user demand. Interactions between the codec and the network are emphasized, including packetization, service synchronization, flow control, and error recovery. Finally, some simulation results based on MBCPT coding with error recovery are presented.

  19. Next Generation Advanced Video Guidance Sensor: Low Risk Rendezvous and Docking Sensor

    NASA Technical Reports Server (NTRS)

    Lee, Jimmy; Carrington, Connie; Spencer, Susan; Bryan, Thomas; Howard, Ricky T.; Johnson, Jimmie

    2008-01-01

    The Next Generation Advanced Video Guidance Sensor (NGAVGS) is being built and tested at MSFC. This paper provides an overview of current work on the NGAVGS, a summary of the video guidance heritage, and the AVGS performance on the Orbital Express mission. This paper also provides a discussion of applications to ISS cargo delivery vehicles, CEV, and future lunar applications.

  20. Lights, Camera, Action: Advancing Learning, Research, and Program Evaluation through Video Production in Educational Leadership Preparation

    ERIC Educational Resources Information Center

    Friend, Jennifer; Militello, Matthew

    2015-01-01

    This article analyzes specific uses of digital video production in the field of educational leadership preparation, advancing a three-part framework that includes the use of video in (a) teaching and learning, (b) research methods, and (c) program evaluation and service to the profession. The first category within the framework examines videos…

  1. Video segmentation using spatial proximity, color, and motion information for region-based coding

    NASA Astrophysics Data System (ADS)

    Hong, Won H.; Kim, Nam Chul; Lee, Sang-Mi

    1994-09-01

    An efficient video segmentation algorithm whose homogeneity measure incorporates spatial proximity, color, and motion information simultaneously is presented for region-based coding. The procedure consists of two steps: primary segmentation and secondary segmentation. In the primary segmentation, an input image is finely segmented by FSCL. In the secondary segmentation, the many small or similar regions generated in the preceding step are eliminated or merged by a fast RSST. Experiments show that the proposed algorithm produces efficient segmentation results, and that a video coding system using the algorithm yields visually acceptable quality with a PSNR of 36-37 dB at a very low bitrate of about 13.2 kbit/s.

  2. Joint wavelet-based coding and packetization for video transport over packet-switched networks

    NASA Astrophysics Data System (ADS)

    Lee, Hung-ju

    1996-02-01

    In recent years, wavelet theory applied to image, audio, and video compression has been extensively studied. However, pursuing compression ratio alone without considering the underlying networking system is unrealistic, especially for multimedia applications over networks. In this paper, we present an integrated approach that attempts to preserve the advantages of wavelet-based image coding while providing a degree of robustness to lost packets over packet-switched networks. Two different packetization schemes, the intrablock-oriented (IAB) and interblock-oriented (IRB) schemes, are presented in conjunction with wavelet-based coding. Our approach is evaluated under two different packet loss models with various packet loss probabilities, through simulations driven by real video sequences.

  3. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    NASA Astrophysics Data System (ADS)

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.

  4. Use of FEC coding to improve statistical multiplexing performance for video transport over ATM networks

    NASA Astrophysics Data System (ADS)

    Kurceren, Ragip; Modestino, James W.

    1998-12-01

    The use of forward error-control (FEC) coding, possibly in conjunction with ARQ techniques, has emerged as a promising approach for video transport over ATM networks for cell-loss recovery and/or bit error correction, such as might be required for wireless links. Although FEC provides cell-loss recovery capabilities it also introduces transmission overhead which can possibly cause additional cell losses. A methodology is described to maximize the number of video sources multiplexed at a given quality of service (QoS), measured in terms of decoded cell loss probability, using interlaced FEC codes. The transport channel is modelled as a block interference channel (BIC) and the multiplexer as single server, deterministic service, finite buffer supporting N users. Based upon an information-theoretic characterization of the BIC and large deviation bounds on the buffer overflow probability, the described methodology provides theoretically achievable upper limits on the number of sources multiplexed. Performance of specific coding techniques using interlaced nonbinary Reed-Solomon (RS) codes and binary rate-compatible punctured convolutional (RCPC) codes is illustrated.

  5. (Video 8 of 8) Omics: Advancing Personalized Medicine from Space to Earth

    NASA Video Gallery

    NASA’s Human Research Program (HRP) is releasing the video “Omics: Advancing Personalized Medicine from Space to Earth”, to highlight its Twins Study, coinciding with National Twins Days. This is t...

  6. DMMFast: a complexity reduction scheme for three-dimensional high-efficiency video coding intraframe depth map coding

    NASA Astrophysics Data System (ADS)

    Sanchez, Gustavo; Saldanha, Mário; Balota, Gabriel; Zatt, Bruno; Porto, Marcelo; Agostini, Luciano

    2015-03-01

    We present a complexity reduction scheme for the depth map intraprediction of three-dimensional high-efficiency video coding (3-D-HEVC). 3-D-HEVC introduces a new set of tools specific to depth map coding, adding complexity to intraprediction and creating new challenges for complexity reduction. We therefore present DMMFast (depth modeling modes fast prediction), a scheme composed of two new algorithms: the simplified edge detector (SED) and the gradient-based mode one filter (GMOF). The SED anticipates the blocks that are likely to be better predicted by the traditional intra modes, avoiding the evaluation of DMMs. The GMOF applies a gradient-based filter to the borders of the block and predicts the best positions at which to evaluate DMM 1. Software evaluations showed that DMMFast achieves a time saving of 11.9% on depth map intraprediction in the random access mode, without affecting the quality of the synthesized views. In the all-intra configuration, the proposed scheme achieves an average time saving of 35% for the whole encoder. Subjective quality assessment was also performed, showing that the proposed technique introduces minimal quality losses in the final encoded video.

  7. Progress in Advanced Spray Combustion Code Integration

    NASA Technical Reports Server (NTRS)

    Liang, Pak-Yan

    1993-01-01

    A multiyear project to assemble a robust, multiphase spray combustion code is now underway and gradually building up to full speed. The overall effort involves several university and government research teams as well as Rocketdyne. The first part of this paper gives an overview of the respective roles of the participants, the master strategy, the evolutionary milestones, and an assessment of the state of the art of various key components. The second half highlights the progress made to date in extending the baseline Navier-Stokes solver to handle multiphase, multispecies, chemically reactive sub- to supersonic flows. The major hurdles to achieving significant speed-ups are delineated, and the approaches to overcoming them are discussed.

  8. A Novel Macroblock Level Rate Control Method for Stereo Video Coding

    PubMed Central

    Zhu, Gaofeng; Jiang, Gangyi; Peng, Zongju; Shao, Feng; Chen, Fen; Ho, Yo-Sung

    2014-01-01

    To compress stereo video effectively, this paper proposes a novel macroblock (MB) level rate control method based on binocular perception. A binocular just-noticeable difference (BJND) model based on parallax matching is first used to describe binocular perception. The proposed rate control method then operates in stereo video coding at four levels, namely, the view level, group-of-pictures (GOP) level, frame level, and MB level. At the view level, different proportions of the bitrate are allocated to the left and right views of the stereo video according to a rate allocation proportion obtained from prior statistics. At the GOP level, the total bit budget allocated to each GOP is computed and the initial quantization parameter of each GOP is set. At the frame level, the target bits allocated to each frame are computed. At the MB level, a visual perception factor, measured by the BJND value of the MB, is used to adjust the MB-level bit allocation, so that the rate control results are in line with human visual characteristics. Experimental results show that the proposed method controls the bitrate more accurately and obtains better subjective stereo video quality than other methods. PMID:24737956
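
    The MB-level step can be illustrated by distributing the frame budget in inverse proportion to each block's BJND, so blocks where distortion is less noticeable receive fewer bits. The inverse-proportional weighting rule here is an illustrative choice, not the paper's exact adjustment formula:

```python
# Sketch of perceptually weighted MB-level bit allocation: a high BJND
# value means distortion in that macroblock is harder to notice, so
# the block gets a smaller share of the frame's target bits.
def allocate_mb_bits(frame_target_bits, bjnd_values):
    weights = [1.0 / b for b in bjnd_values]   # high BJND -> low weight
    total = sum(weights)
    return [frame_target_bits * w / total for w in weights]

bits = allocate_mb_bits(12000, bjnd_values=[2.0, 4.0, 4.0])
# The perceptually sensitive first MB receives half the frame budget.
```

    The per-MB budgets then drive the QP selection for each macroblock, tying the rate control to binocular perception rather than uniform allocation.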

  9. 3-D model-based frame interpolation for distributed video coding of static scenes.

    PubMed

    Maitre, Matthieu; Guillemot, Christine; Morin, Luce

    2007-05-01

    This paper addresses the problem of side information extraction for distributed coding of videos captured by a camera moving in a 3-D static environment. Examples of targeted applications are augmented reality, remote-controlled robots operating in hazardous environments, or remote exploration by drones. It explores the benefits of the structure-from-motion paradigm for distributed coding of this type of video content. Two interpolation methods constrained by the scene geometry, based either on block matching along epipolar lines or on 3-D mesh fitting, are first developed. These techniques are based on a robust algorithm for sub-pel matching of feature points, which leads to semi-dense correspondences between key frames. However, their rate-distortion (RD) performances are limited by misalignments between the side information and the actual Wyner-Ziv (WZ) frames due to the assumption of linear motion between key frames. To cope with this problem, two feature point tracking techniques are introduced, which recover the camera parameters of the WZ frames. A first technique, in which the frames remain encoded separately, performs tracking at the decoder and leads to significant RD performance gains. A second technique further improves the RD performances by allowing a limited tracking at the encoder. As an additional benefit, statistics on tracks allow the encoder to adapt the key frame frequency to the video motion content. PMID:17491456

  10. Picture quality measurement based on block visibility in discrete cosine transform-coded video sequences

    NASA Astrophysics Data System (ADS)

    Coudoux, Francois-Xavier; Gazalet, Marc G.; Derviaux, Christian; Corlay, Patrick

    2001-04-01

    In this paper, we present a perceptual measure that predicts the visibility of the well-known blocking effect in discrete cosine transform coded image sequences. The main objective of this work is to use the measure for adaptive video postprocessing, in order to significantly improve the visual quality of the decoded video sequences at the receiver. The proposed measure is based on a visual model accounting for both the spatial and temporal properties of the human visual system; the input of the visual model is the distorted sequence only. Psychovisual experiments were carried out to determine the eye's sensitivity to blocking artifacts by varying a number of visually significant parameters: background level, and spatial and temporal activity in the surrounding image. The measured visibility thresholds enable us to estimate the model parameters. The visual model is finally applied to real coded video sequences, and a comparison of the measurement results with subjective tests shows that the proposed perceptual measure correlates well with subjective evaluation.
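
    A toy version of boundary-discontinuity measurement for 8×8 DCT block edges: the average absolute pixel difference across block boundaries relative to the same difference inside blocks. This simple ratio is only an illustration; the paper's measure additionally models the spatial and temporal masking of the human visual system:

```python
import numpy as np

# Toy no-reference blockiness metric: ratio of mean absolute column
# differences at 8x8 block boundaries to those inside blocks. A value
# well above 1 indicates visible blocking structure.
def blockiness(img, block=8):
    img = np.asarray(img, dtype=float)
    col_diff = np.abs(np.diff(img, axis=1))
    at_boundary = col_diff[:, block - 1::block]   # across block edges
    mask = np.ones(col_diff.shape[1], dtype=bool)
    mask[block - 1::block] = False
    inside = col_diff[:, mask]
    return at_boundary.mean() / (inside.mean() + 1e-6)

smooth = np.tile(np.arange(32.0), (8, 1))               # ramp, no blocks
blocky = np.kron(np.arange(4.0) * 40, np.ones((8, 8)))  # 8x8 flat tiles
# blockiness(blocky) is orders of magnitude above blockiness(smooth).
```

    A postprocessor could use such a per-region score to decide where deblocking filtering is actually needed, which is the adaptive use case the paper targets.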

  11. Foundational development of an advanced nuclear reactor integrated safety code.

    SciTech Connect

    Clarno, Kevin; Lorber, Alfred Abraham; Pryor, Richard J.; Spotz, William F.; Schmidt, Rodney Cannon; Belcourt, Kenneth; Hooper, Russell Warren; Humphries, Larry LaRon

    2010-02-01

    This report describes the activities and results of a Sandia LDRD project whose objective was to develop and demonstrate foundational aspects of a next-generation nuclear reactor safety code that leverages advanced computational technology. The project scope was directed towards the systems-level modeling and simulation of an advanced, sodium cooled fast reactor, but the approach developed has a more general applicability. The major accomplishments of the LDRD are centered around the following two activities. (1) The development and testing of LIME, a Lightweight Integrating Multi-physics Environment for coupling codes that is designed to enable both 'legacy' and 'new' physics codes to be combined and strongly coupled using advanced nonlinear solution methods. (2) The development and initial demonstration of BRISC, a prototype next-generation nuclear reactor integrated safety code. BRISC leverages LIME to tightly couple the physics models in several different codes (written in a variety of languages) into one integrated package for simulating accident scenarios in a liquid sodium cooled 'burner' nuclear reactor. Other activities and accomplishments of the LDRD include (a) further development, application and demonstration of the 'non-linear elimination' strategy to enable physics codes that do not provide residuals to be incorporated into LIME, (b) significant extensions of the RIO CFD code capabilities, (c) complex 3D solid modeling and meshing of major fast reactor components and regions, and (d) an approach for multi-physics coupling across non-conformal mesh interfaces.

  12. Protection of HEVC Video Delivery in Vehicular Networks with RaptorQ Codes

    PubMed Central

    Martínez-Rach, Miguel; López, Otoniel; Malumbres, Manuel Pérez

    2014-01-01

    With future vehicles equipped with processing capability, storage, and communications, vehicular networks will become a reality. A vast number of applications will arise that will make use of this connectivity. Some of them will be based on video streaming. In this paper we focus on HEVC video coding standard streaming in vehicular networks and how it deals with packet losses with the aid of RaptorQ, a Forward Error Correction scheme. As vehicular networks are packet loss prone networks, protection mechanisms are necessary if we want to guarantee a minimum level of quality of experience to the final user. We have run simulations to evaluate which configurations fit better in this type of scenarios. PMID:25136675

  13. A modified prediction scheme of the H.264 multiview video coding to improve the decoder performance

    NASA Astrophysics Data System (ADS)

    Hamadan, Ayman M.; Aly, Hussein A.; Fouad, Mohamed M.; Dansereau, Richard M.

    2013-02-01

    In this paper, we present a modified inter-view prediction scheme for multiview video coding (MVC). With more inter-view prediction, the number of reference frames required to decode a single view increases. Consequently, the data size for decoding a single view increases, impacting decoder performance. We therefore propose an MVC scheme that requires less inter-view prediction than the standard MVC scheme. The proposed scheme is implemented and tested on real multiview video sequences, and shows improvements in the average data size required either to decode a single view or to access any frame (i.e., random access), with comparable rate-distortion, compared with the standard MVC scheme and other improved techniques from the literature.

  14. Advances in space radiation shielding codes

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Tripathi, Ram K.; Qualls, Garry D.; Cucinotta, Francis A.; Prael, Richard E.; Norbury, John W.; Heinbockel, John H.; Tweed, John; De Angelis, Giovanni

    2002-01-01

    Early space radiation shield code development relied on Monte Carlo methods and made important contributions to the space program. Monte Carlo methods have resorted to restricted one-dimensional problems leading to imperfect representation of appropriate boundary conditions. Even so, intensive computational requirements resulted and shield evaluation was made near the end of the design process. Resolving shielding issues usually had a negative impact on the design. Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary concept to the final design. For the last few decades, we have pursued deterministic solutions of the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard Finite Element Method (FEM) geometry common to engineering design methods. A single ray trace in such geometry requires 14 milliseconds and limits application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given.

  15. Advances in space radiation shielding codes.

    PubMed

    Wilson, John W; Tripathi, Ram K; Qualls, Garry D; Cucinotta, Francis A; Prael, Richard E; Norbury, John W; Heinbockel, John H; Tweed, John; De Angelis, Giovanni

    2002-12-01

    Early space radiation shield code development relied on Monte Carlo methods and made important contributions to the space program. Monte Carlo methods have resorted to restricted one-dimensional problems leading to imperfect representation of appropriate boundary conditions. Even so, intensive computational requirements resulted and shield evaluation was made near the end of the design process. Resolving shielding issues usually had a negative impact on the design. Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary concept to the final design. For the last few decades, we have pursued deterministic solutions of the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard Finite Element Method (FEM) geometry common to engineering design methods. A single ray trace in such geometry requires 14 milliseconds and limits application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given. PMID:12793737

  16. Low bit rate video coding using robust motion vector regeneration in the decoder.

    PubMed

    Banham, M R; Brailean, J C; Chan, C L; Katsaggelos, A K

    1994-01-01

    In this paper, we present a novel coding technique that makes use of the nonstationary characteristics of an image sequence displacement field to estimate and encode motion information. We utilize an MPEG style codec in which the anchor frames in a sequence are encoded with a hybrid approach using quadtree, DCT, and wavelet-based coding techniques. A quadtree structured approach is also utilized for the interframe information. The main objective of the overall design is to demonstrate the coding potential of a newly developed motion estimator called the coupled linearized MAP (CLMAP) estimator. This estimator can be used as a means for producing motion vectors that may be regenerated at the decoder with a coarsely quantized error term created in the encoder. The motion estimator generates highly accurate motion estimates from this coarsely quantized data. This permits the elimination of a separately coded displaced frame difference (DFD) and coded motion vectors. For low bit rate applications, this is especially important because the overhead associated with the transmission of motion vectors may become prohibitive. We exploit both the advantages of the nonstationary motion estimator and the effective compression of the anchor frame coder to improve the visual quality of reconstructed QCIF format color image sequences at low bit rates. Comparisons are made with other video coding methods, including the H.261 and MPEG standards and a pel-recursive-based codec. PMID:18291958

  17. Recent advances in the Mercury Monte Carlo particle transport code

    SciTech Connect

    Brantley, P. S.; Dawson, S. A.; McKinley, M. S.; O'Brien, M. J.; Stevens, D. E.; Beck, B. R.; Jurgenson, E. D.; Ebbers, C. A.; Hall, J. M.

    2013-07-01

    We review recent physics and computational science advances in the Mercury Monte Carlo particle transport code under development at Lawrence Livermore National Laboratory. We describe recent efforts to enable a nuclear resonance fluorescence capability in the Mercury photon transport. We also describe recent work to implement a probability of extinction capability into Mercury. We review the results of current parallel scaling and threading efforts that enable the code to run on millions of MPI processes. (authors)

  18. Robust pedestrian tracking and recognition from FLIR video: a unified approach via sparse coding.

    PubMed

    Li, Xin; Guo, Rui; Chen, Chao

    2014-01-01

    Sparse coding is an emerging method that has been successfully applied to both robust object tracking and recognition in the vision literature. In this paper, we propose a sparse coding-based approach toward joint object tracking-and-recognition and explore its potential in the analysis of forward-looking infrared (FLIR) video to support nighttime machine vision systems. A key technical contribution of this work is to unify existing sparse coding-based approaches to tracking and recognition under the same framework, so that they can benefit from each other in a closed loop. On the one hand, tracking the same object through temporal frames allows us to achieve improved recognition performance through dynamic updating of the template/dictionary and the combining of multiple recognition results; on the other hand, the recognition of individual objects facilitates the tracking of multiple objects (i.e., walking pedestrians), especially in the presence of occlusion within a crowded environment. We report experimental results on both the CASIA Pedestrian Database and our own collected FLIR video database to demonstrate the effectiveness of the proposed joint tracking-and-recognition approach. PMID:24961216

  19. Effects of Narrative Script Advance Organizer Strategies Used to Introduce Video in the Foreign Language Classroom

    ERIC Educational Resources Information Center

    Ambard, Philip D.; Ambard, Linda K.

    2012-01-01

    The study compared participant comprehension of foreign language video content using two advance organizer (AO) strategies while exploring the benefits of AOs as proficiency increases. Participants were 50 advanced-beginner Spanish college students in three sections. Collaborative reading condition participants read a target language narrative…

  20. Recent advances in nondestructive evaluation made possible by novel uses of video systems

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.; Roth, Don J.

    1990-01-01

    Complex materials are being developed for use in future advanced aerospace systems. High temperature materials have been targeted as a major area of materials development. The development of composites consisting of ceramic matrix and ceramic fibers or whiskers is currently being aggressively pursued internationally. These new advanced materials are difficult and costly to produce; however, their low density and high operating temperature range are needed for the next generation of advanced aerospace systems. These materials represent a challenge to the nondestructive evaluation community. Video imaging techniques not only enhance the nondestructive evaluation, but they are also required for proper evaluation of these advanced materials. Specific research examples are given, highlighting the impact that video systems have had on the nondestructive evaluation of ceramics. An image processing technique for computerized determination of grain and pore size distribution functions from microstructural images is discussed. The uses of video and computer systems for displaying, evaluating, and interpreting ultrasonic image data are presented.

  1. Inter-bit prediction based on maximum likelihood estimate for distributed video coding

    NASA Astrophysics Data System (ADS)

    Klepko, Robert; Wang, Demin; Huchet, Grégory

    2010-01-01

    Distributed Video Coding (DVC) is an emerging video coding paradigm for systems that require low-complexity encoders supported by high-complexity decoders. A typical real-world application for a DVC system is mobile phones with video capture hardware that have a limited encoding capability supported by base stations with a high decoding capability. Generally speaking, a DVC system operates by dividing a source image sequence into two streams, key frames and Wyner-Ziv (W) frames; the key frames are used to represent the source and to produce an approximation to the W frames called S frames (where S stands for side information), while the W frames are used to correct the bit errors in the S frames. This paper presents an effective algorithm to reduce the bit errors in the side information of a DVC system. The algorithm is based on maximum likelihood estimation to help predict future bits to be decoded. The reduction in bit errors in turn reduces the number of parity bits needed for error correction. Thus, a higher coding efficiency is achieved since fewer parity bits need to be transmitted from the encoder to the decoder. The algorithm is called inter-bit prediction because it predicts the bit-plane to be decoded from previously decoded bit-planes, one bit-plane at a time, starting from the most significant bit-plane. Results from experiments using real-world image sequences show that the inter-bit prediction algorithm reduces the bit rate by up to 13% for our test sequences. This bit rate reduction corresponds to a PSNR gain of about 1.6 dB for the W frames.
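
    The maximum likelihood prediction described above can be illustrated with a toy sketch: the prediction for a bit in plane k is the empirically most frequent value observed for the same context of co-located, already-decoded higher planes. This is a minimal Python illustration of the idea only; the function names and the training scheme are assumptions for the sketch, not taken from the paper:

    ```python
    from collections import Counter

    def train_bit_predictor(planes):
        """Estimate P(bit | context) by maximum likelihood (empirical
        frequency). planes: list of bit-planes (lists of 0/1), most
        significant first. The context of a bit in plane k is the tuple
        of co-located bits in planes 0..k-1."""
        counts, totals = Counter(), Counter()
        for k in range(1, len(planes)):
            for i, bit in enumerate(planes[k]):
                ctx = (k,) + tuple(p[i] for p in planes[:k])
                counts[ctx] += bit
                totals[ctx] += 1
        return counts, totals

    def predict_bit(counts, totals, k, higher_bits):
        """Predict the most likely bit value given the decoded higher planes."""
        ctx = (k,) + tuple(higher_bits)
        if totals[ctx] == 0:
            return 0                  # no statistics for this context yet
        return 1 if counts[ctx] * 2 >= totals[ctx] else 0
    ```

    In a DVC setting the statistics would come from the side information; the fewer bits the predictor gets wrong, the fewer parity bits the decoder must request.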

  2. Advanced technology development for image gathering, coding, and processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1990-01-01

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.

  3. Fast mode decision for multiview video coding based on depth maps

    NASA Astrophysics Data System (ADS)

    Cernigliaro, Gianluca; Jaureguizar, Fernando; Ortega, Antonio; Cabrera, Julián; García, Narciso

    2009-01-01

    A new fast mode decision (FMD) algorithm for multi-view video coding (MVC) is presented. One of the multiple views is encoded with traditional methods, which provides a mode decision (MD) map, while encoding of the other views is based on an analysis of the homogeneity of the depth map. This approach reduces the burden of the rate-distortion (RD) motion analysis by exploiting the availability of a depth map, which is assumed to be provided by the acquisition process. Although there is a slight decrease in rate-distortion performance, there is a significant reduction in computational cost.
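
    The gating idea, skip the costly RD search when the block's depth is homogeneous and reuse the mode from the MD map, can be sketched as follows; the variance test and the threshold value are illustrative assumptions, not the paper's actual criterion:

    ```python
    import numpy as np

    def homogeneous(depth_block, threshold=4.0):
        """Decide whether a macroblock's depth map region is homogeneous
        (hypothetical variance threshold)."""
        return float(np.var(depth_block)) < threshold

    def choose_mode(depth_block, md_map_mode, full_rd_search):
        """Fast mode decision sketch: reuse the MD-map mode for
        homogeneous-depth blocks, else run the full RD analysis."""
        if homogeneous(depth_block):
            return md_map_mode        # skip the RD search entirely
        return full_rd_search()      # fall back to full RD analysis
    ```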

  4. Depth-based representations: Which coding format for 3D video broadcast applications?

    NASA Astrophysics Data System (ADS)

    Kerbiriou, Paul; Boisson, Guillaume; Sidibé, Korian; Huynh-Thu, Quan

    2011-03-01

    3D Video (3DV) delivery standardization is currently ongoing in MPEG, and it is now time to choose the 3DV data representation format. What is at stake is the final quality for end users, i.e., the visual quality of synthesized views. We focus on two major rival depth-based formats, namely Multiview Video plus Depth (MVD) and Layered Depth Video (LDV). MVD can be considered the basic depth-based 3DV format, generated by disparity estimation from multiview sequences. LDV is more sophisticated, compacting multiview data into color and depth occlusion layers. We compare final view quality using MVD2 and LDV (both containing two color channels plus two depth components) coded with MVC at various compression ratios. Depending on the format, the appropriate synthesis process is performed to generate the final stereoscopic pairs. Comparisons are provided in terms of SSIM and PSNR with respect to the original views and to synthesized references (obtained without compression). Ultimately, LDV significantly outperforms MVD when using state-of-the-art reference synthesis algorithms. Managing occlusions before encoding is advantageous compared with handling redundant signals at the decoder side. Besides, we observe that depth quantization does not induce much loss in final view quality until a significant degradation level is reached. Improvements in disparity estimation and view synthesis algorithms are therefore still expected during the remaining standardization steps.
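
    For reference, the PSNR metric used in such objective comparisons reduces to a one-line computation over the reference and synthesized views:

    ```python
    import numpy as np

    def psnr(reference, test, peak=255.0):
        """Peak signal-to-noise ratio in dB between two 8-bit images."""
        diff = reference.astype(np.float64) - test.astype(np.float64)
        mse = np.mean(diff ** 2)
        if mse == 0:
            return float("inf")       # identical images
        return 10.0 * np.log10(peak * peak / mse)
    ```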

  5. Code qualification of structural materials for AFCI advanced recycling reactors.

    SciTech Connect

    Natesan, K.; Li, M.; Majumdar, S.; Nanstad, R.K.; Sham, T.-L.

    2012-05-31

    This report summarizes the further findings from the assessments of current status and future needs in code qualification and licensing of reference structural materials and new advanced alloys for advanced recycling reactors (ARRs) in support of Advanced Fuel Cycle Initiative (AFCI). The work is a combined effort between Argonne National Laboratory (ANL) and Oak Ridge National Laboratory (ORNL) with ANL as the technical lead, as part of Advanced Structural Materials Program for AFCI Reactor Campaign. The report is the second deliverable in FY08 (M505011401) under the work package 'Advanced Materials Code Qualification'. The overall objective of the Advanced Materials Code Qualification project is to evaluate key requirements for the ASME Code qualification and the Nuclear Regulatory Commission (NRC) approval of structural materials in support of the design and licensing of the ARR. Advanced materials are a critical element in the development of sodium reactor technologies. Enhanced materials performance not only improves safety margins and provides design flexibility, but also is essential for the economics of future advanced sodium reactors. Code qualification and licensing of advanced materials are prominent needs for developing and implementing advanced sodium reactor technologies. Nuclear structural component design in the U.S. must comply with the ASME Boiler and Pressure Vessel Code Section III (Rules for Construction of Nuclear Facility Components) and the NRC grants the operational license. As the ARR will operate at higher temperatures than the current light water reactors (LWRs), the design of elevated-temperature components must comply with ASME Subsection NH (Class 1 Components in Elevated Temperature Service). However, the NRC has not approved the use of Subsection NH for reactor components, and this puts additional burdens on materials qualification of the ARR. 
In the past licensing review for the Clinch River Breeder Reactor Project (CRBRP) and the

  6. The future of 3D and video coding in mobile and the internet

    NASA Astrophysics Data System (ADS)

    Bivolarski, Lazar

    2013-09-01

    The success of the Internet has already changed our social and economic world and is still revolutionizing information exchange. The exponential increase in the amount and types of data exchanged on the Internet represents a significant challenge for the design of future architectures and solutions. This paper reviews the current status and trends in the design of solutions and research activities for the future Internet from the point of view of managing the growth in bandwidth requirements and in the complexity of the multimedia being created and shared. It outlines the challenges facing video coding and approaches to the design of standardized media formats and protocols, considering the expected convergence of multimedia formats and exchange interfaces. The rapid growth of connected mobile devices adds to the current and future challenges, in combination with the arrival, expected in the near future, of a multitude of connected devices. The new Internet technologies connecting the Internet of Things with wireless visual sensor networks and 3D virtual worlds require conceptually new approaches to media content handling, from acquisition to presentation, in the 3D Media Internet. Accounting for the properties of the entire transmission system and enabling real-time adaptation to context and content throughout the media processing path will be paramount in enabling the new media architectures as well as the new applications and services. The common video coding formats will need to be conceptually redesigned to allow for the implementation of the necessary 3D Media Internet features.

  7. Interactive Video Coding and Transmission over Heterogeneous Wired-to-Wireless IP Networks Using an Edge Proxy

    NASA Astrophysics Data System (ADS)

    Pei, Yong; Modestino, James W.

    2004-12-01

    Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradation due to packet loss and bit errors in the payload is quantitatively evaluated and the effects are assessed. We propose the use of a concatenated forward error correction (FEC) coding scheme employing Reed-Solomon (RS) codes and rate-compatible punctured convolutional (RCPC) codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC) approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection against the channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.
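
    The concatenation idea, an outer code catching the residual errors of an inner code matched to the channel, can be shown with toy stand-ins: a single parity bit in place of RS and a repetition code in place of RCPC. Real RS/RCPC codecs are far stronger; these helpers are purely illustrative:

    ```python
    def outer_encode(bits):
        """Toy outer code: append one even-parity bit (stand-in for RS)."""
        return bits + [sum(bits) % 2]

    def inner_encode(bits, r=3):
        """Toy inner code: r-fold repetition (stand-in for RCPC)."""
        return [b for b in bits for _ in range(r)]

    def inner_decode(coded, r=3):
        """Majority vote per r-bit group corrects isolated channel bit errors."""
        return [1 if sum(coded[i:i + r]) * 2 > r else 0
                for i in range(0, len(coded), r)]

    def outer_check(bits):
        """Outer parity detects a residual error the inner code missed."""
        return sum(bits) % 2 == 0
    ```

    The same division of labour holds in the paper's scheme: the inner RCPC code absorbs payload bit errors, while the outer RS code handles the packet-level losses.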

  8. Improved orthogonal frequency division multiplexing communications through advanced coding

    NASA Astrophysics Data System (ADS)

    Westra, Jeffrey; Patti, John

    2005-08-01

    Orthogonal Frequency Division Multiplexing (OFDM) is a communications technique that transmits a signal over multiple, evenly spaced, discrete frequency bands. OFDM offers some advantages over traditional, single-carrier modulation techniques, such as increased immunity to inter-symbol interference. For this reason, OFDM is an attractive candidate for sensor network applications; it has already been included in several standards, including Digital Audio Broadcast (DAB); digital television standards in Europe, Japan, and Australia; asymmetric digital subscriber line (ADSL); and wireless local area networks (WLAN), specifically IEEE 802.11a. Many of these applications currently make use of a standard convolutional code with Viterbi decoding to perform forward error correction (FEC). Replacing such convolutional codes with advanced coding techniques using iterative decoding, such as Turbo codes, can substantially improve the performance of the OFDM communications link. This paper demonstrates such improvements using the 802.11a wireless LAN standard.
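
    At its core, OFDM maps a block of subcarrier symbols to the time domain with an IFFT and prepends a cyclic prefix; the receiver drops the prefix and applies an FFT. A minimal NumPy sketch (the prefix length is an illustrative parameter, not a value from the standard):

    ```python
    import numpy as np

    def ofdm_modulate(symbols, cp_len=16):
        """One OFDM symbol: IFFT across subcarriers plus a cyclic prefix."""
        time = np.fft.ifft(symbols)
        return np.concatenate([time[-cp_len:], time])  # prepend cyclic prefix

    def ofdm_demodulate(samples, n_sub, cp_len=16):
        """Drop the cyclic prefix and FFT back to subcarrier symbols."""
        return np.fft.fft(samples[cp_len:cp_len + n_sub])
    ```

    The FEC discussed in the abstract (convolutional or Turbo coding) sits in front of this stage, protecting the bits that are mapped onto the subcarrier symbols.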

  9. Advances in Parallel Electromagnetic Codes for Accelerator Science and Development

    SciTech Connect

    Ko, Kwok; Candel, Arno; Ge, Lixin; Kabel, Andreas; Lee, Rich; Li, Zenghai; Ng, Cho; Rawat, Vineet; Schussman, Greg; Xiao, Liling; /SLAC

    2011-02-07

    Over a decade of concerted effort in code development for accelerator applications has resulted in a new set of electromagnetic codes which are based on higher-order finite elements for superior geometry fidelity and better solution accuracy. SLAC's ACE3P code suite is designed to harness the power of massively parallel computers to tackle large complex problems with the increased memory and solve them at greater speed. The US DOE supports the computational science R&D under the SciDAC project to improve the scalability of ACE3P, and provides the high performance computing resources needed for the applications. This paper summarizes the advances in the ACE3P set of codes, explains the capabilities of the modules, and presents results from selected applications covering a range of problems in accelerator science and development important to the Office of Science.

  10. H.264/AVC intra-only coding (iAVC) techniques for video over wireless networks

    NASA Astrophysics Data System (ADS)

    Yang, Ming; Trifas, Monica; Xiong, Guolun; Rogers, Joshua

    2009-02-01

    The requirement to transmit video data over unreliable wireless networks (with the possibility of packet loss) is anticipated for the foreseeable future. Significant compression ratios and error resilience are both needed for complex applications including tele-operated robotics, vehicle-mounted cameras, sensor networks, etc. Block-matching-based inter-frame coding techniques, including MPEG-4 and H.264/AVC, do not perform well in these scenarios due to error propagation between frames. Many wireless applications therefore use intra-only coding technologies such as Motion-JPEG, which exhibit better recovery from network data loss at the price of higher data rates. To address these issues, an intra-only coding scheme for H.264/AVC (iAVC) is proposed. In this approach, each frame is coded independently as an I-frame, and frame copy is applied to compensate for packet loss. This approach strikes a good balance between compression performance and error resilience: it achieves compression performance comparable to Motion-JPEG2000 (MJ2) with lower complexity, while accomplishing error resilience similar to Motion-JPEG (MJ). Since intra-frame prediction in iAVC is strictly confined within the range of a slice, memory usage is also extremely low. Low computational complexity and memory usage are crucial for mobile stations and devices in wireless networks.
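
    The frame-copy concealment can be sketched in a few lines: because every frame is an independent I-frame, a lost frame is simply replaced by the last correctly decoded one, and the error does not propagate into later frames. The decoder interface here is invented for the sketch:

    ```python
    def decode_stream(frames, decode_frame):
        """Intra-only decoding with frame-copy concealment. frames is a
        list of per-frame payloads, with None marking a lost packet;
        decode_frame is the (hypothetical) single-frame decoder."""
        out, last = [], None
        for data in frames:
            if data is None:          # packet loss: conceal by copying
                out.append(last)
            else:
                last = decode_frame(data)
                out.append(last)
        return out
    ```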

  11. Quantization and psychoacoustic model in audio coding in advanced audio coding

    NASA Astrophysics Data System (ADS)

    Brzuchalski, Grzegorz

    2011-10-01

    This paper presents a complete optimized architecture for Advanced Audio Coding (AAC) quantization with Huffman coding. Psychoacoustic model theory is then presented and a few algorithms are described: the standard Two Loop Search, its modifications, Genetic, Just Noticeable Level Difference, Trellis-Based, and its modification, the Cascaded Trellis-Based Algorithm.
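
    The AAC quantizer named above is the standard nonuniform (power 3/4) companding quantizer; a sketch, assuming the 0.4054 rounding offset commonly used in reference encoders:

    ```python
    def aac_quantize(x, scalefactor):
        """AAC-style nonuniform quantizer (power 3/4 companding)."""
        sign = -1 if x < 0 else 1
        return sign * int((abs(x) * 2.0 ** (-scalefactor / 4.0)) ** 0.75 + 0.4054)

    def aac_dequantize(q, scalefactor):
        """Inverse mapping applied by the decoder."""
        sign = -1 if q < 0 else 1
        return sign * abs(q) ** (4.0 / 3.0) * 2.0 ** (scalefactor / 4.0)
    ```

    The Two Loop Search iterates this mapping: an inner loop raises the scalefactor until the Huffman-coded bits fit the budget, and an outer loop adjusts per-band scalefactors until the quantization distortion stays below the psychoacoustic masking threshold.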

  12. Low-cost multi-hypothesis motion compensation for video coding

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Dong, Shengfu; Wang, Ronggang; Wang, Zhenyu; Ma, Siwei; Wang, Wenmin; Gao, Wen

    2014-02-01

    In conventional motion compensation, a prediction block is associated with only one motion vector for a P frame. Multi-hypothesis motion compensation (MHMC) was proposed to improve the prediction performance of conventional motion compensation. However, multiple motion vectors have to be searched and coded for MHMC. In this paper, we propose a new low-cost multi-hypothesis motion compensation (LMHMC) scheme. In LMHMC, a block can be predicted from multiple hypotheses with only one motion vector to be searched and coded into the bit-stream; the other motion vectors are predicted from the motion vectors of neighboring blocks, so LMHMC reduces both the encoding complexity and the bit-rate of MHMC. By adding LMHMC as an additional mode to the MPEG Internet Video Coding (IVC) platform, the B-D rate saving is up to 10%, and the average B-D rate saving is close to 5% in the Low Delay configuration. We also compare the performance of MHMC and LMHMC on the IVC platform; LMHMC improves on the performance of MHMC by about 2% on average.
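
    The LMHMC prediction step can be sketched as follows: one hypothesis uses the single searched-and-coded motion vector, the second uses a vector derived from neighbouring blocks. The component-wise median derivation here is an illustrative choice, not necessarily the paper's derivation rule:

    ```python
    import numpy as np

    def derive_mv(neighbour_mvs):
        """Derive the second hypothesis MV as the component-wise median
        of the neighbouring blocks' motion vectors (illustrative rule)."""
        arr = np.array(neighbour_mvs)
        return tuple(np.median(arr, axis=0).astype(int))

    def predict_block(ref0, ref1, pos, mv_coded, mv_derived, size=8):
        """Average two motion-compensated hypotheses; only mv_coded would
        be searched and transmitted, mv_derived is inferred at the decoder."""
        y, x = pos
        h0 = ref0[y + mv_coded[0]: y + mv_coded[0] + size,
                  x + mv_coded[1]: x + mv_coded[1] + size]
        h1 = ref1[y + mv_derived[0]: y + mv_derived[0] + size,
                  x + mv_derived[1]: x + mv_derived[1] + size]
        return ((h0.astype(np.int32) + h1.astype(np.int32) + 1) // 2).astype(np.uint8)
    ```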

  13. Smoothed reference inter-layer texture prediction for bit depth scalable video coding

    NASA Astrophysics Data System (ADS)

    Ma, Zhan; Luo, Jiancong; Yin, Peng; Gomila, Cristina; Wang, Yao

    2010-01-01

    We present a smoothed reference inter-layer texture prediction mode for bit depth scalability based on the Scalable Video Coding extension of the H.264/MPEG-4 AVC standard. In our approach, the base layer encodes an 8-bit signal that can be decoded by any existing H.264/MPEG-4 AVC decoder, while the enhancement layer encodes a higher bit depth signal (e.g., 10/12-bit) that requires a bit depth scalable decoder. The presented approach uses base layer motion vectors to conduct motion compensation on enhancement layer reference frames. The motion compensated block is then tone mapped and summed with the co-located base layer residue block before being inverse tone mapped to obtain a smoothed reference predictor. In addition to the original inter-/intra-layer prediction modes, the smoothed reference prediction mode enables inter-layer texture prediction for blocks whose co-located base layer block is inter-coded. The proposed method is designed to improve coding efficiency for sequences with non-linear tone mapping, in which case we obtain gains of up to 0.4 dB over the CGS-based BDS framework.
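
    The smoothed-reference pipeline (motion compensate on the enhancement layer reference, tone map down, add the base layer residue, inverse tone map back up) can be sketched with a simple bit-shift standing in for the tone-mapping operator; the real operator is content dependent and this shift is purely an assumption of the sketch:

    ```python
    import numpy as np

    def smoothed_reference(el_mc_block, bl_residue, shift=2):
        """Smoothed-reference predictor sketch for bit-depth scalability
        (10-bit enhancement layer, 8-bit base layer, bit-shift tone map)."""
        tone_mapped = el_mc_block >> shift           # 10-bit -> 8-bit domain
        refined = tone_mapped + bl_residue           # add base-layer residue
        return np.clip(refined, 0, 255).astype(np.int32) << shift  # back to 10-bit
    ```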

  14. The Role of Collaboration and Feedback in Advancing Student Learning in Media Literacy and Video Production

    ERIC Educational Resources Information Center

    Casinghino, Carl

    2015-01-01

    Teaching advanced video production is an art that requires great sensitivity to the process of providing feedback that helps students to learn and grow. Some students experience difficulty in developing narrative sequences or cause-and-effect strings of motion picture sequences. But when students learn to work collaboratively through the revision…

  15. Evaluating the Use of Captioned Video Materials in Advanced Foreign Language Learning.

    ERIC Educational Resources Information Center

    Garza, Thomas J.

    1991-01-01

    Reports on the results of research conducted to evaluate the use of captioning (on-screen target language subtitles) as a pedagogical aid to facilitate the use of authentic video materials in the foreign language classroom, especially in advanced or upper-level courses. A description of the research methodology, implementation, data analysis, and…

  16. ADVANCED ELECTRIC AND MAGNETIC MATERIAL MODELS FOR FDTD ELECTROMAGNETIC CODES

    SciTech Connect

    Poole, B R; Nelson, S D; Langdon, S

    2005-05-05

    The modeling of dielectric and magnetic materials in the time domain is required for pulsed power applications, pulsed induction accelerators, and advanced transmission lines. For example, most induction accelerator modules require the use of magnetic materials to provide adequate volt-seconds during the acceleration pulse. These models must include hysteresis and saturation to simulate the saturation wavefront in a multipulse environment. In high-voltage transmission line applications such as shock or soliton lines, the dielectric operates in a highly nonlinear regime, which requires nonlinear models. Simple 1-D models are developed for fast parameterization of transmission line structures. In the case of nonlinear dielectrics, a simple analytic model describing the permittivity in terms of the electric field is used in a 3-D finite difference time domain (FDTD) code. In the case of magnetic materials, both rate-independent and rate-dependent Hodgdon magnetic material models have been implemented in 3-D FDTD codes and 1-D codes.
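
    A field-dependent permittivity of the kind described can be dropped straight into a standard FDTD update loop. This 1-D, normalised-units sketch uses a hypothetical eps(E) = 1 + alpha*E^2 model purely for illustration; it is not the analytic model of the report:

    ```python
    import numpy as np

    def fdtd_1d_nonlinear(steps, n=200, courant=0.5, alpha=0.0):
        """1-D FDTD (normalised units) with a simple field-dependent
        permittivity, updated leapfrog-style on a Yee grid."""
        e = np.zeros(n)
        h = np.zeros(n - 1)
        for t in range(steps):
            h += courant * np.diff(e)                 # magnetic field update
            eps = 1.0 + alpha * e[1:-1] ** 2          # nonlinear permittivity
            e[1:-1] += courant / eps * np.diff(h)     # electric field update
            e[n // 2] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source
        return e
    ```

    Since eps >= 1 everywhere, the Courant factor of 0.5 keeps the scheme stable; the nonlinearity steepens the propagating pulse, the qualitative mechanism behind the shock-line behaviour mentioned above.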

  17. User's manual: Subsonic/supersonic advanced panel pilot code

    NASA Technical Reports Server (NTRS)

    Moran, J.; Tinoco, E. N.; Johnson, F. T.

    1978-01-01

    Sufficient instructions for running the subsonic/supersonic advanced panel pilot code were developed. This software was developed as a vehicle for numerical experimentation and should not be construed as a finished production program. The pilot code is based on a higher-order panel method using linearly varying source and quadratically varying doublet distributions for computing both linearized supersonic and subsonic flow over arbitrary wings and bodies. This user's manual contains complete input and output descriptions. A brief description of the method is given, as well as practical instructions for proper configuration modeling. Computed results are also included to demonstrate some of the capabilities of the pilot code. The computer program is written in FORTRAN IV for the SCOPE 3.4.4 operating system of the Ames CDC 7600 computer. The program uses an overlay structure and thirteen disk files, and it requires approximately 132000 (octal) central memory words.

  18. Analysis of prediction algorithms for residual compression in a lossy to lossless scalable video coding system based on HEVC

    NASA Astrophysics Data System (ADS)

    Heindel, Andreas; Wige, Eugen; Kaup, André

    2014-09-01

    Lossless image and video compression is required in many professional applications. However, lossless coding results in a high data rate, which leads to a long wait for the user when the channel capacity is limited. To overcome this problem, scalable lossless coding is an elegant solution. It provides a fast accessible preview by a lossy compressed base layer, which can be refined to a lossless output when the enhancement layer is received. Therefore, this paper presents a lossy to lossless scalable coding system where the enhancement layer is coded by means of intra prediction and entropy coding. Several algorithms are evaluated for the prediction step in this paper. It turned out that Sample-based Weighted Prediction is a reasonable choice for usual consumer video sequences and the Median Edge Detection algorithm is better suited for medical content from computed tomography. For both types of sequences the efficiency may be further improved by the much more complex Edge-Directed Prediction algorithm. In the best case, in total only about 2.7% additional data rate has to be invested for scalable coding compared to single-layer JPEG-LS compression for usual consumer video sequences. For the case of the medical sequences scalable coding is even more efficient than JPEG-LS compression for certain values of QP.
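
    The Median Edge Detection predictor evaluated above is the well-known MED predictor from LOCO-I / JPEG-LS and can be stated in a few lines; a, b, and c are the left, above, and above-left neighbours of the current pixel:

    ```python
    def med_predict(a, b, c):
        """Median Edge Detection predictor (as in LOCO-I / JPEG-LS)."""
        if c >= max(a, b):
            return min(a, b)          # edge detected above or to the left
        if c <= min(a, b):
            return max(a, b)
        return a + b - c              # smooth region: planar prediction
    ```

    The enhancement layer then entropy-codes the difference between each true pixel value and this prediction, which is small wherever the predictor tracks the local gradient.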

  19. Impact of packet losses in scalable 3D holoscopic video coding

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2014-05-01

    Holoscopic imaging became a prospective glassless 3D technology to provide more natural 3D viewing experiences to the end user. Additionally, holoscopic systems also allow new post-production degrees of freedom, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments in the user quality perception. Therefore, it is essential to deeply understand the impact of packet losses in terms of decoding video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a three-layer display scalable 3D holoscopic video coding architecture previously proposed, where each layer represents a different level of display scalability (i.e., L0 - 2D, L1 - stereo or multiview, and L2 - full 3D holoscopic). For this, a simple error concealment algorithm is used, which makes use of inter-layer redundancy between multiview and 3D holoscopic content and the inherent correlation of the 3D holoscopic content to estimate lost data. Furthermore, a study of the influence of 2D views generation parameters used in lower layers on the performance of the used error concealment algorithm is also presented.

  20. Delicate visual artifacts of advanced digital video processing algorithms

    NASA Astrophysics Data System (ADS)

    Nicolas, Marina M.; Lebowsky, Fritz

    2005-03-01

    With the advent of digital TV, sophisticated video processing algorithms have been developed to improve the rendering of motion or colors. However, the perceived subjective quality of these new systems sometimes conflicts with the objective, measurable improvement we expect to obtain. In this presentation, we show examples where algorithms should visually improve the skin tone rendering of decoded pictures under normal conditions, but surprisingly fail when the quality of the MPEG encoding drops below a just-noticeable threshold. In particular, we demonstrate that simple objective criteria used for the optimization, such as SAD, PSNR or histograms, sometimes fail, partly because they are defined on a global scale and ignore local characteristics of the picture content. We then integrate a simple human visual model to measure potential artifacts with regard to spatial and temporal variations of the objects' characteristics. Tuning some of the model's parameters allows correlating the perceived quality with compression metrics of various encoders. We show the evolution of our reference parameters with respect to the compression ratios. Finally, using the output of the model, we can control the parameters of the skin tone algorithm to reach an improvement in overall system quality.

  1. Remote Bridge Deflection Measurement Using an Advanced Video Deflectometer and Actively Illuminated LED Targets.

    PubMed

    Tian, Long; Pan, Bing

    2016-01-01

    An advanced video deflectometer using actively illuminated LED targets is proposed for remote, real-time measurement of bridge deflection. The system configuration, fundamental principles, and measuring procedures of the video deflectometer are first described. To address the challenge of remote and accurate deflection measurement of large engineering structures without being affected by ambient light, the novel idea of active imaging, which combines high-brightness monochromatic LED targets with coupled bandpass filter imaging, is introduced. Then, to examine the measurement accuracy of the proposed advanced video deflectometer in outdoor environments, vertical motions of an LED target with precisely-controlled translations were measured and compared with prescribed values. Finally, by tracking six LED targets mounted on the bridge, the developed video deflectometer was applied for field, remote, and multipoint deflection measurement of the Wuhan Yangtze River Bridge, one of the most prestigious and most publicized constructions in China, during its routine safety evaluation tests. Since the proposed video deflectometer using actively illuminated LED targets offers prominent merits of remote, contactless, real-time, and multipoint deflection measurement with strong robustness against ambient light changes, it has great potential in the routine safety evaluation of various bridges and other large-scale engineering structures. PMID:27563901

  2. ALOHA: an Advanced LOwer Hybrid Antenna coupling code

    NASA Astrophysics Data System (ADS)

    Hillairet, J.; Voyer, D.; Ekedahl, A.; Goniche, M.; Kazda, M.; Meneghini, O.; Milanesio, D.; Preynas, M.

    2010-12-01

    The Advanced LOwer Hybrid Antenna (ALOHA) code has been developed to improve the modelling of the coupling of lower hybrid (LH) waves from the antenna to a cold inhomogeneous plasma while remaining a fast tool. In contrast to the previous code, Slow Wave ANtenna (SWAN), which described only the interaction of the slow wave between the waveguides and the plasma in a 1D model, the equations are now solved in 2D, including the contribution of both the slow and fast waves, at a low computational cost. This approach is complemented either by a full-wave computation of the antenna that takes its detailed geometry into account or by a mode-matching code dedicated to multijunction modelling, which is convenient in preliminary design phases. Moreover, ALOHA can treat more realistic scrape-off layers in front of the antenna by using a two-layer electron density profile. The ALOHA code has been compared with experimental results from Tore Supra LH antennas of different geometries, as well as benchmarked against other LH coupling codes, with very good results. Once validated, ALOHA was used to support the design of the COMPASS and ITER LH antennas and has proven to be a fast and reliable tool for LH antenna design.

  3. Reliability of Pre-Service Physical Education Teachers' Coding of Teaching Videos Using Studiocode[R] Analysis Software

    ERIC Educational Resources Information Center

    Prusak, Keven; Dye, Brigham; Graham, Charles; Graser, Susan

    2010-01-01

    This study examines the coding reliability and accuracy of pre-service teachers in a teaching methods class using digital video (DV)-based teaching episodes and Studiocode analysis software. Student self-analysis of DV footage may offer a high tech solution to common shortfalls of traditional systematic observation and reflection practices by…

  4. Beam Optics Analysis - An Advanced 3D Trajectory Code

    SciTech Connect

    Ives, R. Lawrence; Bui, Thuc; Vogler, William; Neilson, Jeff; Read, Mike; Shephard, Mark; Bauer, Andrew; Datta, Dibyendu; Beal, Mark

    2006-01-03

    Calabazas Creek Research, Inc. has completed initial development of an advanced 3D program for modeling electron trajectories in electromagnetic fields. The code is being used to design complex guns and collectors. Beam Optics Analysis (BOA) is a fully relativistic, charged-particle code using adaptive finite element meshing. Geometrical input is imported from CAD programs generating ACIS-formatted files. Parametric data is entered through an intuitive graphical user interface (GUI), which also provides control of convergence, accuracy, and post-processing. The program includes a magnetic field solver, and magnetic information can be imported from Maxwell 2D/3D and other programs. The program supports thermionic emission and injected beams. Secondary electron emission is also supported, including multiple generations. Work on field emission is in progress, as is the implementation of computer optimization of both the geometry and the operating parameters. The principal features of the program and its capabilities are presented.

  5. Beam Optics Analysis — An Advanced 3D Trajectory Code

    NASA Astrophysics Data System (ADS)

    Ives, R. Lawrence; Bui, Thuc; Vogler, William; Neilson, Jeff; Read, Mike; Shephard, Mark; Bauer, Andrew; Datta, Dibyendu; Beal, Mark

    2006-01-01

    Calabazas Creek Research, Inc. has completed initial development of an advanced 3D program for modeling electron trajectories in electromagnetic fields. The code is being used to design complex guns and collectors. Beam Optics Analysis (BOA) is a fully relativistic, charged-particle code using adaptive finite element meshing. Geometrical input is imported from CAD programs generating ACIS-formatted files. Parametric data is entered through an intuitive graphical user interface (GUI), which also provides control of convergence, accuracy, and post-processing. The program includes a magnetic field solver, and magnetic information can be imported from Maxwell 2D/3D and other programs. The program supports thermionic emission and injected beams. Secondary electron emission is also supported, including multiple generations. Work on field emission is in progress, as is the implementation of computer optimization of both the geometry and the operating parameters. The principal features of the program and its capabilities are presented.

  6. Advancement of liquefaction assessment in Chinese building codes

    NASA Astrophysics Data System (ADS)

    Sun, H.; Liu, F.; Jiang, M.

    2015-09-01

    China has suffered extensive liquefaction hazards in destructive earthquakes. The post-earthquake reconnaissance effort in the country has substantially advanced a methodology of liquefaction assessment distinct from that of other countries. This paper reviews the evolution of the specifications regarding liquefaction assessment in the seismic design building code of mainland China, which first appeared in 1974, took shape in 1989, and received major amendments in 2001 and 2010 as a result of accumulated knowledge of the liquefaction phenomenon. The current version of the code requires a detailed assessment of liquefaction based on in situ test results if the liquefaction concern cannot be eliminated by a preliminary assessment based on descriptive site characterization information. In addition, a liquefaction index is evaluated to quantify liquefaction severity and to choose the most appropriate engineering measures for liquefaction mitigation at the site under consideration.

  7. Implementation of scalable video coding deblocking filter from high-level SystemC description

    NASA Astrophysics Data System (ADS)

    Carballo, Pedro P.; Espino, Omar; Neris, Romén.; Hernández-Fernández, Pedro; Szydzik, Tomasz M.; Núñez, Antonio

    2013-05-01

    This paper describes key concepts in the design and implementation of a deblocking filter (DF) for an H.264/SVC video decoder. The DF supports QCIF and CIF video formats with temporal and spatial scalability. The design flow starts from a SystemC functional model, which has been refined to an RTL microarchitecture using a high-level synthesis methodology. The process is guided by performance measurements (latency, cycle time, power, resource utilization) with the objective of assuring the quality of results of the final system. The functional model of the DF is created incrementally from the AVC DF model, using the OpenSVC source code as reference. The design flow continues with logic synthesis and implementation on the FPGA using various strategies; the final implementation is chosen among those that meet the timing constraints. The DF is capable of running at 100 MHz, and macroblocks are processed in 6,500 clock cycles, for a throughput of 130 fps in QCIF format and 37 fps in CIF format. The proposed architecture for the complete H.264/SVC decoder is composed of an OMAP 3530 SoC (ARM Cortex-A8 GPP + DSP) and a Virtex-5 FPGA acting as a coprocessor for the DF implementation. The DF is connected to the OMAP SoC via the GPMC interface. A validation platform has been developed using the embedded PowerPC processor in the FPGA, composing an SoC that integrates frame generation and visualization on a TFT screen. The FPGA implements both the DF core and a GPMC slave core; both cores are connected to the embedded PowerPC440 processor via LocalLink interfaces. The FPGA also contains a local memory capable of storing the information necessary to filter a complete frame as well as a decoded picture frame. The complete system is implemented on a Virtex-5 FX70T device.

  8. Fast Mode Decision in the HEVC Video Coding Standard by Exploiting Region with Dominated Motion and Saliency Features

    PubMed Central

    Podder, Pallab Kanti; Paul, Manoranjan; Murshed, Manzur

    2016-01-01

    The emerging High Efficiency Video Coding (HEVC) standard introduces a number of innovative and powerful coding tools to achieve better compression efficiency than its predecessor H.264. However, the encoding time complexity has also increased several-fold, which is not suitable for real-time video coding applications. To address this limitation, this paper employs a novel coding strategy to reduce the time complexity of the HEVC encoder by efficient selection of appropriate block-partitioning modes based on human visual features (HVF). The HVF in the proposed technique comprise a saliency feature based on human visual attention modelling and motion features based on phase correlation. The features are combined through a fusion process by developing a content-based adaptive weighted cost function to determine the region-with-dominated-motion/saliency (RDMS) based binary pattern for the current block. The generated binary pattern is then compared with a codebook of predefined binary pattern templates aligned to the HEVC-recommended block-partitioning to estimate a subset of inter-prediction modes. Without exhaustive exploration of all modes available in the HEVC standard, only the selected subset of modes is motion-estimated and motion-compensated for a particular coding unit. The experimental evaluation reveals that the proposed technique notably reduces the average computational time of the latest HEVC reference encoder by 34% while providing similar rate-distortion (RD) performance over a wide range of video sequences. PMID:26963813
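
    The phase-correlation motion feature mentioned above can be sketched as follows; this is the generic textbook formulation in Python, not the authors' exact implementation. The normalised cross-power spectrum of two co-located blocks has an inverse FFT that peaks at their relative displacement, and the peak height indicates how well a pure translation explains the block.

```python
import numpy as np

def phase_correlation(block_prev, block_cur):
    """Estimate the dominant translation from block_prev to block_cur.

    Returns (dy, dx, peak): the signed shift in pixels and the height of
    the correlation peak (close to 1.0 for a clean translation, lower
    for complex motion) -- the kind of per-block motion evidence a fast
    mode decision can act on.
    """
    F1 = np.fft.fft2(block_prev)
    F2 = np.fft.fft2(block_cur)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12            # keep phase only
    surface = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(surface), surface.shape)
    h, w = surface.shape
    if dy > h // 2: dy -= h                   # wrap to signed shifts
    if dx > w // 2: dx -= w
    return int(dy), int(dx), float(surface.max())
```

A block with a high, isolated peak is well explained by simple motion and needs few partitioning modes; a flat correlation surface suggests complex motion that warrants finer partitioning.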

  9. On the efficiency of image completion methods for intra prediction in video coding with large block structures

    NASA Astrophysics Data System (ADS)

    Doshkov, Dimitar; Jottrand, Oscar; Wiegand, Thomas; Ndjiki-Nya, Patrick

    2013-02-01

    Intra prediction is a fundamental tool in video coding with a hybrid block-based architecture. Recent investigations have shown that one of the most beneficial elements for higher compression performance in high-resolution videos is the incorporation of larger block structures. Thus, in this work, we investigate the performance of novel intra prediction modes based on different image completion techniques in a new video coding scheme with large block structures. Image completion methods exploit the fact that high-frequency image regions yield high coding costs when classical H.264/AVC prediction modes are used. This problem is tackled by investigating the incorporation of several intra predictors based on the concept of the Laplace partial differential equation (PDE), least-squares (LS) based linear prediction, and the autoregressive model. A major aspect of this article is the quantitative evaluation of the coding performance, i.e. coding efficiency. Experimental results show significant improvements in compression (up to 7.41%) by integrating the LS-based linear intra prediction.
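
    As an illustration of LS-based linear intra prediction, the sketch below re-trains weights for four causal neighbours per pixel by least squares over a causal training window, then applies them to the current pixel's neighbours. This is a simplified, assumed formulation for interior pixels only, not the paper's exact predictor; the function name and window size are ours.

```python
import numpy as np

OFFSETS = [(0, -1), (-1, -1), (-1, 0), (-1, 1)]   # causal neighbours

def ls_predict_pixel(img, y, x, win=6):
    """Least-squares linear intra prediction for interior pixel (y, x).

    Gathers (neighbour-vector, value) pairs from already-decoded pixels
    in a causal window, solves a small least-squares system for the
    neighbour weights, and predicts img[y, x] from its own neighbours.
    The decoder can repeat the same training, so no weights are sent.
    """
    rows, targets = [], []
    for ty in range(max(1, y - win), y + 1):
        for tx in range(max(1, x - win), min(img.shape[1] - 1, x + win)):
            if ty == y and tx >= x:
                break                              # stay strictly causal
            rows.append([img[ty + dy, tx + dx] for dy, dx in OFFSETS])
            targets.append(img[ty, tx])
    A = np.asarray(rows, dtype=float)
    b = np.asarray(targets, dtype=float)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)      # trained weights
    ctx = np.array([img[y + dy, x + dx] for dy, dx in OFFSETS], dtype=float)
    return float(ctx @ w)
```

On locally linear content (e.g. a luminance ramp) the trained weights reproduce the pixel exactly, which is why such predictors handle gradients that fixed directional modes miss.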

  10. Fast Huffman encoding algorithms in MPEG-4 advanced audio coding

    NASA Astrophysics Data System (ADS)

    Brzuchalski, Grzegorz

    2014-11-01

    This paper addresses the optimisation problem of Huffman encoding in the MPEG-4 Advanced Audio Coding standard. First, the Huffman encoding problem and the need to encode two side-info parameters, the scale factor and the Huffman codebook, are presented. Next, the Two Loop Search, Maximum Noise Mask Ratio and Trellis Based bit allocation algorithms are briefly described. Then, optimisations of Huffman encoding are shown. The new methods try to check and change scale factor bands as little as possible when estimating the bitrate cost or its change. Finally, the complexity of the old and new methods is calculated and compared, and the measured encoding time is given.
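
    Estimating the bit cost of a symbol set under a Huffman code, as the optimisations above require, amounts to summing code length times symbol frequency; the generic Python sketch below builds the code lengths with a heap (it does not reproduce the fixed AAC codebook tables, and the function names are ours).

```python
import heapq
from collections import Counter

def huffman_code_lengths(symbols):
    """Code length per symbol of a Huffman code built over `symbols`."""
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate: a single symbol
        return {next(iter(freq)): 1}
    # heap entries: (weight, tiebreak, {symbol: current depth})
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)      # merge two lightest subtrees
        w2, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def bit_cost(symbols):
    """Total bits to Huffman-code the sequence: sum(length * frequency).

    This is the quantity an encoder evaluates per scale factor band when
    comparing codebook choices, without emitting any actual bits.
    """
    lengths = huffman_code_lengths(symbols)
    return sum(lengths[s] * n for s, n in Counter(symbols).items())
```

Re-evaluating only this cost for the bands a candidate change touches, rather than re-coding everything, is exactly the kind of saving the abstract's methods pursue.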

  11. Advanced video extensometer for non-contact, real-time, high-accuracy strain measurement.

    PubMed

    Pan, Bing; Tian, Long

    2016-08-22

    We developed an advanced video extensometer for non-contact, real-time, high-accuracy strain measurement in material testing. In the established video extensometer, a near-perfect and ultra-stable imaging system, combining the idea of active imaging with a high-quality bilateral telecentric lens, is constructed to acquire high-fidelity video images of the test sample surface that are invariant to ambient lighting changes and to small out-of-plane motions occurring between the object surface and the image plane. In addition, an efficient and accurate inverse compositional Gauss-Newton algorithm, incorporating a temporal initial-guess transfer scheme and a high-accuracy interpolation method, is employed to achieve real-time, high-accuracy displacement tracking with negligible bias error. Tensile tests of an aluminum sample and a carbon fiber filament sample were performed to demonstrate the efficiency, repeatability and accuracy of the developed video extensometer. The results indicate that longitudinal and transversal strains can be estimated and plotted at a rate of 117 fps with a maximum strain error of less than 30 microstrains. PMID:27557188
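
    Once the targets are tracked, the strain computation itself is elementary; a minimal sketch assuming two gauge targets tracked in pixel coordinates along the loading axis (the function name is ours, and real systems apply subpixel tracking and calibration first).

```python
def longitudinal_strain(y_top, y_bot, y_top0, y_bot0):
    """Engineering strain from tracked vertical target positions.

    (y_top0, y_bot0) are the initial positions of the two gauge targets
    and (y_top, y_bot) their current positions, all in pixels; because
    strain is a length ratio, the pixel-to-mm scale cancels out.
    """
    gauge0 = y_bot0 - y_top0                 # initial gauge length
    return ((y_bot - y_top) - gauge0) / gauge0
```

For example, a gauge length of 500 px stretching to 500.25 px gives a strain of 5e-4, i.e. 500 microstrain, which puts the paper's 30-microstrain error bound in context.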

  12. The Next Generation Advanced Video Guidance Sensor: Flight Heritage and Current Development

    NASA Astrophysics Data System (ADS)

    Howard, Richard T.; Bryan, Thomas C.

    2009-03-01

    The Next Generation Advanced Video Guidance Sensor (NGAVGS) is the latest in a line of sensors that have flown four times in the last 10 years. The NGAVGS has been under development for the last two years as a long-range proximity operations and docking sensor for use in an Automated Rendezvous and Docking (AR&D) system. The first autonomous rendezvous and docking in the history of the U.S. Space Program was successfully accomplished by Orbital Express, using the Advanced Video Guidance Sensor (AVGS) as the primary docking sensor. That flight proved that the United States now has a mature and flight proven sensor technology for supporting Crew Exploration Vehicles (CEV) and Commercial Orbital Transport Systems (COTS) Automated Rendezvous and Docking (AR&D). NASA video sensors have worked well in the past: the AVGS used on the Demonstration of Autonomous Rendezvous Technology (DART) mission operated successfully in "spot mode" out to 2 km, and the first generation rendezvous and docking sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on Space Shuttle flights in 1997 and 1998.

  13. Hole-filling map-based coding unit size decision for dependent views in three-dimensional high-efficiency video coding

    NASA Astrophysics Data System (ADS)

    Guo, Lilin; Zhou, Lunan; Tian, Xiang; Chen, Yaowu

    2016-05-01

    Three-dimensional high-efficiency video coding (3-D-HEVC) is an emerging compression standard for multiview video-plus-depth data. In addition to the quad-tree coding structure inherited from HEVC, several tools are integrated that significantly improve the coding efficiency but also result in remarkably high computational complexity. We propose a fast coding unit (CU) size decision algorithm for both the depth and texture components in dependent views, in which hole-filling maps created through view synthesis are utilized. First, after the base view is coded, it is warped onto each dependent view via depth-image-based rendering, during which hole-filling maps are generated. Then, for depth in dependent views, CU splitting can be terminated early by considering the disocclusion information from the hole-filling maps; for texture in dependent views, combining the disocclusion information with the inter-view correlations also accelerates the CU partitioning process. Experimental results show that the proposed algorithm achieves on average a 54.3% time reduction, with a negligible Bjøntegaard delta bitrate increase of 0.15% on synthesized views and of 0.05% on all the coded plus synthesized views, compared with the original encoding scheme in a 3-D-HEVC test model.

  14. Real-time high-resolution downsampling algorithm on many-core processor for spatially scalable video coding

    NASA Astrophysics Data System (ADS)

    Buhari, Adamu Muhammad; Ling, Huo-Chong; Baskaran, Vishnu Monn; Wong, KokSheik

    2015-01-01

    The progression toward spatially scalable video coding (SVC) solutions for ubiquitous endpoint systems introduces challenges to sustain real-time frame rates in downsampling high-resolution videos into multiple layers. In addressing these challenges, we put forward a hardware accelerated downsampling algorithm on a parallel computing platform. First, we investigate the principal architecture of a serial downsampling algorithm in the Joint-Scalable-Video-Model reference software to identify the performance limitations for spatially SVC. Then, a parallel multicore-based downsampling algorithm is studied as a benchmark. Experimental results for this algorithm using an 8-core processor exhibit performance speedup of 5.25× against the serial algorithm in downsampling a quantum extended graphics array at 1536p video resolution into three lower resolution layers (i.e., Full-HD at 1080p, HD at 720p, and Quarter-HD at 540p). However, the achieved speedup here does not translate into the minimum required frame rate of 15 frames per second (fps) for real-time video processing. To improve the speedup, a many-core based downsampling algorithm using the compute unified device architecture parallel computing platform is proposed. The proposed algorithm increases the performance speedup to 26.14× against the serial algorithm. Crucially, the proposed algorithm exceeds the target frame rate of 15 fps, which in turn is advantageous to the overall performance of the video encoding process.
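
    The per-block independence that makes downsampling parallelize so well can be seen in a simple 2:1 box-average sketch: every output pixel depends on a disjoint 2x2 input block, so blocks map directly onto independent CUDA threads. Note this is only a stand-in; the Joint-Scalable-Video-Model reference software uses a longer polyphase filter.

```python
import numpy as np

def downsample_half(frame):
    """2:1 dyadic downsampling of a grayscale frame by 2x2 box averaging.

    Each output pixel is the mean of one 2x2 input block; odd trailing
    rows/columns are dropped. The four strided slices below touch each
    input pixel exactly once, and every output pixel is independent of
    the others -- the property a many-core implementation exploits.
    """
    h, w = frame.shape[:2]
    f = frame[:h - h % 2, :w - w % 2].astype(np.float32)
    return (f[0::2, 0::2] + f[0::2, 1::2] +
            f[1::2, 0::2] + f[1::2, 1::2]) / 4.0
```

Applying the function repeatedly yields the spatial layer stack (e.g. 1536p -> 768p -> 384p); the resolutions named in the abstract (1080p, 720p, 540p) additionally require non-dyadic resampling.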

  15. Real-time compressed video ultrasound using the Advanced Communications Technology Satellite

    NASA Astrophysics Data System (ADS)

    Stewart, Brent K.; Carter, Stephen J.; Cook, Jay F.; Abbe, Brian S.; Pinck, Deborah; Rowberg, Alan H.

    1996-05-01

    The authors have an in-kind grant from NASA to investigate the application of the Advanced Communications Technology Satellite (ACTS) to teleradiology and telemedicine using the ACTS Mobile Terminal (AMT) uplink developed by the Jet Propulsion Laboratory. We have recently completed three series of experiments with the ACTS/AMT. Although these experiments were multifaceted, the primary objective was the evaluation of transmitting real-time compressed ultrasound video imagery over the ACTS/AMT satellite link, a primary focus of the authors' current ARPA Advanced Biomedical Technology contract. These experiments have demonstrated that real-time compressed ultrasound video imagery can be transmitted over links with the bandwidth of multiple ISDN lines with sufficient temporal, contrast and spatial resolution for clinical diagnosis of multiple disease and pathology states, providing subspecialty consultation and education at a distance.

  16. X-Ray Calibration Facility/Advanced Video Guidance Sensor Test

    NASA Technical Reports Server (NTRS)

    Johnston, N. A. S.; Howard, R. T.; Watson, D. W.

    2004-01-01

    The Advanced Video Guidance Sensor was tested in the X-Ray Calibration Facility at Marshall Space Flight Center to establish its performance in vacuum. Two sensors were tested, and a timeline for each is presented. The sensor and the test facility are discussed briefly. A new test stand was also developed. A table establishing sensor bias and spot-size growth for several ranges is presented, along with testing anomalies.

  17. Video decision aids to assist with advance care planning: a systematic review and meta-analysis

    PubMed Central

    Jain, Ashu; Corriveau, Sophie; Quinn, Kathleen; Gardhouse, Amanda; Vegas, Daniel Brandt; You, John J

    2015-01-01

    Objective Advance care planning (ACP) can result in end-of-life care that is more congruent with patients’ values and preferences. There is increasing interest in video decision aids to assist with ACP. The objective of this study was to evaluate the impact of video decision aids on patients’ preferences regarding life-sustaining treatments (primary outcome). Design Systematic review and meta-analysis of randomised controlled trials. Data sources MEDLINE, EMBASE, PsycInfo, CINAHL, AMED and CENTRAL, between 1980 and February 2014, and correspondence with authors. Eligibility criteria for selecting studies Randomised controlled trials of adult patients that compared a video decision aid to a non-video-based intervention to assist with choices about use of life-sustaining treatments and reported at least one ACP-related outcome. Data extraction Reviewers worked independently and in pairs to screen potentially eligible articles, and to extract data regarding risk of bias, population, intervention, comparator and outcomes. Reviewers assessed quality of evidence (confidence in effect estimates) for each outcome using the Grading of Recommendations Assessment, Development and Evaluation framework. Results 10 trials enrolling 2220 patients were included. Low-quality evidence suggests that patients who use a video decision aid are less likely to indicate a preference for cardiopulmonary resuscitation (pooled risk ratio, 0.50 (95% CI 0.27 to 0.95); I2=65%). Moderate-quality evidence suggests that video decision aids result in greater knowledge related to ACP (standardised mean difference, 0.58 (95% CI 0.38 to 0.77); I2=0%). No study reported on the congruence of end-of-life treatments with patients’ wishes. No study evaluated the effect of video decision aids when integrated into clinical care. Conclusions Video decision aids may improve some ACP-related outcomes. Before recommending their use in clinical practice, more evidence is needed to confirm these findings and

  18. Automatic differentiation of advanced CFD codes for multidisciplinary design

    NASA Technical Reports Server (NTRS)

    Bischof, C.; Corliss, G.; Green, L.; Griewank, A.; Haigler, K.; Newman, P.

    1992-01-01

    Automated multidisciplinary design of aircraft and other flight vehicles requires the optimization of complex performance objectives with respect to a number of design parameters and constraints. The effect of these independent design variables on the system performance criteria can be quantified in terms of sensitivity derivatives which must be calculated and propagated by the individual discipline simulation codes. Typical advanced CFD analysis codes do not provide such derivatives as part of a flow solution; these derivatives are very expensive to obtain by divided (finite) differences from perturbed solutions. It is shown that sensitivity derivatives can be obtained accurately and efficiently using the ADIFOR source translator for automatic differentiation. In particular, it is demonstrated that the 3-D, thin-layer Navier-Stokes, multigrid flow solver called TLNS3D is amenable to automatic differentiation in the forward mode even with its implicit iterative solution algorithm and complex turbulence modeling. It is significant that by using computational differentiation, consistent discrete nongeometric sensitivity derivatives have been obtained from an aerodynamic 3-D CFD code in a relatively short time, e.g., O(man-week) not O(man-year).

  19. Advanced coding techniques for few mode transmission systems.

    PubMed

    Okonkwo, Chigo; van Uden, Roy; Chen, Haoshuo; de Waardt, Huug; Koonen, Ton

    2015-01-26

    We experimentally verify the advantage of employing advanced coding schemes such as space-time coding and four-dimensional modulation formats to enhance the transmission performance of a 3-mode transmission system. The performance gains of space-time block codes for extending the optical signal-to-noise ratio (OSNR) tolerance in multiple-input multiple-output optical coherent spatial-division-multiplexing transmission systems are evaluated with respect to single-mode transmission performance. By exploiting the spatial diversity that few-mode fibers offer, significant OSNR gains of 3.2, 4.1, 4.9, and 6.8 dB at the hard-decision forward error correction limit are demonstrated for DP-QPSK and DP-8, -16 and -32 QAM, respectively, relative to single-mode fiber back-to-back performance. Furthermore, by employing 4D constellations, 6 × 28 Gbaud 128-set-partitioned quadrature amplitude modulation is shown to outperform conventional 8 QAM transmission while carrying an additional 0.5 bit/symbol. PMID:25835899

  20. Automatic differentiation of advanced CFD codes for multidisciplinary design

    SciTech Connect

    Bischof, C.; Corliss, G.; Griewank, A.; Green, L.; Haigler, K.; Newman, P. (Langley Research Center)

    1992-01-01

    Automated multidisciplinary design of aircraft and other flight vehicles requires the optimization of complex performance objectives with respect to a number of design parameters and constraints. The effect of these independent design variables on the system performance criteria can be quantified in terms of sensitivity derivatives which must be calculated and propagated by the individual discipline simulation codes. Typical advanced CFD analysis codes do not provide such derivatives as part of a flow solution; these derivatives are very expensive to obtain by divided (finite) differences from perturbed solutions. It is shown here that sensitivity derivatives can be obtained accurately and efficiently using the ADIFOR source translator for automatic differentiation. In particular, it is demonstrated that the 3-D, thin-layer Navier-Stokes, multigrid flow solver called TLNS3D is amenable to automatic differentiation in the forward mode even with its implicit iterative solution algorithm and complex turbulence modeling. It is significant that using computational differentiation, consistent discrete nongeometric sensitivity derivatives have been obtained from an aerodynamic 3-D CFD code in a relatively short time, e.g. O(man-week) not O(man-year).

  1. Automatic differentiation of advanced CFD codes for multidisciplinary design

    SciTech Connect

    Bischof, C.; Corliss, G.; Griewank, A.; Green, L.; Haigler, K.; Newman, P.

    1992-12-31

    Automated multidisciplinary design of aircraft and other flight vehicles requires the optimization of complex performance objectives with respect to a number of design parameters and constraints. The effect of these independent design variables on the system performance criteria can be quantified in terms of sensitivity derivatives which must be calculated and propagated by the individual discipline simulation codes. Typical advanced CFD analysis codes do not provide such derivatives as part of a flow solution; these derivatives are very expensive to obtain by divided (finite) differences from perturbed solutions. It is shown here that sensitivity derivatives can be obtained accurately and efficiently using the ADIFOR source translator for automatic differentiation. In particular, it is demonstrated that the 3-D, thin-layer Navier-Stokes, multigrid flow solver called TLNS3D is amenable to automatic differentiation in the forward mode even with its implicit iterative solution algorithm and complex turbulence modeling. It is significant that using computational differentiation, consistent discrete nongeometric sensitivity derivatives have been obtained from an aerodynamic 3-D CFD code in a relatively short time, e.g. O(man-week) not O(man-year).

  2. Using game theory for perceptual tuned rate control algorithm in video coding

    NASA Astrophysics Data System (ADS)

    Luo, Jiancong; Ahmad, Ishfaq

    2005-03-01

    This paper proposes a game-theoretic rate control technique for video compression. Using a cooperative gaming approach, which has been utilized in several branches of the natural and social sciences because of its enormous potential for solving constrained optimization problems, we propose a dual-level scheme to optimize the perceptual quality while guaranteeing "fairness" in bit allocation among macroblocks. At the frame level, the algorithm allocates target bits to frames based on their coding complexity. At the macroblock level, the algorithm distributes bits to macroblocks by defining a bargaining game. Macroblocks play cooperatively to compete for shares of resources (bits) to optimize their quantization scales while considering the perceptual properties of the human visual system. Since the whole frame is an entity perceived by viewers, macroblocks compete cooperatively under the global objective of achieving the best quality within the given bit constraint. The major advantage of the proposed approach is that the cooperative game leads to an optimal and fair bit allocation strategy based on the Nash Bargaining Solution. Another advantage is that it allows multi-objective optimization with multiple decision makers (macroblocks). The simulation results testify to the algorithm's ability to achieve an accurate bit rate with good perceptual quality, and to maintain a stable buffer level.
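
    With linear utilities, the Nash Bargaining Solution underlying such a scheme has a closed form; the sketch below is a generic weighted-NBS allocation under assumed linear utilities, not the paper's exact macroblock game (the function name and parameters are ours).

```python
def nbs_bit_allocation(budget, disagreement, weights):
    """Weighted Nash Bargaining Solution for a one-shot bit allocation.

    Each player i (e.g. a macroblock) has a disagreement point
    disagreement[i] (the minimum bits it would accept) and a bargaining
    power weights[i] (e.g. its coding complexity). With linear utilities
    u_i(b_i) = b_i - d_i, maximising prod((b_i - d_i) ** w_i) subject to
    sum(b_i) = budget has the closed form below: every player receives
    its disagreement point plus a weight-proportional share of the
    remaining surplus.
    """
    surplus = budget - sum(disagreement)
    if surplus < 0:
        raise ValueError("budget below the sum of disagreement points")
    total_w = sum(weights)
    return [d + w * surplus / total_w
            for d, w in zip(disagreement, weights)]
```

The "fairness" claim in the abstract corresponds to the NBS axioms (Pareto optimality, symmetry, independence of irrelevant alternatives), which this proportional-surplus form satisfies for linear utilities.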

  3. Stereoscopic Visual Attention-Based Regional Bit Allocation Optimization for Multiview Video Coding

    NASA Astrophysics Data System (ADS)

    Zhang, Yun; Jiang, Gangyi; Yu, Mei; Chen, Ken; Dai, Qionghai

    2010-12-01

    We propose Stereoscopic Visual Attention- (SVA-) based regional bit allocation optimization for Multiview Video Coding (MVC) by exploiting the visual redundancies of human perception. We propose a novel SVA model, in which multiple perceptual stimuli, including depth, motion, intensity, color, and orientation contrast, are utilized to simulate the visual attention mechanisms of the human visual system with stereoscopic perception. Then, a semantic region-of-interest (ROI) is extracted based on the saliency maps of the SVA. Both objective and subjective evaluations of the extracted ROIs indicate that the proposed SVA-model-based ROI extraction scheme outperforms schemes using only spatial and/or temporal visual attention clues. Finally, using the extracted SVA-based ROIs, a regional bit allocation optimization scheme is presented that allocates more bits to SVA-based ROIs for high image quality and fewer bits to background regions for efficient compression. Experimental results on MVC show that the proposed regional bit allocation algorithm can achieve over [InlineEquation not available: see fulltext.]% bit-rate saving while maintaining the subjective image quality. Meanwhile, the image quality of the ROIs is improved by [InlineEquation not available: see fulltext.] dB at the cost of perceptually insignificant quality degradation of the background.
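    The core idea of spending more bits on salient regions can be sketched as a simple proportional allocation (illustrative only, with hypothetical weights and function name; the paper's scheme operates inside an MVC rate controller rather than as a one-shot split):

```python
# Sketch of saliency-weighted regional bit allocation: regions receive
# bits in proportion to their saliency weight, so ROIs get more bits
# than background at the same total rate.

def allocate_bits(total_bits, saliency):
    """saliency: per-region weights from a visual-attention model."""
    w = sum(saliency)
    return [total_bits * s / w for s in saliency]

# Three regions: a highly salient ROI, a mildly salient region, background.
bits = allocate_bits(12000, [6.0, 3.0, 1.0])
print(bits)  # [7200.0, 3600.0, 1200.0]
```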

  4. Bitstream decoding processor for fast entropy decoding of variable length coding-based multiformat videos

    NASA Astrophysics Data System (ADS)

    Jo, Hyunho; Sim, Donggyu

    2014-06-01

    We present a bitstream decoding processor for entropy decoding of variable length coding-based multiformat videos. Since most of the computational complexity of entropy decoders comes from bitstream accesses and the table look-up process, the developed bitstream processing unit (BsPU) has several designated instructions to access bitstreams and to minimize branch operations in the table look-up process. In addition, the instruction for bitstream access has the capability to remove emulation prevention bytes (EPBs) of H.264/AVC without initial delay, repeated memory accesses, or an additional buffer. Experimental results show that the proposed method for EPB removal achieves a speed-up of 1.23 times compared to the conventional EPB removal method. In addition, the BsPU achieves speed-ups of 5.6 and 3.5 times in entropy decoding of H.264/AVC and MPEG-4 Visual bitstreams, respectively, compared to an existing processor without designated instructions and a new table mapping algorithm. The BsPU is implemented on a Xilinx Virtex5 LX330 field-programmable gate array. MPEG-4 Visual (ASP, Level 5) and H.264/AVC (Main Profile, Level 4) bitstreams are decoded in real time by the developed BsPU at a core clock speed under 250 MHz.
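    The EPB removal the designated instruction performs follows the H.264/AVC byte-stream escaping rule: the encoder inserts 0x03 after any 0x00 0x00 pair that would otherwise mimic a start code, and the decoder strips it out again. A straightforward software sketch of the decoder side (the BsPU does this on the fly in hardware; the function name is ours):

```python
# Software emulation prevention byte (EPB) removal for H.264/AVC NAL
# units: drop the 0x03 that follows a 0x00 0x00 pair whenever the next
# byte is 0x00..0x03, per the Annex B escaping rule.

def remove_epb(nal):
    out = bytearray()
    zeros = 0                      # run length of consecutive 0x00 bytes
    i = 0
    while i < len(nal):
        b = nal[i]
        if zeros >= 2 and b == 0x03 and i + 1 < len(nal) and nal[i + 1] <= 0x03:
            zeros = 0              # skip the emulation prevention byte
            i += 1
            continue
        zeros = zeros + 1 if b == 0x00 else 0
        out.append(b)
        i += 1
    return bytes(out)

data = bytes([0x00, 0x00, 0x03, 0x01, 0x42, 0x00, 0x00, 0x03, 0x00])
print(remove_epb(data).hex())      # '00000142000000'
```

    A naive implementation like this re-scans and copies the whole buffer, which is exactly the initial delay and extra buffering the paper's designated instruction avoids.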

  5. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    PubMed

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficient vector as a header has been the most important challenge of RNC. Moreover, because the Gauss-Jordan elimination method is employed, considerable computational complexity can be imposed on peers when decoding the encoded blocks and checking linear dependency among the coefficient vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficient matrix generation method that guarantees no linear dependency in the generated coefficient matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficient matrix using a small number of simple arithmetic operations, so peers sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC based on the Gauss-Jordan elimination method, providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay, and initial startup delay. PMID:23940530
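    The overhead and decoding cost that MATIN targets can be seen in a toy random-linear-coding round trip over GF(2): every coded packet must carry an n-bit coefficient header, and the receiver runs Gauss-Jordan elimination over a full generation. This is an illustrative sketch of plain RNC (not MATIN's scheme), with coefficient vectors packed into integer bitmasks:

```python
# Toy random linear network coding over GF(2) with Gauss-Jordan decoding.
# Each coded packet = (coefficient bitmask, XOR of the selected sources);
# the n-bit mask is the per-packet header overhead MATIN eliminates.
import random

def encode(sources, rng=random):
    """XOR a random nonzero subset of the n source packets."""
    n = len(sources)
    coeffs = 0
    while coeffs == 0:
        coeffs = rng.getrandbits(n)
    payload = 0
    for i in range(n):
        if coeffs >> i & 1:
            payload ^= sources[i]
    return coeffs, payload                  # header + coded payload

def decode(packets, n):
    """Recover the n sources from n linearly independent coded packets
    via Gauss-Jordan elimination (a missing pivot means dependency)."""
    rows = [list(p) for p in packets]       # [coeff_bits, payload]
    used = [False] * len(rows)
    pivot_of = {}
    for col in range(n):
        pi = next(i for i, r in enumerate(rows)
                  if not used[i] and r[0] >> col & 1)
        used[pi] = True
        pivot_of[col] = pi
        for i, r in enumerate(rows):
            if i != pi and r[0] >> col & 1:
                r[0] ^= rows[pi][0]
                r[1] ^= rows[pi][1]
    return [rows[pivot_of[c]][1] for c in range(n)]

src = [0xDE, 0xAD, 0xBE]
coded = [(0b001, src[0]),
         (0b011, src[0] ^ src[1]),
         (0b111, src[0] ^ src[1] ^ src[2])]
print(decode(coded, 3) == src)  # True
```

    With random coefficients, dependent packets are possible and must be detected and discarded, which is the dependency-checking cost the abstract describes; MATIN's deterministic coefficient construction sidesteps both that check and the O(n^3) elimination.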

  6. MATIN: A Random Network Coding Based Framework for High Quality Peer-to-Peer Live Video Streaming

    PubMed Central

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficient vector as a header has been the most important challenge of RNC. Moreover, because the Gauss-Jordan elimination method is employed, considerable computational complexity can be imposed on peers when decoding the encoded blocks and checking linear dependency among the coefficient vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficient matrix generation method that guarantees no linear dependency in the generated coefficient matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficient matrix using a small number of simple arithmetic operations, so peers sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC based on the Gauss-Jordan elimination method, providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay, and initial startup delay. PMID:23940530

  7. The Next Generation Advanced Video Guidance Sensor: Flight Heritage and Current Development

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Bryan, Thomas C.

    2009-01-01

    The Next Generation Advanced Video Guidance Sensor (NGAVGS) is the latest in a line of sensors that have flown four times in the last 10 years. The NGAVGS has been under development for the last two years as a long-range proximity operations and docking sensor for use in an Automated Rendezvous and Docking (AR&D) system. The first autonomous rendezvous and docking in the history of the U.S. Space Program was successfully accomplished by Orbital Express, using the Advanced Video Guidance Sensor (AVGS) as the primary docking sensor. That flight proved that the United States now has a mature, flight-proven sensor technology for supporting Crew Exploration Vehicle (CEV) and Commercial Orbital Transportation Services (COTS) Automated Rendezvous and Docking (AR&D). NASA video sensors have worked well in the past: the AVGS used on the Demonstration of Autonomous Rendezvous Technology (DART) mission operated successfully in "spot mode" out to 2 km, and the first-generation rendezvous and docking sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on Space Shuttle flights in 1997 and 1998. This paper presents the flight heritage and results of the sensor technology and some hardware trades for the current sensor, and discusses the needs of future vehicles that may rendezvous and dock with the International Space Station (ISS) and other Constellation vehicles. It also discusses approaches for upgrading the AVGS to address parts obsolescence, and concepts for minimizing the sensor footprint, weight, and power requirements. In addition, the testing of the various NGAVGS development units is discussed, along with the use of the NGAVGS as a proximity operations and docking sensor.

  8. Advances in Uniportal Video-Assisted Thoracoscopic Surgery: Pushing the Envelope.

    PubMed

    Gonzalez-Rivas, Diego; Yang, Yang; Ng, Calvin

    2016-05-01

    Uniportal video-assisted thoracic surgery (VATS) represents a radical change in the approach to lung resection compared with conventional VATS. Because the placement of the surgical instruments and the camera is done through the same incision, uniportal VATS can pose a challenge for both the surgeon and the assistant. Recent industry improvements have made single-port VATS easier to learn. We can expect more developments of subcostal or embryonic natural orifice translumenal endoscopic surgery access, improvements in 3D image systems, single-port robotics, and wireless cameras. The advances in digital technology may facilitate the adoption of the uniportal VATS technique. PMID:27112258

  9. Free viewpoint video generation based on coding information of H.264/AVC

    NASA Astrophysics Data System (ADS)

    Lin, Chi-Kun; Hung, Yu-Chen; Tang, Chia-Tong; Hwang, Jenq-Neng; Yang, Jar-Ferr

    2010-07-01

    Free viewpoint television (FTV) is a new technology that allows viewers to change view angles freely while watching TV programs. FTV requires strong support from a multi-view video codec (MVC), such as H.264/MVC defined by the Joint Video Team (JVT). In this paper, we propose an FTV system that can produce video as perceived from any view angle based on a limited number of viewpoint videos decoded from H.264/MVC bitstreams. In this system, the decoded disparity vectors and motion vectors are diffused to produce smooth disparity fields for virtual view reconstruction. Decoded residue data under motion compensation are used as a matching criterion. The proposed system not only greatly reduces the computational burden of creating FTV, but also improves the synthesized viewing quality thanks to the quarter-pixel precision of H.264.

  10. Using self-similarity compensation for improving inter-layer prediction in scalable 3D holoscopic video coding

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2013-09-01

    Holoscopic imaging, also known as integral imaging, has recently been attracting the attention of the research community as a promising glassless 3D technology, due to its ability to create a more realistic depth illusion than current stereoscopic or multiview solutions. However, in order to gradually introduce this technology into the consumer market and to efficiently deliver 3D holoscopic content to end-users, backward compatibility with legacy displays is essential. Consequently, to enable 3D holoscopic content to be delivered and presented on legacy displays, a display-scalable 3D holoscopic coding approach is required. Hence, this paper presents a display-scalable architecture for 3D holoscopic video coding with a three-layer approach, where each layer represents a different level of display scalability: Layer 0, a single 2D view; Layer 1, 3D stereo or multiview; and Layer 2, the full 3D holoscopic content. In this context, a prediction method is proposed which combines inter-layer prediction, aiming to exploit the existing redundancy between the multiview and 3D holoscopic layers, with self-similarity compensated prediction (previously proposed by the authors for non-scalable 3D holoscopic video coding), aiming to exploit the spatial redundancy inherent to the 3D holoscopic enhancement layer. Experimental results show that the proposed combined prediction can significantly improve the rate-distortion performance of scalable 3D holoscopic video coding with respect to the authors' previously proposed solutions, where only inter-layer or only self-similarity prediction is used.

  11. An overview of new video coding tools under consideration for VP10: the successor to VP9

    NASA Astrophysics Data System (ADS)

    Mukherjee, Debargha; Su, Hui; Bankoski, James; Converse, Alex; Han, Jingning; Liu, Zoe; Xu, Yaowu

    2015-09-01

    Google started an open-source effort, the WebM Project, in 2010 to develop royalty-free video codecs for the web. The current-generation codec developed in the WebM Project, VP9, was finalized in mid-2013 and is currently served extensively by YouTube, resulting in billions of views per day. Even though adoption of VP9 outside Google is still in its infancy, the WebM Project has already embarked on an ambitious effort to develop a next-edition codec, VP10, that achieves at least a generational bitrate reduction over VP9. Although the project is still in its early stages, a set of new experimental coding tools has already been added to baseline VP9, achieving modest coding gains over a large test set. This paper provides a technical overview of these coding tools.

  12. Video Traffic Characteristics of Modern Encoding Standards: H.264/AVC with SVC and MVC Extensions and H.265/HEVC

    PubMed Central

    2014-01-01

    Video encoding for multimedia services over communication networks has significantly advanced in recent years with the development of the highly efficient and flexible H.264/AVC video coding standard and its SVC extension. The emerging H.265/HEVC video coding standard as well as 3D video coding further advance video coding for multimedia communications. This paper first gives an overview of these new video coding standards and then examines their implications for multimedia communications by studying the traffic characteristics of long videos encoded with the new coding standards. We review video coding advances from MPEG-2 and MPEG-4 Part 2 to H.264/AVC and its SVC and MVC extensions as well as H.265/HEVC. For single-layer (nonscalable) video, we compare H.265/HEVC and H.264/AVC in terms of video traffic and statistical multiplexing characteristics. Our study is the first to examine the H.265/HEVC traffic variability for long videos. We also illustrate the video traffic characteristics and statistical multiplexing of scalable video encoded with the SVC extension of H.264/AVC as well as 3D video encoded with the MVC extension of H.264/AVC. PMID:24701145
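    Frame-size variability of the kind studied in such traffic analyses is commonly summarized by the coefficient of variation (CoV) of encoded frame sizes, which drives statistical-multiplexing behavior. A sketch with hypothetical frame sizes (illustrative values, not the paper's traces):

```python
# Coefficient of variation (CoV = std/mean) of encoded frame sizes, a
# common first-order video-traffic metric: higher CoV means burstier
# traffic and larger potential statistical multiplexing gains.
from statistics import mean, pstdev

def cov(frame_sizes):
    return pstdev(frame_sizes) / mean(frame_sizes)

# Hypothetical frame sizes in bytes for a short GoP (I, P, B, B, P, B, B):
# the large I frame dominates the variability.
sizes = [42000, 9000, 3000, 3200, 8800, 2900, 3100]
print(round(cov(sizes), 3))
```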

  13. Spacelabs Innovative Project Award winner--2008. Megacode simulation workshop and education video--a megatonne of care and Code blue: live and interactive.

    PubMed

    Loucks, Lynda; Leskowski, Jessica; Fallis, Wendy

    2010-01-01

    Skill acquisition and knowledge translation of best practices can be successfully facilitated using simulation methods. The 2008 Spacelabs Innovative Project Award was awarded for a unique training workshop that used simulation in the area of cardiac life support and resuscitation to train multiple health care personnel in basic and advanced skills. The megacode simulation workshop and education video was an educational event held in 2007 in Winnipeg, MB, for close to 60 participants and trainers from multiple disciplines across the provinces of Manitoba and Northwestern Ontario. The event included lectures, live simulation of a megacode, and hands-on training in the latest techniques in resuscitation. The goals of this project were to promote efficiency and better outcomes related to resuscitation measures, to foster teamwork, to emphasize the importance of each team member's role, and to improve knowledge and skills in resuscitation. The workshop was filmed to produce a training DVD that could be used for future knowledge enhancement and introductory training of health care personnel. Substantial positive feedback was received and evaluations indicated that participants reported improvement and expansion of their knowledge of advanced cardiac life support. Given their regular participation in cardiac arrest codes and the importance of staying up-to-date on best practice, the workshop was particularly useful to health care staff and nurses working in critical care areas. In addition, those who participate less frequently in cardiac resuscitation will benefit from the educational video for ongoing competency. Through accelerating knowledge translation from the literature to the bedside, it is hoped that this event contributed to improved patient care and outcomes with respect to advanced cardiac life support. PMID:20836420

  14. VLSI Neural Networks Help To Compress Video Signals

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Sheu, Bing J.

    1996-01-01

    Advanced analog/digital electronic system for compression of video signals incorporates artificial neural networks. Performs motion-estimation and image-data-compression processing. Effectively eliminates temporal and spatial redundancies of sequences of video images; processes video image data, retaining only nonredundant parts to be transmitted, then transmits resulting data stream in form of efficient code. Reduces bandwidth and storage requirements for transmission and recording of video signal.

  15. Recent advances in the COMMIX and BODYFIT codes

    SciTech Connect

    Sha, W.T.; Chen, B.C.J.; Domanus, H.M.; Wood, P.M.

    1983-01-01

    Two general-purpose computer programs for thermal-hydraulic analysis have been developed. One is the COMMIX (COMponent MIXing) code; the other is the BODYFIT (BOunDary FITted Coordinate Transformation) code. Solution procedures based on both elliptic and parabolic systems of partial differential equations are provided in these two codes. The COMMIX code is designed to provide global analysis of the thermal-hydraulic behavior of single- or multi-component engineering problems. The BODYFIT code is capable of treating irregular boundaries and gives more detailed local information on a subcomponent or component. The two codes are complementary and represent the state of the art in thermal-hydraulic analysis. Effort will continue to make further improvements and add capabilities to these codes.

  16. The Advanced Video Guidance Sensor: Orbital Express and the Next Generation

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Heaton, Andrew F.; Pinson, Robin M.; Carrington, Connie L.; Lee, James E.; Bryan, Thomas C.; Robertson, Bryan A.; Spencer, Susan H.; Johnson, Jimmie E.

    2008-01-01

    The Orbital Express (OE) mission performed the first autonomous rendezvous and docking in the history of the United States on May 5-6, 2007 with the Advanced Video Guidance Sensor (AVGS) acting as one of the primary docking sensors. Since that event, the OE spacecraft performed four more rendezvous and docking maneuvers, each time using the AVGS as one of the docking sensors. The Marshall Space Flight Center's (MSFC's) AVGS is a nearfield proximity operations sensor that was integrated into the Autonomous Rendezvous and Capture Sensor System (ARCSS) on OE. The ARCSS provided the relative state knowledge to allow the OE spacecraft to rendezvous and dock. The AVGS is a mature sensor technology designed to support Automated Rendezvous and Docking (AR&D) operations. It is a video-based laser-illuminated sensor that can determine the relative position and attitude between itself and its target. Due to parts obsolescence, the AVGS that was flown on OE can no longer be manufactured. MSFC has been working on the next generation of AVGS for application to future Constellation missions. This paper provides an overview of the performance of the AVGS on Orbital Express and discusses the work on the Next Generation AVGS (NGAVGS).

  17. Orbital Express Advanced Video Guidance Sensor: Ground Testing, Flight Results and Comparisons

    NASA Technical Reports Server (NTRS)

    Pinson, Robin M.; Howard, Richard T.; Heaton, Andrew F.

    2008-01-01

    Orbital Express (OE) was a successful mission demonstrating automated rendezvous and docking. The 2007 mission consisted of two spacecraft, the Autonomous Space Transport Robotic Operations (ASTRO) and the Next Generation Serviceable Satellite (NEXTSat) that were designed to work together and test a variety of service operations in orbit. The Advanced Video Guidance Sensor, AVGS, was included as one of the primary proximity navigation sensors on board the ASTRO. The AVGS was one of four sensors that provided relative position and attitude between the two vehicles. Marshall Space Flight Center was responsible for the AVGS software and testing (especially the extensive ground testing), flight operations support, and analyzing the flight data. This paper briefly describes the historical mission, the data taken on-orbit, the ground testing that occurred, and finally comparisons between flight data and ground test data for two different flight regimes.

  18. Structured Set Intra Prediction With Discriminative Learning in a Max-Margin Markov Network for High Efficiency Video Coding

    PubMed Central

    Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen

    2014-01-01

    This paper proposes a novel model for intra coding in High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to HEVC intra coding. PMID:25505829

  19. An electrocorticographic BCI using code-based VEP for control in video applications: a single-subject study

    PubMed Central

    Kapeller, Christoph; Kamada, Kyousuke; Ogawa, Hiroshi; Prueckl, Robert; Scharinger, Josef; Guger, Christoph

    2014-01-01

    A brain-computer-interface (BCI) allows the user to control a device or software with brain activity. Many BCIs rely on visual stimuli with constant stimulation cycles that elicit steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG). This EEG response can be generated with an LED or a computer screen flashing at a constant frequency, and similar EEG activity can be elicited with pseudo-random stimulation sequences on a screen (code-based BCI). Using electrocorticography (ECoG) instead of EEG promises higher spatial and temporal resolution and leads to more dominant evoked potentials due to visual stimulation. This work is focused on BCIs based on visual evoked potentials (VEP) and their capability as a continuous control interface for augmentation of video applications. One 35-year-old female subject with implanted subdural grids participated in the study. The task was to select one out of four visual targets, while each was flickering with a code sequence. After a calibration run including 200 code sequences, a linear classifier was used during an evaluation run to identify the selected visual target based on the generated code-based VEPs over 20 trials. Multiple ECoG buffer lengths were tested, and the subject reached a mean online classification accuracy of 99.21% for a window length of 3.15 s. Finally, the subject performed an unsupervised free run in combination with visual feedback of the current selection. Additionally, an algorithm was implemented to suppress false positive selections, allowing the subject to start and stop the BCI at any time. The code-based BCI system attained very high online accuracy, which makes this approach very promising for control applications where a continuous control signal is needed. PMID:25147509
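    The study uses a linear classifier on code-based VEPs; a closely related and commonly used baseline is template correlation, where the selected target is the one whose calibration-run template best correlates with the current trial. A toy sketch of that idea (hypothetical templates and function names, not the study's data or classifier):

```python
# Template-correlation target selection for a code-based VEP BCI:
# pick the target whose per-target calibration template has the highest
# Pearson correlation with the recorded trial window.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def classify(trial, templates):
    """Return the index of the best-matching target template."""
    scores = [pearson(trial, t) for t in templates]
    return max(range(len(templates)), key=lambda i: scores[i])

# Four targets, each with a distinct (hypothetical) averaged template.
templates = [[0, 1, 0, 1, 1, 0], [1, 1, 0, 0, 1, 0],
             [0, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1]]
trial = [0.1, 0.9, 0.2, 0.8, 1.1, 0.0]   # noisy copy of target 0
print(classify(trial, templates))        # 0
```

    Thresholding the winning correlation, rather than always accepting the maximum, is one simple way to suppress false positive selections as described above.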

  20. Development of a Coding Instrument to Assess the Quality and Content of Anti-Tobacco Video Games.

    PubMed

    Alber, Julia M; Watson, Anna M; Barnett, Tracey E; Mercado, Rebeccah; Bernhardt, Jay M

    2015-07-01

    Previous research has shown the use of electronic video games as an effective method for increasing content knowledge about the risks of drugs and alcohol use for adolescents. Although best practice suggests that theory, health communication strategies, and game appeal are important characteristics for developing games, no instruments are currently available to examine the quality and content of tobacco prevention and cessation electronic games. This study presents the systematic development of a coding instrument to measure the quality, use of theory, and health communication strategies of tobacco cessation and prevention electronic games. Using previous research and expert review, a content analysis coding instrument measuring 67 characteristics was developed with three overarching categories: type and quality of games, theory and approach, and type and format of messages. Two trained coders applied the instrument to 88 games on four platforms (personal computer, Nintendo DS, iPhone, and Android phone) to field test the instrument. Cohen's kappa for each item ranged from 0.66 to 1.00, with an average kappa value of 0.97. Future research can adapt this coding instrument to games addressing other health issues. In addition, the instrument questions can serve as a useful guide for evidence-based game development. PMID:26167842
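    Cohen's kappa, used above to report inter-coder reliability, corrects observed agreement for the agreement expected by chance from the coders' marginal rating frequencies: kappa = (p_o - p_e) / (1 - p_e). A small worked example (toy ratings and function name, not the study's data):

```python
# Cohen's kappa for two coders rating the same items.
from collections import Counter

def cohens_kappa(r1, r2):
    n = len(r1)
    # p_o: proportion of items on which the coders agree
    po = sum(a == b for a, b in zip(r1, r2)) / n
    # p_e: chance agreement from each coder's marginal frequencies
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / n ** 2
    return (po - pe) / (1 - pe)

coder1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
coder2 = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
print(round(cohens_kappa(coder1, coder2), 3))  # 0.783
```

    Here the coders agree on 9 of 10 items (p_o = 0.9), but because both rate "1" frequently, chance agreement is already 0.54, so kappa lands well below raw agreement.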

  1. Development of a Coding Instrument to Assess the Quality and Content of Anti-Tobacco Video Games

    PubMed Central

    Alber, Julia M.; Watson, Anna M.; Barnett, Tracey E.; Mercado, Rebeccah

    2015-01-01

    Previous research has shown the use of electronic video games as an effective method for increasing content knowledge about the risks of drugs and alcohol use for adolescents. Although best practice suggests that theory, health communication strategies, and game appeal are important characteristics for developing games, no instruments are currently available to examine the quality and content of tobacco prevention and cessation electronic games. This study presents the systematic development of a coding instrument to measure the quality, use of theory, and health communication strategies of tobacco cessation and prevention electronic games. Using previous research and expert review, a content analysis coding instrument measuring 67 characteristics was developed with three overarching categories: type and quality of games, theory and approach, and type and format of messages. Two trained coders applied the instrument to 88 games on four platforms (personal computer, Nintendo DS, iPhone, and Android phone) to field test the instrument. Cohen's kappa for each item ranged from 0.66 to 1.00, with an average kappa value of 0.97. Future research can adapt this coding instrument to games addressing other health issues. In addition, the instrument questions can serve as a useful guide for evidence-based game development. PMID:26167842

  2. NASA. Lewis Research Center Advanced Modulation and Coding Project: Introduction and overview

    NASA Astrophysics Data System (ADS)

    Budinger, James M.

    1992-02-01

    The Advanced Modulation and Coding Project at LeRC is sponsored by the Office of Space Science and Applications, Communications Division, Code EC, at NASA Headquarters and conducted by the Digital Systems Technology Branch of the Space Electronics Division. Advanced Modulation and Coding is one of three focused technology development projects within the branch's overall Processing and Switching Program. The program consists of industry contracts for developing proof-of-concept (POC) and demonstration model hardware, university grants for analyzing advanced techniques, and in-house integration and testing of performance verification and systems evaluation. The Advanced Modulation and Coding Project is broken into five elements: (1) bandwidth- and power-efficient modems; (2) high-speed codecs; (3) digital modems; (4) multichannel demodulators; and (5) very high-data-rate modems. At least one contract and one grant were awarded for each element.

  3. NASA. Lewis Research Center Advanced Modulation and Coding Project: Introduction and overview

    NASA Technical Reports Server (NTRS)

    Budinger, James M.

    1992-01-01

    The Advanced Modulation and Coding Project at LeRC is sponsored by the Office of Space Science and Applications, Communications Division, Code EC, at NASA Headquarters and conducted by the Digital Systems Technology Branch of the Space Electronics Division. Advanced Modulation and Coding is one of three focused technology development projects within the branch's overall Processing and Switching Program. The program consists of industry contracts for developing proof-of-concept (POC) and demonstration model hardware, university grants for analyzing advanced techniques, and in-house integration and testing of performance verification and systems evaluation. The Advanced Modulation and Coding Project is broken into five elements: (1) bandwidth- and power-efficient modems; (2) high-speed codecs; (3) digital modems; (4) multichannel demodulators; and (5) very high-data-rate modems. At least one contract and one grant were awarded for each element.

  4. Distributed video coding for arrays of remote sensing nodes : final report.

    SciTech Connect

    Mecimore, Ivan; Creusere, Chuck D.; Merchant, Bion John

    2010-06-01

    This document is the final report for the Sandia National Laboratory funded Student Fellowship position at New Mexico State University (NMSU) from 2008 to 2010. Ivan Mecimore, the PhD student in Electrical Engineering at NMSU, was conducting research into image and video processing techniques to identify features and correlations within images without requiring the decoding of the data compression. Such an analysis technique would operate on the encoded bit stream, potentially saving considerable processing time when operating on a platform that has limited computational resources. Unfortunately, the student has elected in mid-year not to continue with his research or the fellowship position. The student is unavailable to provide any details of his research for inclusion in this final report. As such, this final report serves solely to document the information provided in the previous end of year summary.

  5. Image and video compression/decompression based on human visual perception system and transform coding

    SciTech Connect

    Fu, Chi Yung., Petrich, L.I., Lee, M.

    1997-02-01

    The quantity of information has been growing exponentially, and the form and mix of information have been shifting into the image and video areas. However, neither the storage media nor the available bandwidth can accommodate the vastly expanding requirements for image information. A vital, enabling technology here is compression/decompression. Our compression work is based on a combination of feature-based algorithms inspired by the human visual-perception system (HVS), transform-based algorithms (such as our enhanced discrete cosine transform and wavelet transforms), vector quantization, and neural networks. All our work was done on desktop workstations using the C++ programming language and commercially available software. During FY 1996, we explored and implemented enhanced feature-based algorithms, vector quantization, and neural-network-based compression technologies. For example, we improved the feature compression for our feature-based algorithms by a factor of two to ten, a substantial improvement. We also found some promising results when using neural networks and applying them to some video sequences. In addition, we investigated objective measures to characterize compression results, because traditional means such as the peak signal-to-noise ratio (PSNR) are not adequate to fully characterize the results, since such measures do not take into account the details of human visual perception. We have successfully used our one-year LDRD funding as seed money to explore new research ideas and concepts; the results of this work have led us to obtain external funding from the DoD. At this point, we are seeking matching funds from DOE to match the DoD funding so that we can bring such technologies to fruition. 9 figs., 2 tabs.

  6. Rate-Adaptive Video Compression (RAVC) Universal Video Stick (UVS)

    NASA Astrophysics Data System (ADS)

    Hench, David L.

    2009-05-01

    The H.264 video compression standard, aka MPEG-4 Part 10, aka Advanced Video Coding (AVC), allows new flexibility in the use of video in the battlefield. This standard necessitates encoder chips to effectively utilize the increased capabilities. Such chips are designed to cover the full range of the standard, with designers of individual products given the capability of selecting the parameters that differentiate a broadcast system from a video conferencing system. The SmartCapture commercial product and the Universal Video Stick (UVS) military versions are about the size of a thumb drive with analog video input and USB (Universal Serial Bus) output, and allow the user to select the imaging parameters on the fly, thereby adjusting video bandwidth (and video quality) along four dimensions of quality without stopping video transmission. The four dimensions are: 1) spatial, changing from 720 x 480 pixels to 320 x 360 pixels to 160 x 180 pixels; 2) temporal, changing from 30 frames/sec to 5 frames/sec; 3) transform quality, with a 5-to-1 range; and 4) Group of Pictures (GOP) size, which affects noise immunity. The host processor simply wraps the H.264 network abstraction layer packets into the appropriate network packets. We also discuss the recently adopted scalable amendment to H.264, which will allow RAVC at any point in the communication chain by discarding preselected packets.

  7. Coding Local and Global Binary Visual Features Extracted From Video Sequences.

    PubMed

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the bag-of-visual-words model. Several applications, including visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget while attaining a target level of efficiency. In this paper, we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can conveniently be adopted to support the analyze-then-compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs the visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed, and then sent to a central unit for further processing, according to the compress-then-analyze (CTA) paradigm. In this paper, we experimentally compare ATC and CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: 1) homography estimation and 2) content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with CTA, especially in bandwidth-limited scenarios. PMID:26080384
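
    The inter-frame coding idea above, which exploits temporal redundancy between consecutive binary descriptors, can be sketched as an XOR residual. This is a minimal illustration of the principle only, not the paper's entropy-coded scheme; the function names are assumptions:

```python
def inter_code(curr, prev):
    """Inter-frame coding of a binary descriptor: transmit the XOR with the
    previous frame's descriptor, which is mostly zeros when consecutive
    frames are similar and therefore cheap to entropy-code."""
    return [a ^ b for a, b in zip(curr, prev)]

def inter_decode(residual, prev):
    """Decoding is the same XOR applied against the previous descriptor."""
    return [a ^ b for a, b in zip(residual, prev)]
```

    In an actual codec the sparse residual would then be entropy-coded; the roundtrip is lossless.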

  8. Constant-quality constrained-rate allocation for FGS video coded bitstreams

    NASA Astrophysics Data System (ADS)

    Zhang, Xi Min; Vetro, Anthony; Shi, Yun-Qing; Sun, Huifang

    2002-01-01

    This paper proposes an optimal rate allocation scheme for Fine-Granular Scalability (FGS) coded bitstreams that can achieve constant-quality reconstruction of frames under a dynamic rate budget constraint. In doing so, we also aim to minimize the overall distortion. To achieve this, we propose a novel R-D labeling scheme to characterize the R-D relationship of the source coding process. Specifically, sets of R-D points are extracted during the encoding process, and linear interpolation is used to estimate the actual R-D curve of the enhancement layer signal. The extracted R-D information is then used by an enhancement layer transcoder to determine the bits that should be allocated per frame. A sliding-window-based rate allocation method is proposed to realize constant quality among frames. This scheme is first considered for a single FGS coded source, then extended to operate on multiple sources. With the proposed scheme, the rate allocation can be performed in a single pass; hence, the complexity is quite low. Experimental results confirm the effectiveness of the proposed scheme under static and dynamic bandwidth conditions.
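
    The interpolation-and-allocation idea can be sketched in a few lines: interpolate each frame's R-D curve from its extracted points, then search for one shared distortion target that fills the window's bit budget. This is a simplified illustration only, not the authors' transcoder; the function names, the bisection search, and the toy piecewise-linear R-D curves are assumptions:

```python
def rate_for_distortion(rd_points, d_target):
    """Invert a piecewise-linear R-D curve: return the rate needed to reach
    distortion d_target. rd_points are (rate, distortion) pairs sorted by
    increasing rate (so distortion is decreasing)."""
    if d_target >= rd_points[0][1]:
        return rd_points[0][0]
    if d_target <= rd_points[-1][1]:
        return rd_points[-1][0]
    for (r0, d0), (r1, d1) in zip(rd_points, rd_points[1:]):
        if d0 >= d_target >= d1:
            # Linear interpolation between two extracted R-D points.
            return r0 + (r1 - r0) * (d0 - d_target) / (d0 - d1)

def constant_quality_allocation(frames, budget, iters=60):
    """Pick one distortion target shared by all frames in the window and
    bisect on it until the summed per-frame rates fill the bit budget."""
    d_min = min(p[-1][1] for p in frames)   # best reachable quality
    d_max = max(p[0][1] for p in frames)    # worst quality
    for _ in range(iters):
        d_mid = 0.5 * (d_min + d_max)
        if sum(rate_for_distortion(p, d_mid) for p in frames) > budget:
            d_min = d_mid                   # over budget: allow more distortion
        else:
            d_max = d_mid
    return [rate_for_distortion(p, d_max) for p in frames]
```

    Because the shared distortion target equalizes quality across frames, frames with harder content automatically receive more bits.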

  9. Optimizing color fidelity for display devices using contour phase predictive coding for text, graphics, and video content

    NASA Astrophysics Data System (ADS)

    Lebowsky, Fritz

    2013-02-01

    High-end monitors and TVs based on LCD technology continue to increase their native display resolution to 4k2k and beyond. Consequently, uncompressed pixel data transmission becomes costly when transmitting over cable or wireless communication channels. For motion video content, spatial preprocessing from YCbCr 444 to YCbCr 420 is widely accepted. However, due to spatial low-pass filtering in the horizontal and vertical directions, the quality and readability of small text and graphics content are heavily compromised when color contrast is high in the chrominance channels. On the other hand, straightforward YCbCr 444 compression based on mathematical error coding schemes quite often lacks optimal adaptation to visually significant image content. Therefore, we present the idea of detecting synthetic small text fonts and fine graphics and applying contour phase predictive coding for improved text and graphics rendering at the decoder side. Using a predictive parametric (text) contour model and transmitting correlated phase information in vector format across all three color channels, combined with foreground/background color vectors of a local color map, promises to overcome weaknesses in compression schemes that process luminance and chrominance channels separately. The residual error of the predictive model is minimized more easily since the decoder is an integral part of the encoder. A comparative analysis based on some competitive solutions highlights the effectiveness of our approach, discusses current limitations with regard to high-quality color rendering, and identifies remaining visual artifacts.

  10. Coded aperture Fast Neutron Analysis: Latest design advances

    NASA Astrophysics Data System (ADS)

    Accorsi, Roberto; Lanza, Richard C.

    2001-07-01

    Past studies have shown that materials of concern, such as explosives or narcotics, can be identified in bulk from their atomic composition. Fast Neutron Analysis (FNA) is a nuclear method capable of providing this information even when considerable penetration is needed. Unfortunately, the cross sections of the nuclear phenomena and the solid angles involved are typically small, so that it is difficult to obtain high signal-to-noise ratios in short inspection times. CAFNA aims at combining the compound specificity of FNA with the potentially high SNR of coded apertures, an imaging method successfully used in far-field 2D applications. The transition to a near-field, 3D, high-energy problem prevents a straightforward application of coded apertures and demands a thorough optimization of the system. In this paper, the considerations involved in the design of a practical CAFNA system for contraband inspection, its conclusions, and an estimate of the performance of such a system are presented as the evolution of the ideas presented in previous expositions of the CAFNA concept.

  11. An Advanced simulation Code for Modeling Inductive Output Tubes

    SciTech Connect

    Thuc Bui; R. Lawrence Ives

    2012-04-27

    During the Phase I program, CCR completed several major building blocks for a 3D large signal, inductive output tube (IOT) code using modern computer language and programming techniques. These included a 3D, Helmholtz, time-harmonic, field solver with a fully functional graphical user interface (GUI), automeshing and adaptivity. Other building blocks included the improved electrostatic Poisson solver with temporal boundary conditions to provide temporal fields for the time-stepping particle pusher as well as the self electric field caused by time-varying space charge. The magnetostatic field solver was also updated to solve for the self magnetic field caused by time changing current density in the output cavity gap. The goal function to optimize an IOT cavity was also formulated, and the optimization methodologies were investigated.

  12. KAN NA! Authentic Chinese Video. Lessons for Intermediate to Advanced Self-Study. [CD-ROM].

    ERIC Educational Resources Information Center

    Fleming, Stephen; Hiple, David; Ning, Cynthia

    This compact disc (CD) offers 20 lessons based on selected Chinese language video clips. Filmed on location in Beijing, these naturalistic video clips consist mainly of unrehearsed interviews with ordinary people. The learner is led through a series of activities aiding comprehension and learning that sharpen communication strategies and…

  13. Integrating multiple HD video services over tiled display for advanced multi-party collaboration

    NASA Astrophysics Data System (ADS)

    Han, Sangwoo; Kim, Jaeyoun; Choi, Kiho; Kim, JongWon

    2006-10-01

    Multi-party collaborative environments based on AG (Access Grid) are extensively utilized for distance learning, e-science, and other distributed global collaboration events. In such environments, A/V media services play an important role in providing QoE (quality of experience) to participants in collaboration sessions. In this paper, in order to support a high-quality user experience with respect to video services, we design an integration architecture that combines high-quality video services with a high-resolution tiled display service. In detail, the proposed architecture incorporates video services for DV (digital video) and HDV (high-definition digital video) streaming with a display service to provide methods for decomposable decoding/display on a tiled display system. By implementing the proposed architecture on top of AG, we verify that high-quality collaboration among a couple of collaboration sites can be realized over a multicast-enabled network testbed with improved media quality experience.

  14. ASPECT: An advanced specified-profile evaluation code for tokamaks

    SciTech Connect

    Stotler, D.P.; Reiersen, W.T.; Bateman, G.

    1993-03-01

    A specified-profile, global analysis code has been developed to evaluate the performance of fusion reactor designs. Both steady-state and time-dependent calculations are carried out; the results of the former can be used in defining the parameters of the latter, if desired. In the steady-state analysis, the performance is computed at a density and temperature chosen to be consistent with input limits (e.g., density and beta) of several varieties. The calculation can be made at either the intersection of the two limits or at the point of optimum performance as the density and temperature are varied along the limiting boundaries. Two measures of performance are available for this purpose: the ignition margin or the confinement level required to achieve a prescribed ignition margin. The time-dependent calculation can be configured to yield either the evolution of plasma energy as a function of time or, via an iteration scheme, the amount of auxiliary power required to achieve a desired final plasma energy.

  15. THEHYCO-3DT: Thermal hydrodynamic code for the 3 dimensional transient calculation of advanced LMFBR core

    SciTech Connect

    Vitruk, S.G.; Korsun, A.S.; Ushakov, P.A.

    1995-09-01

    The multilevel mathematical model of neutron thermal hydrodynamic processes in a passive safety core without assembly duct walls, and the corresponding computer code SKETCH, consisting of the thermal hydrodynamic module THEHYCO-3DT and a neutron module, are described. A new, effective discretization technique for the energy, momentum, and mass conservation equations is applied in hexagonal-z geometry. The model's adequacy and applicability are presented. The results of the calculations show that the model and the computer code could be used in the conceptual design of advanced reactors.

  16. Micromechanics Based Design/Analysis Codes for Advanced Composites

    NASA Technical Reports Server (NTRS)

    Mital, Subodh K.; Murthy, Pappu L. N.; Gyekenyesi, John P.

    2002-01-01

    Advanced high temperature Ceramic Matrix Composites (CMC) hold an enormous potential for use in aero and space related applications specifically for propulsion system components. Consequently, this has led to a multitude of research activities pertaining to fabrication, testing and modeling of these materials. The efforts directed at the development of ceramic matrix composites have focused primarily on improving the properties of the constituents as individual phases. It has, however, become increasingly clear that for CMC to be successfully employed in high temperature applications, research and development efforts should also focus on optimizing the synergistic performance of the constituent phases within the as-produced microstructure of the complex shaped CMC part. Despite their attractive features, the introduction of these materials in a wide spectrum of applications has been excruciatingly slow. The reasons are the high costs associated with the manufacturing and a complete experimental testing and characterization of these materials. Often designers/analysts do not have a consistent set of necessary properties and design allowables to be able to confidently design and analyze structural components made from these composites. Furthermore, the anisotropy of these materials accentuates the burden both on the test engineers and the designers by requiring a vastly increased amount of data/characterization compared to conventional materials.

  17. The Modeling of Advanced BWR Fuel Designs with the NRC Fuel Depletion Codes PARCS/PATHS

    DOE PAGESBeta

    Ward, Andrew; Downar, Thomas J.; Xu, Y.; March-Leuba, Jose A; Thurston, Carl; Hudson, Nathanael H.; Ireland, A.; Wysocki, A.

    2015-04-22

    The PATHS (PARCS Advanced Thermal Hydraulic Solver) code was developed at the University of Michigan in support of U.S. Nuclear Regulatory Commission research to solve the steady-state, two-phase, thermal-hydraulic equations for a boiling water reactor (BWR) and to provide thermal-hydraulic feedback for BWR depletion calculations with the neutronics code PARCS (Purdue Advanced Reactor Core Simulator). The simplified solution methodology, including a three-equation drift flux formulation and an optimized iteration scheme, yields very fast run times in comparison to conventional thermal-hydraulic systems codes used in the industry, while still retaining sufficient accuracy for applications such as BWR depletion calculations. Lastly, the capability to model advanced BWR fuel designs with part-length fuel rods and heterogeneous axial channel flow geometry has been implemented in PATHS, and the code has been validated against previously benchmarked advanced core simulators as well as BWR plant and experimental data. We describe the modifications to the codes and the results of the validation in this paper.

  18. The Modeling of Advanced BWR Fuel Designs with the NRC Fuel Depletion Codes PARCS/PATHS

    SciTech Connect

    Ward, Andrew; Downar, Thomas J.; Xu, Y.; March-Leuba, Jose A; Thurston, Carl; Hudson, Nathanael H.; Ireland, A.; Wysocki, A.

    2015-04-22

    The PATHS (PARCS Advanced Thermal Hydraulic Solver) code was developed at the University of Michigan in support of U.S. Nuclear Regulatory Commission research to solve the steady-state, two-phase, thermal-hydraulic equations for a boiling water reactor (BWR) and to provide thermal-hydraulic feedback for BWR depletion calculations with the neutronics code PARCS (Purdue Advanced Reactor Core Simulator). The simplified solution methodology, including a three-equation drift flux formulation and an optimized iteration scheme, yields very fast run times in comparison to conventional thermal-hydraulic systems codes used in the industry, while still retaining sufficient accuracy for applications such as BWR depletion calculations. Lastly, the capability to model advanced BWR fuel designs with part-length fuel rods and heterogeneous axial channel flow geometry has been implemented in PATHS, and the code has been validated against previously benchmarked advanced core simulators as well as BWR plant and experimental data. We describe the modifications to the codes and the results of the validation in this paper.

  19. Compiled reports on the applicability of selected codes and standards to advanced reactors

    SciTech Connect

    Benjamin, E.L.; Hoopingarner, K.R.; Markowski, F.J.; Mitts, T.M.; Nickolaus, J.R.; Vo, T.V.

    1994-08-01

    The following papers were prepared for the Office of Nuclear Regulatory Research of the U.S. Nuclear Regulatory Commission under contract DE-AC06-76RLO-1830 NRC FIN L2207. This project, Applicability of Codes and Standards to Advanced Reactors, reviewed selected mechanical and electrical codes and standards to determine their applicability to the construction, qualification, and testing of advanced reactors and to develop recommendations as to where it might be useful and practical to revise them to suit the (design certification) needs of the NRC.

  20. Blind Digital Watermarking of Low Bit-Rate Advanced H.264/AVC Compressed Video

    NASA Astrophysics Data System (ADS)

    Xu, Dawen; Wang, Rangding; Wang, Jicheng

    H.264/AVC is becoming a popular video codec for its better compression ratio, lower distortion, and applicability to portable electronic devices. Thus, issues of copyright protection and authentication that are appropriate for this standard become very important. In this paper, a blind video watermarking algorithm for H.264/AVC is proposed. The watermark information is embedded directly into the H.264/AVC video at the encoder by slightly modifying the quantized DC coefficients in the luminance residual blocks. Embedding the watermark in the residuals avoids decompressing the video and decreases the complexity of the watermarking algorithm. To reduce the visual quality degradation caused by modifying the DC coefficients, a block selection mechanism is introduced to control the modification strength. Experimental results reveal that the proposed scheme achieves sufficient robustness while preserving perceptual quality.
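
    One common way to realize this kind of residual-domain embedding is to force the parity of each selected quantized DC coefficient to match a watermark bit, so that extraction is blind. The sketch below illustrates that idea only; the parity rule and function names are assumptions, not the paper's exact algorithm, which additionally applies block selection to control strength:

```python
def embed_bits(dc_coeffs, bits):
    """Embed one watermark bit per quantized DC coefficient by forcing the
    parity of its magnitude to match the bit (at most a +/-1 change)."""
    marked = []
    for c, b in zip(dc_coeffs, bits):
        mag, sign = abs(c), -1 if c < 0 else 1
        if (mag & 1) != b:
            mag += 1 if mag == 0 else -1  # smallest-magnitude adjustment
        marked.append(sign * mag)
    return marked

def extract_bits(dc_coeffs, n):
    """Blind extraction: the detector only reads parities, so it needs
    neither the original video nor the original coefficients."""
    return [abs(c) & 1 for c in dc_coeffs[:n]]
```

    Because each coefficient moves by at most one quantization step, the perceptual impact stays small, which mirrors the paper's motivation for bounding the modification strength.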

  1. Adaptation of the Advanced Spray Combustion Code to Cavitating Flow Problems

    NASA Technical Reports Server (NTRS)

    Liang, Pak-Yan

    1993-01-01

    A very important consideration in turbopump design is the prediction and prevention of cavitation. Thus far conventional CFD codes have not been generally applicable to the treatment of cavitating flows. Taking advantage of its two-phase capability, the Advanced Spray Combustion Code is being modified to handle flows with transient as well as steady-state cavitation bubbles. The volume-of-fluid approach incorporated into the code is extended and augmented with a liquid phase energy equation and a simple evaporation model. The strategy adopted also successfully deals with the cavity closure issue. Simple test cases will be presented and remaining technical challenges will be discussed.

  2. Observations Regarding Use of Advanced CFD Analysis, Sensitivity Analysis, and Design Codes in MDO

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Hou, Gene J. W.; Taylor, Arthur C., III

    1996-01-01

    Observations regarding the use of advanced computational fluid dynamics (CFD) analysis, sensitivity analysis (SA), and design codes in gradient-based multidisciplinary design optimization (MDO) reflect our perception of the interactions required of CFD and our experience in recent aerodynamic design optimization studies using CFD. Sample results from these latter studies are summarized for conventional optimization (analysis - SA codes) and simultaneous analysis and design optimization (design code) using both Euler and Navier-Stokes flow approximations. The amount of computational resources required for aerodynamic design using CFD via analysis - SA codes is greater than that required for design codes. Thus, an MDO formulation that utilizes the more efficient design codes where possible is desired. However, in the aerovehicle MDO problem, the various disciplines that are involved have different design points in the flight envelope; therefore, CFD analysis - SA codes are required at the aerodynamic 'off design' points. The suggested MDO formulation is a hybrid multilevel optimization procedure that consists of both multipoint CFD analysis - SA codes and multipoint CFD design codes that perform suboptimizations.

  3. Integrated homeland security system with passive thermal imaging and advanced video analytics

    NASA Astrophysics Data System (ADS)

    Francisco, Glen; Tillman, Jennifer; Hanna, Keith; Heubusch, Jeff; Ayers, Robert

    2007-04-01

    for creating initial alerts (we refer to this as software-level detection); the next-level building block, immersive 3D visual assessment for situational awareness and for managing the reaction process (we refer to this as automated intelligent situational awareness); and a third building block, wide-area command and control capabilities that allow control from a remote location (we refer to this as the management and process control building block), which integrates the lower-level building elements. In addition, this paper describes three live installations of complete, total systems that incorporate visible and thermal cameras as well as advanced video analytics. Discussion of both system elements and design is extensive.

  4. Functions of Code-Switching among Iranian Advanced and Elementary Teachers and Students

    ERIC Educational Resources Information Center

    Momenian, Mohammad; Samar, Reza Ghafar

    2011-01-01

    This paper reports on the findings of a study carried out on advanced and elementary teachers' and students' functions and patterns of code-switching in Iranian English classrooms. This concept has been examined less in L2 (second language) classroom contexts than in natural contexts outside the classroom. Therefore, besides reporting on the…

  5. Grammar Coding in the "Oxford Advanced Learner's Dictionary of Current English."

    ERIC Educational Resources Information Center

    Wekker, Herman

    1992-01-01

    Focuses on the revised system of grammar coding for verbs in the fourth edition of the "Oxford Advanced Learner's Dictionary of Current English" (OALD4), comparing it with two other similar dictionaries. The OALD4 is shown to be more favorable on many criteria than the comparable dictionaries. (16 references) (VWL)

  6. Issues and advances in research methods on video games and cognitive abilities

    PubMed Central

    Sobczyk, Bart; Dobrowolski, Paweł; Skorko, Maciek; Michalak, Jakub; Brzezicka, Aneta

    2015-01-01

    The impact of video game playing on cognitive abilities has been the focus of numerous studies over the last 10 years. Some cross-sectional comparisons indicate the cognitive advantages of video game players (VGPs) over non-players (NVGPs) and the benefits of video game training, while others fail to replicate these findings. Though there is an ongoing discussion over methodological practices and their impact on observable effects, some elementary issues, such as the representativeness of recruited VGP groups and the lack of genre differentiation, have not yet been widely addressed. In this article we present objective and declarative gameplay time data gathered from large samples in order to illustrate how playtime is distributed over VGP populations. The implications of these data are then discussed in the context of previous studies in the field. We also argue in favor of differentiating video games based on their genre when recruiting study samples, as this form of classification reflects the core mechanics that they utilize and therefore provides a measure of insight into which cognitive functions are likely to be engaged most. Additionally, we present the Covert Video Game Experience Questionnaire as an example of how this sort of classification can be applied during the recruitment process. PMID:26483717

  7. Issues and advances in research methods on video games and cognitive abilities.

    PubMed

    Sobczyk, Bart; Dobrowolski, Paweł; Skorko, Maciek; Michalak, Jakub; Brzezicka, Aneta

    2015-01-01

    The impact of video game playing on cognitive abilities has been the focus of numerous studies over the last 10 years. Some cross-sectional comparisons indicate the cognitive advantages of video game players (VGPs) over non-players (NVGPs) and the benefits of video game training, while others fail to replicate these findings. Though there is an ongoing discussion over methodological practices and their impact on observable effects, some elementary issues, such as the representativeness of recruited VGP groups and the lack of genre differentiation, have not yet been widely addressed. In this article we present objective and declarative gameplay time data gathered from large samples in order to illustrate how playtime is distributed over VGP populations. The implications of these data are then discussed in the context of previous studies in the field. We also argue in favor of differentiating video games based on their genre when recruiting study samples, as this form of classification reflects the core mechanics that they utilize and therefore provides a measure of insight into which cognitive functions are likely to be engaged most. Additionally, we present the Covert Video Game Experience Questionnaire as an example of how this sort of classification can be applied during the recruitment process. PMID:26483717

  8. A no-reference bitstream-based perceptual model for video quality estimation of videos affected by coding artifacts and packet losses

    NASA Astrophysics Data System (ADS)

    Pandremmenou, K.; Shahid, M.; Kondi, L. P.; Lövström, B.

    2015-03-01

    In this work, we propose a No-Reference (NR) bitstream-based model for predicting the quality of H.264/AVC video sequences affected by both compression artifacts and transmission impairments. The proposed model is based on a feature extraction procedure, where a large number of features are calculated from the packet-loss-impaired bitstream. Many of the features are proposed for the first time in this work, and the specific feature set as a whole is applied for the first time to NR video quality prediction. All feature observations are taken as input to the Least Absolute Shrinkage and Selection Operator (LASSO) regression method. LASSO indicates the most important features, and using only them, it is possible to estimate the Mean Opinion Score (MOS) with high accuracy. Indicatively, we point out that only 13 features are able to produce a Pearson correlation coefficient of 0.92 with the MOS. Interestingly, the performance statistics we computed in order to assess our method for predicting the Structural Similarity Index and the Video Quality Metric are equally good. Thus, the obtained experimental results verified the suitability of the features selected by LASSO as well as the ability of LASSO to make accurate predictions through sparse modeling.
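
    As a rough illustration of how LASSO performs this kind of sparse feature selection, the sketch below fits a small synthetic problem with cyclic coordinate descent and soft-thresholding. The synthetic data, penalty value, and solver are assumptions for illustration; the paper's bitstream features and MOS data are not reproduced here:

```python
import numpy as np

def lasso_cd(X, y, alpha, n_iter=200):
    """Minimize (1/2n)||y - Xw||^2 + alpha*||w||_1 by cyclic coordinate
    descent with soft-thresholding; assumes standardized columns."""
    n, p = X.shape
    w = np.zeros(p)
    r = y.copy()                      # residual for w = 0
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * w[j]       # put feature j's contribution back
            rho = X[:, j] @ r / n
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0)  # soft-threshold
            r -= X[:, j] * w[j]
    return w

# Synthetic stand-in for the feature/score data: 2 informative features, 8 noise.
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize columns
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * rng.standard_normal(n)
w = lasso_cd(X, y, alpha=0.1)              # L1 penalty suppresses noise features
```

    The L1 penalty drives the coefficients of uninformative features toward zero, which is exactly the property the authors exploit to keep only a handful of quality-relevant features.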

  9. Advances in low-power visible/thermal IR video image fusion hardware

    NASA Astrophysics Data System (ADS)

    Wolff, Lawrence B.; Socolinsky, Diego A.; Eveland, Christopher K.; Reese, C. E.; Bender, E. J.; Wood, M. V.

    2005-03-01

    Equinox Corporation has developed two new video board products for real-time image fusion of visible (or intensified visible/near-infrared) and thermal (emissive) infrared video. These products can provide unique capabilities to the dismounted soldier, maritime/naval operations and Unmanned Aerial Vehicles (UAVs) with low-power, lightweight, compact and inexpensive FPGA video fusion hardware. For several years Equinox Corporation has been studying and developing image fusion methodologies using the complementary modalities of the visible and thermal infrared wavebands including applications to face recognition, tracking, sensor development and fused image visualization. The video board products incorporate Equinox's proprietary image fusion algorithms into an FPGA architecture with embedded programmable capability. Currently included are (1) user interactive image fusion algorithms that go significantly beyond standard "A+B" fusion providing an intuitive color visualization invariant to distracting illumination changes, (2) generalized image co-registration to compensate for parallax, scale and rotation differences between visible/intensified and thermal IR, as well as non-linear optical and display distortion, and (3) automatic gain control (AGC) for dynamic range adaptation.
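
    Baseline "A+B" fusion of the two modalities can be sketched as a normalize-then-blend operation. This is a minimal illustration under assumed function names; as the abstract notes, the boards' proprietary fusion algorithms go significantly beyond a weighted sum:

```python
import numpy as np

def normalize(band):
    """Stretch a band to [0, 1]; a crude stand-in for automatic gain control."""
    band = band.astype(float)
    lo, hi = band.min(), band.max()
    return (band - lo) / (hi - lo) if hi > lo else np.zeros_like(band)

def fuse(visible, thermal, w=0.5):
    """Baseline 'A+B' fusion: normalize each modality, then blend them."""
    return w * normalize(visible) + (1.0 - w) * normalize(thermal)
```

    The per-band normalization is what makes such a blend tolerably stable under illumination changes; the products' color visualization addresses the same problem far more thoroughly.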

  10. Advanced surveillance systems: combining video and thermal imagery for pedestrian detection

    NASA Astrophysics Data System (ADS)

    Torresan, Helene; Turgeon, Benoit; Ibarra-Castanedo, Clemente; Hebert, Patrick; Maldague, Xavier P.

    2004-04-01

    In the current context of increased surveillance and security, more sophisticated surveillance systems are needed. One idea relies on the use of pairs of video (visible spectrum) and thermal infrared (IR) cameras located around premises of interest. To automate the system, a dedicated image processing approach is required, which is described in the paper. The first step in the proposed study is to collect a database of known scenarios, both indoor and outdoor, with a few pedestrians. These image sequences (video and TIR) are synchronized, geometrically corrected, and temperature calibrated. The next step is to develop a segmentation strategy to extract the regions of interest (ROI) corresponding to pedestrians in the images. The retained strategy exploits the motion in the sequences. Next, the ROIs are grouped from image to image, separately for both video and TIR sequences, before a fusion algorithm proceeds to track and detect humans. This ensures a more robust performance. Finally, specific criteria of size and temperature relevant to humans are introduced as well.
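
    The motion-based ROI extraction step can be sketched with simple frame differencing. This is an illustrative simplification of a motion-exploiting segmentation strategy; the threshold value and function name are assumptions:

```python
import numpy as np

def motion_roi(prev, curr, thresh=50):
    """Bounding box of pixels that changed between consecutive frames,
    found by absolute frame differencing and thresholding."""
    moving = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    if not moving.any():
        return None                     # no motion detected in this pair
    rows, cols = np.nonzero(moving)
    return rows.min(), rows.max(), cols.min(), cols.max()
```

    A real system would run such a detector independently on the video and TIR streams and then fuse the resulting ROIs, as the abstract describes.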

  11. Digital video technologies and their network requirements

    SciTech Connect

    R. P. Tsang; H. Y. Chen; J. M. Brandt; J. A. Hutchins

    1999-11-01

    Coded digital video signals are considered to be one of the most difficult data types to transport due to their real-time requirements and high bit rate variability. In this study, the authors discuss the coding mechanisms incorporated by the major compression standards bodies, i.e., JPEG and MPEG, as well as more advanced coding mechanisms such as wavelet and fractal techniques. The relationship between the applications which use these coding schemes and their network requirements is the major focus of this study. Specifically, the authors relate network latency, channel transmission reliability, random access speed, buffering and network bandwidth with the various coding techniques as a function of the applications which use them. Such applications include High-Definition Television, Video Conferencing, Computer-Supported Collaborative Work (CSCW), and Medical Imaging.

  12. Adapting hierarchical bidirectional inter prediction on a GPU-based platform for 2D and 3D H.264 video coding

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sánchez, Rafael; Martínez, José Luis; Cock, Jan De; Fernández-Escribano, Gerardo; Pieters, Bart; Sánchez, José L.; Claver, José M.; de Walle, Rik Van

    2013-12-01

    The H.264/AVC video coding standard introduces some improved tools in order to increase compression efficiency. Moreover, the multi-view extension of H.264/AVC, called H.264/MVC, adopts many of them. Among the new features, variable block-size motion estimation is one which contributes to high coding efficiency. Furthermore, it defines a different prediction structure that includes hierarchical bidirectional pictures, outperforming traditional Group of Pictures patterns in both scenarios: single-view and multi-view. However, these video coding techniques have high computational complexity. Several techniques have been proposed in the literature over the last few years which are aimed at accelerating the inter prediction process, but there are no works focusing on bidirectional prediction or hierarchical prediction. In this article, with the emergence of many-core processors and accelerators, a step forward is taken towards an implementation of an H.264/AVC and H.264/MVC inter prediction algorithm on a graphics processing unit. The results show a negligible rate-distortion loss with a time reduction of up to 98% for the complete H.264/AVC encoder.

  13. Impact of Video Laryngoscopy on Advanced Airway Management by Critical Care Transport Paramedics and Nurses Using the CMAC Pocket Monitor

    PubMed Central

    Boehringer, Bradley; Choate, Michael; Hurwitz, Shelley; Tilney, Peter V. R.; Judge, Thomas

    2015-01-01

    Accurate endotracheal intubation for patients in extremis or at risk of physiologic decompensation is the gold standard for emergency medicine. Field intubation is a complex process, and time to intubation, number of attempts, and hypoxia have all been shown to correlate with increases in morbidity and mortality. Expanding laryngoscope technology which incorporates active video, in addition to direct laryngoscopy, offers providers improved and varied tools to employ in management of the advanced airway. Over a nine-year period a helicopter emergency medical services team, composed of a flight paramedic and flight nurse, intended to intubate 790 patients. Comparative data analysis was performed and demonstrated that the introduction of the CMAC video laryngoscope improved nearly every measure of success in airway management. Overall intubation success increased from 94.9% to 99.0%, first pass success rates increased from 75.4% to 94.9%, combined first and second pass success rates increased from 89.2% to 97.4%, and mean number of intubation attempts decreased from 1.33 to 1.08. PMID:26167501

  14. Aerodynamic analysis of three advanced configurations using the TranAir full-potential code

    NASA Technical Reports Server (NTRS)

    Madson, M. D.; Carmichael, R. L.; Mendoza, J. P.

    1989-01-01

    Computational results are presented for three advanced configurations: the F-16A with wing tip missiles and under wing fuel tanks, the Oblique Wing Research Aircraft, and an Advanced Turboprop research model. These results were generated by the latest version of the TranAir full potential code, which solves for transonic flow over complex configurations. TranAir embeds a surface paneled geometry definition in a uniform rectangular flow field grid, thus avoiding the use of surface conforming grids, and decoupling the grid generation process from the definition of the configuration. The new version of the code locally refines the uniform grid near the surface of the geometry, based on local panel size and/or user input. This method distributes the flow field grid points much more efficiently than the previous version of the code, which solved for a grid that was uniform everywhere in the flow field. TranAir results are presented for the three configurations and are compared with wind tunnel data.

  15. Advances and future needs in particle production and transport code developments

    SciTech Connect

    Mokhov, N.V.; /Fermilab

    2009-12-01

    The next generation of accelerators and ever expanding needs of existing accelerators demand new developments and additions to Monte-Carlo codes, with an emphasis on enhanced modeling of elementary particle and heavy-ion interactions and transport. Challenges arise from extremely high beam energies and beam power, increasing complexity of accelerators and experimental setups, as well as design, engineering and performance constraints. All these put unprecedented requirements on the accuracy of particle production predictions, the capability and reliability of the codes used in planning new accelerator facilities and experiments, the design of machine, target and collimation systems, detectors and radiation shielding, and minimization of their impact on the environment. Recent advances in widely-used general-purpose all-particle codes are described for the most critical modules such as particle production event generators, elementary particle and heavy ion transport in an energy range which spans up to 17 decades, nuclide inventory and macroscopic impact on materials, and dealing with complex geometry of accelerator and detector structures. Future requirements for developing physics models and Monte-Carlo codes are discussed.

  16. Application of advanced computational codes in the design of an experiment for a supersonic throughflow fan rotor

    NASA Technical Reports Server (NTRS)

    Wood, Jerry R.; Schmidt, James F.; Steinke, Ronald J.; Chima, Rodrick V.; Kunik, William G.

    1987-01-01

    Increased emphasis on sustained supersonic or hypersonic cruise has revived interest in the supersonic throughflow fan as a possible component in advanced propulsion systems. Use of a fan that can operate with a supersonic inlet axial Mach number is attractive from the standpoint of reducing the inlet losses incurred in diffusing the flow from a supersonic flight Mach number to a subsonic one at the fan face. The design of the experiment using advanced computational codes to calculate the components required is described. The rotor was designed using existing turbomachinery design and analysis codes modified to handle fully supersonic axial flow through the rotor. A two-dimensional axisymmetric throughflow design code plus a blade element code were used to generate fan rotor velocity diagrams and blade shapes. A quasi-three-dimensional, thin shear layer Navier-Stokes code was used to assess the performance of the fan rotor blade shapes. The final design was stacked and checked for three-dimensional effects using a three-dimensional Euler code interactively coupled with a two-dimensional boundary layer code. The nozzle design in the expansion region was analyzed with a three-dimensional parabolized viscous code which corroborated the results from the Euler code. A translating supersonic diffuser was designed using these same codes.

  17. Advanced Pellet Cladding Interaction Modeling Using the US DOE CASL Fuel Performance Code: Peregrine

    SciTech Connect

    Jason Hales; Various

    2014-06-01

    The US DOE’s Consortium for Advanced Simulation of LWRs (CASL) program has undertaken an effort to enhance and develop modeling and simulation tools for a virtual reactor application, including high fidelity neutronics, fluid flow/thermal hydraulics, and fuel and material behavior. The fuel performance analysis efforts aim to provide 3-dimensional capabilities for single and multiple rods to assess safety margins and the impact of plant operation and fuel rod design on the fuel thermo-mechanical-chemical behavior, including Pellet-Cladding Interaction (PCI) failures and CRUD-Induced Localized Corrosion (CILC) failures in PWRs. [1-3] The CASL fuel performance code, Peregrine, is an engineering scale code that is built upon the MOOSE/ELK/FOX computational FEM framework, which is also common to the fuel modeling framework, BISON [4,5]. Peregrine uses both 2-D and 3-D geometric fuel rod representations and contains a materials properties and fuel behavior model library for the UO2 and Zircaloy system common to PWR fuel derived from both open literature sources and the FALCON code [6]. The primary purpose of Peregrine is to accurately calculate the thermal, mechanical, and chemical processes active throughout a single fuel rod during operation in a reactor, for both steady state and off-normal conditions.

  18. Advanced Pellet-Cladding Interaction Modeling using the US DOE CASL Fuel Performance Code: Peregrine

    SciTech Connect

    Montgomery, Robert O.; Capps, Nathan A.; Sunderland, Dion J.; Liu, Wenfeng; Hales, Jason; Stanek, Chris; Wirth, Brian D.

    2014-06-15

    The US DOE’s Consortium for Advanced Simulation of LWRs (CASL) program has undertaken an effort to enhance and develop modeling and simulation tools for a virtual reactor application, including high fidelity neutronics, fluid flow/thermal hydraulics, and fuel and material behavior. The fuel performance analysis efforts aim to provide 3-dimensional capabilities for single and multiple rods to assess safety margins and the impact of plant operation and fuel rod design on the fuel thermo-mechanical-chemical behavior, including Pellet-Cladding Interaction (PCI) failures and CRUD-Induced Localized Corrosion (CILC) failures in PWRs. [1-3] The CASL fuel performance code, Peregrine, is an engineering scale code that is built upon the MOOSE/ELK/FOX computational FEM framework, which is also common to the fuel modeling framework, BISON [4,5]. Peregrine uses both 2-D and 3-D geometric fuel rod representations and contains a materials properties and fuel behavior model library for the UO2 and Zircaloy system common to PWR fuel derived from both open literature sources and the FALCON code [6]. The primary purpose of Peregrine is to accurately calculate the thermal, mechanical, and chemical processes active throughout a single fuel rod during operation in a reactor, for both steady state and off-normal conditions.

  19. The COPERNIC3 project: how AREVA is successfully developing an advanced global fuel rod performance code

    SciTech Connect

    Garnier, Ch.; Mailhe, P.; Sontheimer, F.; Landskron, H.; Deuble, D.; Arimescu, V.I.; Billaux, M.

    2007-07-01

    Fuel performance is a key factor for minimizing operating costs in nuclear plants. One important aspect of fuel performance is fuel rod design, based upon reliable tools able to verify the safety of current fuel solutions, prevent potential issues in new core management schemes and guide the invention of tomorrow's fuels. AREVA is developing its future global fuel rod code COPERNIC3, which is able to calculate the thermal-mechanical behavior of advanced fuel rods in nuclear plants. Some of the best practices to achieve this goal are described by reviewing the three pillars of a fuel rod code: the database, the modelling, and the computer and numerical aspects. First, the COPERNIC3 database content is described, along with the tools developed to exploit the data effectively. An overview of the main modelling aspects is then given, emphasizing the thermal, fission gas release and mechanical sub-models. In the last part, numerical solutions are detailed that increase the computational performance of the code, with a presentation of software configuration management solutions. (authors)

  20. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC).

    SciTech Connect

    Schultz, Peter Andrew

    2011-12-01

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and the constitutive models they inform or generate. This report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum scale modeling and simulation (M&S), and a plan to incrementally incorporate effective V&V into subcontinuum scale M&S destined for use in the NEAMS Waste IPSC work flow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum scale phenomena.

  1. Video Golf

    NASA Technical Reports Server (NTRS)

    1995-01-01

    George Nauck of ENCORE!!! invented and markets the Advanced Range Performance (ARPM) Video Golf System for measuring the result of a golf swing. After Nauck requested their assistance, Marshall Space Flight Center scientists suggested video and image processing/computing technology, and provided leads on commercial companies that dealt with the pertinent technologies. Nauck contracted with Applied Research Inc. to develop a prototype. The system employs an elevated camera, which sits behind the tee and follows the flight of the ball down range, catching the point of impact and subsequent roll. Instant replay of the video on a PC monitor at the tee allows measurement of the carry and roll. The unit measures distance and deviation from the target line, as well as distance from the target when one is selected. The information serves as an immediate basis for making adjustments or as a record of skill level progress for golfers.

  2. Advancing Kohlberg through Codes: Using Professional Codes To Reach the Moral Reasoning Objective in Undergraduate Ethics Courses.

    ERIC Educational Resources Information Center

    Whitehouse, Ginny; Ingram, Michael T.

    The development of moral reasoning as a key course objective in undergraduate communication ethics classes can be accomplished by the critical and deliberate introduction of professional codes of ethics and the internalization of values found in those codes. Notably, "fostering moral reasoning skills" and "surveying current ethical practice" were…

  3. A fast mode decision algorithm for multiview auto-stereoscopic 3D video coding based on mode and disparity statistic analysis

    NASA Astrophysics Data System (ADS)

    Ding, Cong; Sang, Xinzhu; Zhao, Tianqi; Yan, Binbin; Leng, Junmin; Yuan, Jinhui; Zhang, Ying

    2012-11-01

    Multiview video coding (MVC) is essential for applications of auto-stereoscopic three-dimensional displays. However, the computational complexity of MVC encoders is extremely high, so fast algorithms are very desirable for practical MVC applications. A fast macroblock (MB) mode selection algorithm is presented, based on joint early termination, selective inter-view prediction, and optimization of the Inter8×8 mode decision process. Compared with the full mode decision in MVC, the experimental results show that the proposed algorithm reduces encoding time by 78.13% on average, and by up to 90.21%, with only a slight increase in bit rate and a small loss in PSNR.
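The early-termination idea behind such fast mode decision schemes can be sketched in a few lines: candidate modes are evaluated in order of likelihood, and the search stops as soon as a mode's rate-distortion cost falls below a threshold, skipping the remaining (usually more expensive) candidates. The mode names, costs, and threshold below are illustrative assumptions, not the paper's actual values:

```python
def fast_mode_decision(rd_cost, modes, early_stop_threshold):
    """Evaluate candidate macroblock modes in probability order; stop
    early once a mode's rate-distortion cost is below the threshold."""
    best_mode, best_cost = None, float("inf")
    checked = 0
    for mode in modes:
        cost = rd_cost(mode)
        checked += 1
        if cost < best_cost:
            best_mode, best_cost = mode, cost
        if best_cost < early_stop_threshold:  # early termination
            break
    return best_mode, best_cost, checked

# Toy costs: SKIP is cheap enough that Inter8x8 and intra modes
# are never evaluated at all.
costs = {"SKIP": 50, "Inter16x16": 120, "Inter8x8": 110, "Intra4x4": 300}
mode, cost, checked = fast_mode_decision(
    costs.get, ["SKIP", "Inter16x16", "Inter8x8", "Intra4x4"],
    early_stop_threshold=80)
```

The speedup comes entirely from the `checked` count staying small for typical macroblocks; the threshold trades encoding time against the small bit-rate and PSNR penalties the abstract reports.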

  4. Flight investigation of cockpit-displayed traffic information utilizing coded symbology in an advanced operational environment

    NASA Technical Reports Server (NTRS)

    Abbott, T. S.; Moen, G. C.; Person, L. H., Jr.; Keyser, G. L., Jr.; Yenni, K. R.; Garren, J. F., Jr.

    1980-01-01

    Traffic symbology was encoded to provide additional information concerning the traffic, which was displayed on the pilot's electronic horizontal situation indicators (EHSI). A research airplane representing an advanced operational environment was used to assess the benefit of coded traffic symbology in a realistic work-load environment. Traffic scenarios, involving both conflict-free and conflict situations, were employed. Subjective pilot commentary was obtained through the use of a questionnaire and extensive pilot debriefings. These results grouped conveniently under two categories: display factors and task performance. A major item under the display factor category was the problem of display clutter. The primary contributors to clutter were the use of large map-scale factors, the use of traffic data blocks, and the presentation of more than a few airplanes. In terms of task performance, the cockpit-displayed traffic information was found to provide excellent overall situation awareness. Additionally, pilots were able to maintain the separation prescribed during these tests.

  5. Assessment of SFR fuel pin performance codes under advanced fuel for minor actinide transmutation

    SciTech Connect

    Bouineau, V.; Lainet, M.; Chauvin, N.; Pelletier, M.

    2013-07-01

    Americium is a strong contributor to the long-term radiotoxicity of high-activity nuclear waste. Transmutation by irradiation in nuclear reactors of long-lived nuclides like {sup 241}Am is, therefore, an option for reducing the radiotoxicity and residual power of waste packages, as well as the repository area. In the SUPERFACT experiment, four different oxide fuels containing high and low concentrations of {sup 237}Np and {sup 241}Am, representing the homogeneous and heterogeneous in-pile recycling concepts, were irradiated in the PHENIX reactor. The behavior of advanced fuel materials containing minor actinides needs to be fully characterized, understood and modeled in order to optimize the design of such fuel elements and to evaluate their performance. This paper assesses the current predictability of the fuel performance codes TRANSURANUS and GERMINAL V2 on the basis of post-irradiation examinations of the SUPERFACT experiment for pins with low minor actinide content. Their predictions have been compared to measured data in terms of geometrical changes of fuel and cladding, fission gas behavior, and actinide and fission product distributions. The results are in good agreement with the experimental results, although improvements are also pointed out for further studies, especially if larger minor actinide contents are to be taken into account in the codes. (authors)

  6. Non-coding RNAs deregulation in oral squamous cell carcinoma: advances and challenges.

    PubMed

    Yu, T; Li, C; Wang, Z; Liu, K; Xu, C; Yang, Q; Tang, Y; Wu, Y

    2016-05-01

    Oral squamous cell carcinoma (OSCC) is a common cause of cancer death. Despite decades of improvements in exploring new treatments and considerable advances in multimodality treatment, satisfactory cure rates have not yet been reached. The difficulty of early diagnosis and the high prevalence of metastasis associated with OSCC contribute to its dismal prognosis. In the last few decades, emerging data from both tumor biology and clinical trials have led to growing interest in research on predictive biomarkers. Non-coding RNAs (ncRNAs) are promising biomarkers. Among numerous kinds of ncRNAs, short ncRNAs, such as microRNAs (miRNAs), have been extensively investigated with regard to their biogenesis, function, and importance in carcinogenesis. In contrast to miRNAs, much less is known about the functions of long non-coding RNAs (lncRNAs) in human cancers, especially in OSCC. The present review highlights the roles of miRNAs and newly discovered lncRNAs in oral tumorigenesis and metastasis, and their clinical implications. PMID:26370423

  7. Development and Validation of ARKAS cellule: An Advanced Core-Bowing Analysis Code for Fast Reactors

    SciTech Connect

    Ohta, Hirokazu; Yokoo, Takeshi; Nakagawa, Masatoshi; Matsuyama, Shinichiro

    2004-05-15

    An advanced analysis code, ARKAS cellule, has been developed to determine the core distortion and the mechanical behavior of fast reactors. In this code, each hexagonal subassembly duct is represented by a folded thin plate structure divided into a user-specified number of shell elements so that the interduct contact forms and the cross-sectional distortion effect of each duct are properly taken into account. In this paper, the numerical model of the ARKAS cellule code is introduced, and the analytical results for two validation problems are presented. From a single duct compaction analysis, the first validation problem, it is clarified that the new analytical model is applicable to simulating the change of duct compaction stiffness that depends on the loading conditions such as the loading pad forms and the number of contact faces. The second validation analysis has been conducted by comparison with the experimental values obtained by the National Nuclear Corporation Limited in the United Kingdom using the core restraint uniplanar experimental rig (CRUPER), an ex-reactor rig in which a cluster of 91 short ducts is compressed by 30 movable peripheral rams toward the center of the cluster in seven stages. The analysis clarified that the predictions obtained using ARKAS cellule agree well with the measured ram loads and interwrapper gap widths during the compaction sequence. One may conclude that ARKAS cellule is valid for quantitative analysis of the core mechanical behavior and will be particularly useful for the evaluation of transient deformation of core assemblies during accidents in which the distortion of loading pads has important effects on obtaining favorable reactivity feedback.

  8. A Complex-Geometry Validation Experiment for Advanced Neutron Transport Codes

    SciTech Connect

    David W. Nigg; Anthony W. LaPorta; Joseph W. Nielsen; James Parry; Mark D. DeHart; Samuel E. Bays; William F. Skerjanc

    2013-11-01

    The Idaho National Laboratory (INL) has initiated a focused effort to upgrade legacy computational reactor physics software tools and protocols used for support of core fuel management and experiment management in the Advanced Test Reactor (ATR) and its companion critical facility (ATRC) at the INL. This will be accomplished through the introduction of modern high-fidelity computational software and protocols, with appropriate new Verification and Validation (V&V) protocols, over the next 12-18 months. Stochastic and deterministic transport theory based reactor physics codes and nuclear data packages that support this effort include MCNP5[1], SCALE/KENO6[2], HELIOS[3], SCALE/NEWT[2], and ATTILA[4]. Furthermore, a capability for sensitivity analysis and uncertainty quantification based on the TSUNAMI[5] system has also been implemented. Finally, we are also evaluating the Serpent[6] and MC21[7] codes, as additional verification tools in the near term as well as for possible applications to full three-dimensional Monte Carlo based fuel management modeling in the longer term. On the experimental side, several new benchmark-quality code validation measurements based on neutron activation spectrometry have been conducted using the ATRC. Results for the first four experiments, focused on neutron spectrum measurements within the Northwest Large In-Pile Tube (NW LIPT) and in the core fuel elements surrounding the NW LIPT and the diametrically opposite Southeast IPT, have been reported [8,9]. A fifth, very recent, experiment focused on detailed measurements of the element-to-element core power distribution is summarized here, and examples of the use of the measured data for validation of corresponding MCNP5, HELIOS, NEWT, and Serpent computational models using modern least-squares adjustment methods are provided.

  9. From Video to Photo

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Ever wonder whether a still shot from a home video could serve as a "picture perfect" photograph worthy of being framed and proudly displayed on the mantle? Wonder no more. A critical imaging code used to enhance video footage taken from spaceborne imaging instruments is now available within a portable photography tool capable of producing an optimized, high-resolution image from multiple video frames.
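The core idea of producing one optimized image from multiple video frames can be illustrated with the simplest multi-frame technique, temporal averaging of co-registered frames (a generic sketch, not NASA's actual imaging code; the function names and noise model are assumptions):

```python
import numpy as np

def stack_frames(frames):
    """Average co-registered video frames; independent per-frame noise
    shrinks roughly as 1/sqrt(N) while static scene content is preserved."""
    return np.mean(np.stack(frames, axis=0), axis=0)

# Simulate 64 noisy frames of a flat grey scene and stack them.
rng = np.random.default_rng(0)
scene = np.full((16, 16), 100.0)
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(64)]
stacked = stack_frames(frames)
# The residual noise in `stacked` is far smaller than in any single frame.
```

Real multi-frame enhancement tools add sub-pixel registration and sharpening on top of this, which is how they can exceed the resolution of any individual video frame.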

  10. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    SciTech Connect

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  11. An Extension of the Athena++ Code Framework for GRMHD Based on Advanced Riemann Solvers and Staggered-mesh Constrained Transport

    NASA Astrophysics Data System (ADS)

    White, Christopher J.; Stone, James M.; Gammie, Charles F.

    2016-08-01

    We present a new general relativistic magnetohydrodynamics (GRMHD) code integrated into the Athena++ framework. Improving upon the techniques used in most GRMHD codes, ours allows the use of advanced, less diffusive Riemann solvers, in particular HLLC and HLLD. We also employ a staggered-mesh constrained transport algorithm suited for curvilinear coordinate systems in order to maintain the divergence-free constraint of the magnetic field. Our code is designed to work with arbitrary stationary spacetimes in one, two, or three dimensions, and we demonstrate its reliability through a number of tests. We also report on its promising performance and scalability.

  12. Recent advances in coding theory for near error-free communications

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  13. Motion estimation optimization in a MPEG-1-like video coding scheme for low-bit-rate applications

    NASA Astrophysics Data System (ADS)

    Roser, Miguel; Villegas, Paulo

    1994-05-01

    In this paper we present work based on a coding algorithm for visual information that follows the International Standard ISO-IEC IS 11172, "Coding of Moving Pictures and Associated Audio for Digital Storage Media up to about 1.5 Mbit/s", widely known as MPEG-1. The main intention in the definition of the MPEG-1 standard was to provide a large degree of flexibility for use in many different applications. The interest of this paper is to adapt the MPEG-1 scheme for low-bitrate operation and to optimize it for special situations, for example a talking head with little movement, which is a usual situation in videotelephony applications. An adapted and compatible MPEG-1 scheme, previously developed, able to operate at p×8 kbit/s, is used in this work. Looking for a low-complexity scheme, and taking into account that motion estimation is the most expensive step in terms of consumed computer time (almost 80% of the total computer time is spent on the ME), an improvement of the motion estimation module based on the use of a new search pattern is presented in this paper.
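The paper's specific search pattern is not reproduced in this record, but pattern-based motion estimation in general can be sketched with the classic small-diamond search: starting from the co-located block, the best of the centre and its four neighbours is chosen repeatedly until the centre wins. This is a generic illustration under assumed names and a simple SAD cost, not the authors' algorithm:

```python
import numpy as np

def sad(ref, cur, y, x, by, bx, bs):
    """Sum of absolute differences between the current block at (by, bx)
    and a candidate block at (y, x) in the reference frame
    (None if the candidate falls outside the frame)."""
    if not (0 <= y <= ref.shape[0] - bs and 0 <= x <= ref.shape[1] - bs):
        return None
    return np.abs(ref[y:y+bs, x:x+bs].astype(int)
                  - cur[by:by+bs, bx:bx+bs].astype(int)).sum()

def diamond_search(ref, cur, by, bx, bs=8):
    """Small-diamond search: move the candidate one pixel at a time
    toward lower SAD until the centre is the best candidate."""
    y, x = by, bx
    while True:
        candidates = [(y, x), (y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
        costs = [(sad(ref, cur, cy, cx, by, bx, bs), cy, cx)
                 for cy, cx in candidates]
        costs = [c for c in costs if c[0] is not None]
        best = min(costs)
        if (best[1], best[2]) == (y, x):   # centre wins: converged
            return y - by, x - bx          # motion vector (dy, dx)
        y, x = best[1], best[2]

# A bright block shifted by (0, +2) between frames: the search
# recovers the motion vector without an exhaustive full search.
ref = np.zeros((32, 32)); ref[8:16, 10:18] = 255
cur = np.zeros((32, 32)); cur[8:16, 8:16] = 255
```

Compared with a full search over a ±7-pixel window (225 candidates), the diamond search here evaluates only a handful of positions, which is exactly the kind of complexity reduction the paper targets for the ME module.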

  14. Frequency sensitivity for video compression

    NASA Astrophysics Data System (ADS)

    Seo, Guiwon; Lee, Jonghwa; Lee, Chulhee

    2014-03-01

    We investigate the frequency sensitivity of the human visual system, which reacts differently at different frequencies in video coding. Based on this observation, we used different quantization steps for different frequency components in order to explore the possibility of improving coding efficiency while maintaining perceptual video quality. In other words, small quantization steps were used for sensitive frequency components while large quantization steps were used for less sensitive frequency components. We performed subjective testing to examine the perceptual video quality of video sequences encoded by the proposed method. The experimental results showed that a reduction in bitrate is possible without causing a decrease in perceptual video quality.
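The quantization strategy described above, small steps for perceptually sensitive low frequencies and large steps for less sensitive high frequencies, can be sketched on an 8×8 DCT block. The step-size model below (a linear ramp in frequency) and its parameters are illustrative assumptions, not the authors' measured sensitivity values:

```python
import numpy as np

def frequency_weighted_quantize(dct_block, base_step=8.0, slope=2.0):
    """Quantize an 8x8 DCT block with step sizes that grow with frequency:
    low-frequency coefficients (top-left) get small steps, high-frequency
    coefficients (bottom-right) get progressively larger steps."""
    u, v = np.indices(dct_block.shape)
    steps = base_step + slope * (u + v)          # step grows with u + v
    return np.round(dct_block / steps) * steps   # quantize, then reconstruct

# All coefficients start at 40: the DC term (step 8) is preserved exactly,
# while the highest-frequency term (step 8 + 2*14 = 36) is coarsely rounded.
block = np.full((8, 8), 40.0)
recon = frequency_weighted_quantize(block)
```

Coarser high-frequency steps reduce the bits spent on components the eye barely notices, which is how the bitrate reduction without perceptual quality loss is achieved.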

  15. Intercode Advanced Fuels and Cladding Comparison Using BISON, FRAPCON, and FEMAXI Fuel Performance Codes

    NASA Astrophysics Data System (ADS)

    Rice, Aaren

    As part of the Department of Energy's Accident Tolerant Fuels (ATF) campaign, new cladding designs and fuel types are being studied in order to help make nuclear energy a safer and more affordable source for power. This study focuses on the implementation and analysis of the SiC cladding and UN, UC, and U3Si2 fuels into three specific nuclear fuel performance codes: BISON, FRAPCON, and FEMAXI. These fuels boast a higher thermal conductivity and uranium density than traditional UO2 fuel which could help lead to longer times in a reactor environment. The SiC cladding has been studied for its reduced production of hydrogen gas during an accident scenario, however the SiC cladding is a known brittle and unyielding material that may fracture during PCMI (Pellet Cladding Mechanical Interaction). This work focuses on steady-state operation with advanced fuel and cladding combinations. By implementing and performing analysis work with these materials, it is possible to better understand some of the mechanical interactions that could be seen as limiting factors. In addition to the analysis of the materials themselves, a further analysis is done on the effects of using a fuel creep model in combination with the SiC cladding. While fuel creep is commonly ignored in the traditional UO2 fuel and Zircaloy cladding systems, fuel creep can be a significant factor in PCMI with SiC.

  16. Advanced turboprop noise prediction: Development of a code at NASA Langley based on recent theoretical results

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Dunn, M. H.; Padula, S. L.

    1986-01-01

    The development of a high speed propeller noise prediction code at Langley Research Center is described. The code utilizes two recent acoustic formulations in the time domain for subsonic and supersonic sources. The structure and capabilities of the code are discussed. Grid size study for accuracy and speed of execution on a computer is also presented. The code is tested against an earlier Langley code. Considerable increase in accuracy and speed of execution are observed. Some examples of noise prediction of a high speed propeller for which acoustic test data are available are given. A brisk derivation of formulations used is given in an appendix.

  17. Multiview video codec based on KTA techniques

    NASA Astrophysics Data System (ADS)

    Seo, Jungdong; Kim, Donghyun; Ryu, Seungchul; Sohn, Kwanghoon

    2011-03-01

    Multi-view video coding (MVC) is a video coding standard developed by MPEG and VCEG for multi-view video. It showed an average PSNR gain of 1.5 dB compared with view-independent coding by H.264/AVC. However, because the resolutions of multi-view video are getting higher for a more realistic 3D effect, a higher-performance video codec is needed. MVC adopted the hierarchical B-picture structure and inter-view prediction as core techniques. The hierarchical B-picture structure removes temporal redundancy, and inter-view prediction reduces inter-view redundancy by compensated prediction from the reconstructed neighboring views. Nevertheless, MVC has an inherent limitation in coding efficiency because it is based on H.264/AVC. To overcome this limit, an enhanced video codec for multi-view video based on the Key Technology Area (KTA) software is proposed. KTA is a high-efficiency video coding platform developed by the Video Coding Experts Group (VCEG) to pursue coding efficiency beyond H.264/AVC; its software showed better coding gain than H.264/AVC by using additional coding techniques. These techniques and inter-view prediction are implemented in the proposed codec, which showed high coding gain compared with the view-independent coding result by KTA. The results show that inter-view prediction can achieve higher efficiency in a multi-view video codec based on a high-performance video codec such as HEVC.
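The hierarchical B-picture structure mentioned above orders pictures so that each B-picture is predicted from already-coded pictures at lower temporal levels. A minimal sketch of that coding order for one GOP (GOP size 8 is assumed for illustration; picture 0 is the key picture closing the previous GOP):

```python
# Sketch of hierarchical B-picture coding order: the key picture first,
# then recursive interval midpoints; pictures at lower temporal levels
# serve as references for higher ones. Illustrative, not MVC-normative.

def hierarchical_order(gop):
    """Return (display_index, temporal_level) pairs in coding order."""
    order = [(gop, 0)]            # key picture closing the GOP
    pending = [(0, gop, 1)]       # intervals (lo, hi, level) to split
    while pending:
        lo, hi, lvl = pending.pop(0)
        if hi - lo < 2:
            continue
        mid = (lo + hi) // 2      # B-picture bi-predicted from lo and hi
        order.append((mid, lvl))
        pending.append((lo, mid, lvl + 1))
        pending.append((mid, hi, lvl + 1))
    return order
```

For a GOP of 8 this yields the familiar coding order 8, 4, 2, 6, 1, 3, 5, 7, with each picture's references already decoded when it is coded.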

  18. Modeling Constituent Redistribution in U-Pu-Zr Metallic Fuel Using the Advanced Fuel Performance Code BISON

    SciTech Connect

    Douglas Porter; Steve Hayes; Various

    2014-06-01

    The Advanced Fuels Campaign (AFC) metallic fuels currently being tested have higher zirconium and plutonium concentrations than those tested in the past in EBR reactors. Current metal fuel performance codes have limitations and deficiencies in predicting AFC fuel performance, particularly in the modeling of constituent distribution. No fully validated code exists due to sparse data and unknown modeling parameters. Our primary objective is to develop an initial analysis tool by incorporating state-of-the-art knowledge, constitutive models and properties of AFC metal fuels into the MOOSE/BISON (1) framework in order to analyze AFC metallic fuel tests.

  19. Classroom Videos in Professional Development

    ERIC Educational Resources Information Center

    Chavez, Alma Fabiola Rangel

    2007-01-01

    Due to the recent advances in video technology, an increased incorporation of videos and multimedia materials is used in teacher education, commonly for demonstration of good practices or as a reflection tool for teacher professional development. However, video cases can never fully replicate the complexity of working in a real classroom. Watching…

  20. Advancing methods for reliably assessing motivational interviewing fidelity using the Motivational Interviewing Skills Code

    PubMed Central

    Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W.; Imel, Zac E.; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C.

    2014-01-01

    The current paper presents novel methods for collecting MISC data and accurately assessing reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance-level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher compared to reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. PMID:25242192

  1. Advancing methods for reliably assessing motivational interviewing fidelity using the motivational interviewing skills code.

    PubMed

    Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W; Imel, Zac E; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C

    2015-02-01

    The current paper presents novel methods for collecting MISC data and accurately assessing reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance-level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher compared to reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. PMID:25242192
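One common way to quantify agreement between two coders on utterance-level behavior codes, of the kind this paper estimates, is Cohen's kappa. The sketch below is a generic illustration and does not reproduce the paper's three reliability estimation methods:

```python
# Minimal Cohen's kappa for two raters' utterance-level codes:
# chance-corrected agreement over parallel label sequences.
# Undefined (division by zero) if both raters use one identical label.
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    pa, pb = Counter(codes_a), Counter(codes_b)
    # chance agreement from each rater's marginal label frequencies
    expected = sum((pa[c] / n) * (pb[c] / n) for c in pa)
    return (observed - expected) / (1 - expected)
```

Session-level tallies collapse many utterances into one count per code, which is one reason tally-based reliability can look higher than utterance-based reliability computed this way.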

  2. Video on phone lines: technology and applications

    NASA Astrophysics Data System (ADS)

    Hsing, T. Russell

    1996-03-01

    Recent advances in communications, signal processing, and VLSI technology are fostering tremendous interest in transmitting high-speed digital data over ordinary telephone lines at bit rates substantially above the ISDN Basic Access rate (144 kbit/s). Two new technologies, high-bit-rate digital subscriber lines and asymmetric digital subscriber lines, promise transmission over most of the embedded loop plant at 1.544 Mbit/s and beyond. Stimulated by these research promises, rapid advances in video coding techniques, and standards activity, information networks around the globe are now exploring possible business opportunities for offering quality video services (such as distance learning, telemedicine, and telecommuting) through this high-speed digital transport capability in the copper loop plant. Visual communications for residential customers have become more feasible than ever, both technically and economically.

  3. Ideas for Advancing Code Sharing: A Different Kind of Hack Day

    NASA Astrophysics Data System (ADS)

    Teuben, P.; Allen, A.; Berriman, B.; DuPrie, K.; Hanisch, R. J.; Mink, J.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Wallin, J. F.

    2014-05-01

    How do we as a community encourage the reuse of software for telescope operations, data processing, and …? How can we support making codes used in research available for others to examine? Continuing the discussion from last year's Bring out your codes! BoF session, participants separated into groups to brainstorm ideas to mitigate factors which inhibit code sharing and nurture those which encourage code sharing. The BoF concluded with the sharing of ideas that arose from the brainstorming sessions and a brief summary by the moderator.

  4. Observations on computational methodologies for use in large-scale, gradient-based, multidisciplinary design incorporating advanced CFD codes

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Hou, G. J.-W.; Jones, H. E.; Taylor, A. C., III; Korivi, V. M.

    1992-01-01

    How a combination of various computational methodologies could reduce the enormous computational costs envisioned in using advanced CFD codes in gradient-based multidisciplinary design (MdD) optimization procedures is briefly outlined. The implications of these MdD requirements for advanced CFD codes are somewhat different from those imposed by single-discipline design. A means of satisfying these MdD requirements for gradient information is presented which appears to permit: (1) some leeway in the CFD solution algorithms which can be used; (2) an extension to 3-D problems; and (3) straightforward use of other computational methodologies. Many of these observations have previously been discussed as possibilities for doing parts of the problem more efficiently; the contribution here is observing how they fit together in a mutually beneficial way.

  5. A Mode Propagation Database Suitable for Code Validation Utilizing the NASA Glenn Advanced Noise Control Fan and Artificial Sources

    NASA Technical Reports Server (NTRS)

    Sutliff, Daniel L.

    2014-01-01

    The NASA Glenn Research Center's Advanced Noise Control Fan (ANCF) was developed in the early 1990s to provide a convenient test bed to measure and understand fan-generated acoustics, duct propagation, and radiation to the farfield. A series of tests were performed primarily for the use of code validation and tool validation. Rotating Rake mode measurements were acquired for parametric sets of: (1) mode blockage, (2) liner insertion loss, (3) short ducts, and (4) mode reflection.

  6. A Mode Propagation Database Suitable for Code Validation Utilizing the NASA Glenn Advanced Noise Control Fan and Artificial Sources

    NASA Technical Reports Server (NTRS)

    Sutliff, Daniel L.

    2014-01-01

    The NASA Glenn Research Center's Advanced Noise Control Fan (ANCF) was developed in the early 1990s to provide a convenient test bed to measure and understand fan-generated acoustics, duct propagation, and radiation to the farfield. A series of tests were performed primarily for the use of code validation and tool validation. Rotating Rake mode measurements were acquired for parametric sets of: (i) mode blockage, (ii) liner insertion loss, (iii) short ducts, and (iv) mode reflection.

  7. Advanced Subsonic Technology (AST) Area of Interest (AOI) 6: Develop and Validate Aeroelastic Codes for Turbomachinery

    NASA Technical Reports Server (NTRS)

    Gardner, Kevin D.; Liu, Jong-Shang; Murthy, Durbha V.; Kruse, Marlin J.; James, Darrell

    1999-01-01

    AlliedSignal Engines, in cooperation with NASA GRC (National Aeronautics and Space Administration Glenn Research Center), completed an evaluation of recently developed aeroelastic computer codes using test cases from the AlliedSignal Engines fan blisk and turbine databases. Test data included strain gage, performance, and steady-state pressure information obtained for conditions where synchronous or flutter vibratory conditions were found to occur. Aeroelastic codes evaluated included quasi 3-D UNSFLO (MIT Developed/AE Modified, Quasi 3-D Aeroelastic Computer Code), 2-D FREPS (NASA-Developed Forced Response Prediction System Aeroelastic Computer Code), and 3-D TURBO-AE (NASA/Mississippi State University Developed 3-D Aeroelastic Computer Code). Unsteady pressure predictions for the turbine test case were used to evaluate the forced response prediction capabilities of each of the three aeroelastic codes. Additionally, one of the fan flutter cases was evaluated using TURBO-AE. The UNSFLO and FREPS evaluation predictions showed good agreement with the experimental test data trends, but quantitative improvements are needed. UNSFLO over-predicted turbine blade response reductions, while FREPS under-predicted them. The inviscid TURBO-AE turbine analysis predicted no discernible blade response reduction, indicating the necessity of including viscous effects for this test case. For the TURBO-AE fan blisk test case, significant effort was expended getting the viscous version of the code to give converged steady flow solutions for the transonic flow conditions. Once converged, the steady solutions provided an excellent match with test data and the calibrated DAWES (AlliedSignal 3-D Viscous Steady Flow CFD Solver). However, efforts expended establishing quality steady-state solutions prevented exercising the unsteady portion of the TURBO-AE code during the present program. AlliedSignal recommends that unsteady pressure measurement data be obtained for both test cases examined.

  8. Dashboard Videos

    ERIC Educational Resources Information Center

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  9. Advancements and performance of iterative methods in industrial applications codes on CRAY parallel/vector supercomputers

    SciTech Connect

    Poole, G.; Heroux, M.

    1994-12-31

    This paper focuses on recent work in two widely used industrial applications codes with iterative methods. The ANSYS program, a general-purpose finite element code widely used in structural analysis applications, has now added an iterative solver option. Some results are given from real applications comparing performance with the traditional parallel/vector frontal solver used in ANSYS. Discussion of the applicability of iterative solvers as general-purpose solvers includes the topics of robustness as well as memory requirements and CPU performance. The FIDAP program is a widely used CFD code which uses iterative solvers routinely. A brief description of the preconditioners used and some performance enhancements for CRAY parallel/vector systems is given. The solution of large-scale applications in structures and CFD includes examples from industry problems solved on CRAY systems.
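As a generic illustration of the class of methods discussed (not the actual ANSYS or FIDAP solvers), a Jacobi-preconditioned conjugate gradient iteration for a symmetric positive-definite system looks like this:

```python
# Sketch of a Jacobi (diagonal) preconditioned conjugate gradient
# solver, the kind of iterative alternative to a direct frontal solver
# for large sparse SPD systems. Dense pure-Python for clarity only.

def pcg(A, b, tol=1e-10, max_iter=200):
    n = len(b)
    mv = lambda v: [sum(A[i][j] * v[j] for j in range(n))
                    for i in range(n)]
    x = [0.0] * n
    r = b[:]                                  # residual b - A x (x = 0)
    z = [r[i] / A[i][i] for i in range(n)]    # apply Jacobi preconditioner
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = mv(p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

Unlike a frontal solver, memory use here is a few vectors plus the matrix itself, which is the trade discussed in the paper: lower memory, but robustness that depends on conditioning and the preconditioner.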

  10. Advanced Automated Solar Filament Detection and Characterization Code: Description, Performance, and Results

    NASA Astrophysics Data System (ADS)

    Bernasconi, P. N.; Rust, D. M.

    2004-12-01

    We have developed a code for automated detection and classification of solar filaments in full-disk H-alpha images that can contribute to Living With a Star science investigations and space weather forecasting. The program can reliably identify filaments, determine their chirality and other relevant parameters such as filament area and average orientation with respect to the equator, and is capable of tracking the day-by-day evolution of filaments while they travel across the visible disk. Detecting filaments when they appear and tracking their evolution can provide not only early warnings of potentially hazardous conditions but also improve our understanding of solar filaments and their implications for space weather at 1 AU. The code was recently tested by analyzing daily H-alpha images taken at the Big Bear Solar Observatory during a period of four years (from mid-2000 until mid-2004). It identified and established the chirality of more than 5000 filaments without human intervention. We compared the results with the filament list manually compiled by Pevtsov et al. (2003) over the same period of time. The computer list matches the Pevtsov et al. list fairly well. The code results confirm the hemispheric chirality rule: dextral filaments predominate in the north and sinistral ones predominate in the south. The main difference between the two lists is that the code finds significantly more filaments without an identifiable chirality. This may be because human operators tend to be biased, assigning a chirality in less clear cases, while the code is totally unbiased. We also have found evidence that filaments with definite chirality tend to be larger and last longer than the ones without a clear chirality signature. We describe the major code characteristics and present and discuss the test results.

  11. SKIRT: An advanced dust radiative transfer code with a user-friendly architecture

    NASA Astrophysics Data System (ADS)

    Camps, P.; Baes, M.

    2015-03-01

    We discuss the architecture and design principles that underpin the latest version of SKIRT, a state-of-the-art open source code for simulating continuum radiation transfer in dusty astrophysical systems, such as spiral galaxies and accretion disks. SKIRT employs the Monte Carlo technique to emulate the relevant physical processes including scattering, absorption and emission by the dust. The code features a wealth of built-in geometries, radiation source spectra, dust characterizations, dust grids, and detectors, in addition to various mechanisms for importing snapshots generated by hydrodynamical simulations. The configuration for a particular simulation is defined at run-time through a user-friendly interface suitable for both occasional and power users. These capabilities are enabled by careful C++ code design. The programming interfaces between components are well defined and narrow. Adding a new feature is usually as simple as adding another class; the user interface automatically adjusts to allow configuring the new options. We argue that many scientific codes, like SKIRT, can benefit from careful object-oriented design and from a friendly user interface, even if it is not a graphical user interface.

  12. Advanced Automated Solar Filament Detection And Characterization Code: Description, Performance, And Results

    NASA Astrophysics Data System (ADS)

    Bernasconi, Pietro N.; Rust, David M.; Hakim, Daniel

    2005-05-01

    We present a code for automated detection, classification, and tracking of solar filaments in full-disk Hα images that can contribute to Living With a Star science investigations and space weather forecasting. The program can reliably identify filaments and determine their chirality and other relevant parameters such as filament area, length, and average orientation with respect to the equator. It is also capable of tracking the day-by-day evolution of filaments while they travel across the visible disk. The code was tested by analyzing daily Hα images taken at the Big Bear Solar Observatory from mid-2000 until the beginning of 2005. It identified and established the chirality of thousands of filaments without human intervention. We compared the results with a list of filament properties manually compiled by Pevtsov, Balasubramaniam, and Rogers (2003) over the same period of time. The computer list matches Pevtsov's list with 72% accuracy. The code results confirm the hemispheric chirality rule stating that dextral filaments predominate in the north and sinistral ones predominate in the south. The main difference between the two lists is that the code finds significantly more filaments without an identifiable chirality. This may be due to a tendency of human operators to be biased, assigning a chirality in less clear cases, while the code is totally unbiased. We also have found evidence that filaments obeying the chirality rule tend to be larger and last longer than the ones that do not follow the hemispheric rule. Filaments adhering to the hemispheric rule also tend to be more tilted toward the equator between latitudes 10° and 30° than the ones that do not.

  13. Representing videos in tangible products

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

    Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones, and, increasingly, so-called action cameras mounted on sports devices. The software implementation that represents videos in printed products by generating QR codes and extracting relevant pictures from the video stream was the content of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from a video in order to represent it, the positions in the book, and different design strategies compared to regular books.

  14. Development and validation of burnup dependent computational schemes for the analysis of assemblies with advanced lattice codes

    NASA Astrophysics Data System (ADS)

    Ramamoorthy, Karthikeyan

    The main aim of this research is the development and validation of computational schemes for advanced lattice codes. The advanced lattice code which forms the primary part of this research is "DRAGON Version4". The code has unique features like self shielding calculation with capabilities to represent distributed and mutual resonance shielding effects, leakage models with space-dependent isotropic or anisotropic streaming effect, availability of the method of characteristics (MOC), burnup calculation with reaction-detailed energy production etc. Qualified reactor physics codes are essential for the study of all existing and envisaged designs of nuclear reactors. Any new design would require a thorough analysis of all the safety parameters and burnup dependent behaviour. Any reactor physics calculation requires the estimation of neutron fluxes in various regions of the problem domain. The calculation goes through several levels before the desired solution is obtained. Each level of the lattice calculation has its own significance and any compromise at any step will lead to poor final result. The various levels include choice of nuclear data library and energy group boundaries into which the multigroup library is cast; self shielding of nuclear data depending on the heterogeneous geometry and composition; tracking of geometry, keeping error in volume and surface to an acceptable minimum; generation of regionwise and groupwise collision probabilities or MOC-related information and their subsequent normalization thereof, solution of transport equation using the previously generated groupwise information and obtaining the fluxes and reaction rates in various regions of the lattice; depletion of fuel and of other materials based on normalization with constant power or constant flux. Of the above mentioned levels, the present research will mainly focus on two aspects, namely self shielding and depletion. 
The behaviour of the system is determined by composition of resonant …

  15. High Resolution, High Frame Rate Video Technology

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Papers and working group summaries presented at the High Resolution, High Frame Rate Video (HHV) Workshop are compiled. The HHV system is intended for future use on the Space Shuttle and Space Station Freedom. The Workshop was held for the dual purpose of: (1) allowing potential scientific users to assess the utility of the proposed system for monitoring microgravity science experiments; and (2) letting technical experts from industry recommend improvements to the proposed near-term HHV system. The following topics are covered: (1) State of the art in video system performance; (2) Development plan for the HHV system; (3) Advanced technology for image gathering, coding, and processing; (4) Data compression applied to HHV; (5) Data transmission networks; and (6) Results of the users' requirements survey conducted by NASA.

  16. Applicability of BWR SFD experiments and codes for advanced core component designs

    SciTech Connect

    Ott, L.J.

    1997-12-01

    Prior to the DF-4 boiling water reactor (BWR) severe fuel damage (SFD) experiment conducted at the Sandia National Laboratories (SNL) in 1986, no experimental database existed for guidance in modeling core component behavior under postulated severe accident conditions in commercial BWRs. This paper presents the lessons learned from the DF-4 experiment (and subsequent German CORA BWR SFD tests) and their impact on SFD code modeling of advanced core component designs.

  17. Preface: Recent Advances in Modeling Multiphase Flow and Transportwith the TOUGH Family of Codes

    SciTech Connect

    Liu, Hui-Hai; Illangasekare, Tissa H.

    2007-11-15

    A symposium on research carried out using the TOUGH family of numerical codes was held from May 15 to 17, 2006, at the Lawrence Berkeley National Laboratory. This special issue of the 'Vadose Zone Journal' contains revised and expanded versions of a selected set of papers presented at this symposium (TOUGH Symposium 2006; http://esd.lbl.gov/TOUGHsymposium), all of which focus on multiphase flow, including flow in the vadose zone.

  18. Validation and verification of RELAP5 for Advanced Neutron Source accident analysis: Part I, comparisons to ANSDM and PRSDYN codes

    SciTech Connect

    Chen, N.C.J.; Ibn-Khayat, M.; March-Leuba, J.A.; Wendel, M.W.

    1993-12-01

    As part of verification and validation, the Advanced Neutron Source reactor RELAP5 system model was benchmarked against the Advanced Neutron Source dynamic model (ANSDM) and PRSDYN models. RELAP5 is a one-dimensional, two-phase transient code developed by the Idaho National Engineering Laboratory for reactor safety analysis. Both the ANSDM and PRSDYN models use a simplified single-phase equation set to predict transient thermal-hydraulic performance. Brief descriptions of each of the codes, models, and model limitations are included. Even though comparisons were limited to single-phase conditions, a broad spectrum of accidents was benchmarked: a small loss-of-coolant accident (LOCA), a large LOCA, a station blackout, and a reactivity insertion accident. The overall conclusion is that the three models yield similar results if the input parameters are the same. However, ANSDM does not capture pressure wave propagation through the coolant system; this difference is significant in very rapid pipe break events. Recommendations are provided for further model improvements.

  19. Nuclear Energy Advanced Modeling and Simulation (NEAMS) Waste Integrated Performance and Safety Codes (IPSC) : FY10 development and integration.

    SciTech Connect

    Criscenti, Louise Jacqueline; Sassani, David Carl; Arguello, Jose Guadalupe, Jr.; Dewers, Thomas A.; Bouchard, Julie F.; Edwards, Harold Carter; Freeze, Geoffrey A.; Wang, Yifeng; Schultz, Peter Andrew

    2011-02-01

    This report describes the progress in fiscal year 2010 in developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. Waste IPSC activities in fiscal year 2010 focused on specifying a challenge problem to demonstrate proof of concept, developing a verification and validation plan, and performing an initial gap analyses to identify candidate codes and tools to support the development and integration of the Waste IPSC. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. This year-end progress report documents the FY10 status of acquisition, development, and integration of thermal-hydrologic-chemical-mechanical (THCM) code capabilities, frameworks, and enabling tools and infrastructure.

  20. Live Video and IP-TV

    NASA Astrophysics Data System (ADS)

    Merani, Maria Luisa; Saladino, Daniela

    This Chapter aims at providing a comprehensive insight into the most recent advances in the field of P2P architectures for video broadcasting, focusing on live video streaming. After introducing a classification of P2P video solutions, the first part of the Chapter provides an overview of the most interesting P2P IP-TV systems currently available over the Internet. It also concentrates on the process of data diffusion within the P2P overlay and complements this view with some measurements that highlight the most salient features of P2P architectures. The second part of the Chapter completes the view, bringing up the modeling efforts to capture the main characteristics and limits of P2P streaming systems, both analytically and numerically. The Chapter closes with a look at some challenging open questions, with a specific emphasis on the adoption of network coding in P2P streaming solutions.
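The network coding idea raised at the Chapter's close can be illustrated with a toy random linear scheme over GF(2): peers forward random XOR combinations of the k source packets of a video segment, and a receiver decodes once it holds k linearly independent combinations. This sketch is purely illustrative of the concept, not of any deployed P2P system:

```python
# Toy random linear network coding over GF(2). Packets are integers
# (bit strings); a coded packet carries a coefficient bitmask plus the
# XOR of the selected source packets.
import random

def encode(packets, rng):
    """Produce one coded packet: (coefficient bitmask, XOR payload)."""
    while True:
        coeffs = rng.getrandbits(len(packets))
        if coeffs:                      # skip the useless all-zero combo
            break
    payload = 0
    for i, p in enumerate(packets):
        if (coeffs >> i) & 1:
            payload ^= p
    return coeffs, payload

def decode(coded, k):
    """Gaussian elimination over GF(2); assumes the coded packets span
    all k sources, and returns the k source packets in order."""
    rows = [list(t) for t in coded]
    pivots = {}
    for row in rows:
        for bit in range(k):
            if (row[0] >> bit) & 1:
                if bit in pivots:       # reduce by the existing pivot
                    row[0] ^= pivots[bit][0]
                    row[1] ^= pivots[bit][1]
                else:                   # new pivot for this bit
                    pivots[bit] = row
                    break
    # back-substitute so each pivot row keeps a single coefficient bit
    for bit in sorted(pivots, reverse=True):
        for other in pivots:
            if other != bit and (pivots[other][0] >> bit) & 1:
                pivots[other][0] ^= pivots[bit][0]
                pivots[other][1] ^= pivots[bit][1]
    return [pivots[bit][1] for bit in range(k)]
```

The attraction for P2P streaming is that any k independent coded packets suffice, so peers need not coordinate which specific chunks they forward.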

  1. Artificial Video for Video Analysis

    ERIC Educational Resources Information Center

    Gallis, Michael R.

    2010-01-01

    This paper discusses the use of video analysis software and computer-generated animations for student activities. The use of artificial video affords the opportunity for students to study phenomena for which a real video may not be easy or even possible to procure, using analysis software with which the students are already familiar. We will…

  2. Development and validation of burnup dependent computational schemes for the analysis of assemblies with advanced lattice codes

    NASA Astrophysics Data System (ADS)

    Ramamoorthy, Karthikeyan

    The main aim of this research is the development and validation of computational schemes for advanced lattice codes. The advanced lattice code which forms the primary part of this research is "DRAGON Version4". The code has unique features like self-shielding calculation with capabilities to represent distributed and mutual resonance shielding effects, leakage models with space-dependent isotropic or anisotropic streaming effect, availability of the method of characteristics (MOC), burnup calculation with reaction-detailed energy production, etc. Qualified reactor physics codes are essential for the study of all existing and envisaged designs of nuclear reactors. Any new design would require a thorough analysis of all the safety parameters and burnup-dependent behaviour. Any reactor physics calculation requires the estimation of neutron fluxes in various regions of the problem domain. The calculation goes through several levels before the desired solution is obtained. Each level of the lattice calculation has its own significance, and any compromise at any step will lead to a poor final result. The various levels include: choice of nuclear data library and the energy group boundaries into which the multigroup library is cast; self-shielding of nuclear data depending on the heterogeneous geometry and composition; tracking of geometry, keeping the error in volumes and surfaces to an acceptable minimum; generation of regionwise and groupwise collision probabilities or MOC-related information and their subsequent normalization; solution of the transport equation using the previously generated groupwise information to obtain the fluxes and reaction rates in various regions of the lattice; and depletion of fuel and of other materials based on normalization with constant power or constant flux. Of the above-mentioned levels, the present research will mainly focus on two aspects, namely self-shielding and depletion. 
The behaviour of the system is determined by composition of resonant

  3. Wireless medical ultrasound video transmission through noisy channels.

    PubMed

    Panayides, A; Pattichis, M S; Pattichis, C S

    2008-01-01

    Recent advances in video compression, such as the current state-of-the-art H.264/AVC standard, in conjunction with the increasing bitrate available through new technologies like 3G and WiMax, have brought mobile health (m-Health) healthcare systems and services closer to reality. Despite this momentum towards m-Health systems, and especially e-Emergency systems, wireless channels remain error prone, while the absence of objective quality metrics limits the ability to provide medical video of adequate diagnostic quality at a required bitrate. In this paper we investigate different encoding schemes and loss rates in medical ultrasound video transmission and draw conclusions about efficiency and the trade-off between bitrate and quality, while highlighting the relationship linking video quality and the error ratio of corrupted P and B frames. More specifically, we investigate IPPP, IBPBP, and IBBPBBP coding structures under packet loss rates of 2%, 5%, 8%, and 10% and find that the latter attains higher SNR ratings in all tested cases. A preliminary clinical evaluation shows that for SNR ratings higher than 30 dB, video diagnostic quality may be adequate, while above 30.5 dB the diagnostic information available in the reconstructed ultrasound video is close to that of the original. PMID:19163920
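
    The SNR figures quoted above are conventionally computed as a peak signal-to-noise ratio over the decoded frames. A minimal sketch of that metric, assuming 8-bit samples (the paper's exact metric definition is not reproduced here):

```python
import math

def psnr(original, reconstructed, peak=255):
    """Peak signal-to-noise ratio in dB between two equal-length
    sequences of pixel values; higher means closer to the original."""
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical signals
    return 10 * math.log10(peak ** 2 / mse)
```

    A 30 dB threshold, as in the clinical evaluation above, corresponds to a mean squared error of about 65 for 8-bit video, i.e. an average per-pixel error of roughly 8 grey levels.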

  4. Comparative assessment of H.265/MPEG-HEVC, VP9, and H.264/MPEG-AVC encoders for low-delay video applications

    NASA Astrophysics Data System (ADS)

    Grois, Dan; Marpe, Detlev; Nguyen, Tung; Hadar, Ofer

    2014-09-01

    The popularity of low-delay video applications has dramatically increased over the last years due to a rising demand for real-time video content (such as video conferencing or video surveillance), and also due to the increasing availability of relatively inexpensive heterogeneous devices (such as smartphones and tablets). To this end, this work presents a comparative assessment of the two latest video coding standards, H.265/MPEG-HEVC (High Efficiency Video Coding) and H.264/MPEG-AVC (Advanced Video Coding), as well as the VP9 proprietary video coding scheme. For evaluating H.264/MPEG-AVC, the open-source x264 encoder was selected, which has a multi-pass encoding mode, similarly to VP9. According to experimental results, which were obtained by using similar low-delay configurations for all three examined representative encoders, it was observed that H.265/MPEG-HEVC provides significant average bit-rate savings of 32.5% and 40.8% relative to VP9 and x264, respectively, for the 1-pass encoding, and average bit-rate savings of 32.6% and 42.2% for the 2-pass encoding. On the other hand, compared to the x264 encoder, typical low-delay encoding times of the VP9 encoder are about 2,000 times higher for the 1-pass encoding, and about 400 times higher for the 2-pass encoding.

  5. A one- and two-dimensional cross-section sensitivity and uncertainty path of the AARE (Advanced Analysis for Reactor Engineering) modular code system

    SciTech Connect

    Davidson, J.W.; Dudziak, D.J.; Higgs, C.E.; Stepanek, J.

    1988-01-01

    AARE, a code package to perform Advanced Analysis for Reactor Engineering, is a linked modular system for fission reactor core and shielding, as well as fusion blanket, analysis. Its cross-section sensitivity and uncertainty path presently includes the cross-section processing and reformatting code TRAMIX, the cross-section homogenization and library reformatting code MIXIT, the 1-dimensional transport code ONEDANT, the 2-dimensional transport code TRISM, and the 1- and 2-dimensional cross-section sensitivity and uncertainty code SENSIBL. In the present work, a short description of the whole AARE system is given, followed by a detailed description of the cross-section sensitivity and uncertainty path. 23 refs., 2 figs.

  6. Immersive video

    NASA Astrophysics Data System (ADS)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  7. Minimally invasive (robotic assisted thoracic surgery and video-assisted thoracic surgery) lobectomy for the treatment of locally advanced non-small cell lung cancer

    PubMed Central

    Yang, Hao-Xian; Woo, Kaitlin M.; Sima, Camelia S.

    2016-01-01

    Background Insufficient data exist on the results of minimally invasive surgery (MIS) for locally advanced non-small cell lung cancer (NSCLC) traditionally approached by thoracotomy. The use of telerobotic surgical systems may allow for greater utilization of MIS approaches to locally advanced disease. We will review the existing literature on MIS for locally advanced disease and briefly report on the results of a recent study conducted at our institution. Methods We performed a retrospective review of a prospective single-institution database to identify patients with clinical stage II and IIIA NSCLC who underwent lobectomy following induction chemotherapy. The patients were classified into two groups (MIS and thoracotomy) and were compared for differences in outcomes and survival. Results From January 2002 to December 2013, 428 patients {397 thoracotomy, 31 MIS [17 robotic and 14 video-assisted thoracic surgery (VATS)]} underwent induction chemotherapy followed by lobectomy. The conversion rate in the MIS group was 26% (8/31). The R0 resection rate was similar between the groups (97% for MIS vs. 94% for thoracotomy; P=0.71), as was postoperative morbidity (32% for MIS vs. 33% for thoracotomy; P=0.99). The median length of hospital stay was shorter in the MIS group (4 vs. 5 days; P<0.001). The 3-year overall survival (OS) was 48.3% in the MIS group and 56.6% in the thoracotomy group (P=0.84); the corresponding 3-year disease-free survival (DFS) rates were 49.0% and 42.1% (P=0.19). Conclusions In appropriately selected patients with NSCLC, MIS approaches to lobectomy following induction therapy are feasible and associated with DFS and OS similar to those following thoracotomy. PMID:27195138

  8. Euler Technology Assessment - SPLITFLOW Code Applications for Stability and Control Analysis on an Advanced Fighter Model Employing Innovative Control Concepts

    NASA Technical Reports Server (NTRS)

    Jordan, Keith J.

    1998-01-01

    This report documents results from the NASA-Langley sponsored Euler Technology Assessment Study conducted by Lockheed-Martin Tactical Aircraft Systems (LMTAS). The purpose of the study was to evaluate the ability of the SPLITFLOW code using viscous and inviscid flow models to predict aerodynamic stability and control of an advanced fighter model. The inviscid flow model was found to perform well at incidence angles below approximately 15 deg, but not as well at higher angles of attack. The results using a turbulent, viscous flow model matched the trends of the wind tunnel data, but did not show significant improvement over the Euler solutions. Overall, the predictions were found to be useful for stability and control design purposes.

  9. Advanced modulation technology development for earth station demodulator applications. Coded modulation system development

    NASA Technical Reports Server (NTRS)

    Miller, Susan P.; Kappes, J. Mark; Layer, David H.; Johnson, Peter N.

    1990-01-01

    A jointly optimized coded modulation system designed, built, and tested by COMSAT Laboratories for NASA LeRC is described, which provides a bandwidth efficiency of 2 bits/s/Hz at an information rate of 160 Mbit/s. A high-speed rate-8/9 encoder with a Viterbi decoder and an octal PSK modem are used to achieve this. The BER performance is approximately 1 dB from the theoretically calculated value for this system at a BER of 5×10^-7 under nominal conditions. The system operates in burst mode for downlink applications, and tests have demonstrated very little degradation in performance with frequency and level offset. Unique word miss rate measurements were conducted which demonstrate reliable acquisition at low values of Eb/No. Codec self-tests have verified the performance of this subsystem in a stand-alone mode. The codec is capable of operation at a 200 Mbit/s information rate, as demonstrated using a codec test set which introduces noise digitally. The measured performance is within 0.2 dB of the computer-simulated predictions. A gate-array implementation of the most time-critical element of the high-speed Viterbi decoder was completed. This gate-array add-compare-select chip significantly reduces the power consumption and improves the manufacturability of the decoder. This chip has general application in the implementation of high-speed Viterbi decoders.

  10. Virtual Space Camp Video Game

    NASA Astrophysics Data System (ADS)

    Speyerer, E. J.; Ferrari, K. A.; Lowes, L. L.; Raad, P. E.; Cuevas, T.; Purdy, J. A.

    2006-03-01

    With advances in computers, graphics, and especially video games, manned space exploration can become real by creating a safe, fun learning environment that allows players to explore the solar system from the comfort of their personal computers.

  11. Dashboard Videos

    NASA Astrophysics Data System (ADS)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-11-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his Lab Out Loud blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing this website and video, I decided to create my own dashboard videos to show to my high school physics students. I have produced and synchronized 12 separate dashboard videos, each about 10 minutes in length, driving around the city of Lawrence, KS, and Douglas County, and posted them to a website. Each video reflects different types of driving: both positive and negative accelerations and constant speeds. As shown in Fig. 1, I was able to capture speed, distance, and miles per gallon from my dashboard instrumentation. By linking this with a stopwatch, each of these quantities can be graphed with respect to time. I anticipate and hope that teachers will find these useful in their own classrooms, i.e., having physics students watch the videos and create their own motion maps (distance-time, speed-time) for study.
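
    The motion maps mentioned above amount to numerical integration: given speed-time samples read off the dashboard, the distance-time graph follows from the trapezoid rule. A short sketch with made-up sample values:

```python
def distance_from_speeds(times, speeds):
    """Approximate distance traveled by trapezoidal integration of
    speed samples (times in seconds, speeds in m/s)."""
    total = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        # Area of one trapezoid between consecutive samples.
        total += 0.5 * (speeds[i] + speeds[i - 1]) * dt
    return total

# Hypothetical readings: a car accelerating uniformly for two seconds.
print(distance_from_speeds([0, 1, 2], [0.0, 2.0, 4.0]))  # 4.0 (meters)
```

    Students can apply the same computation to the speedometer readings they transcribe from the videos.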

  12. Picturing Video

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Video Pics is a software program that generates high-quality photos from video. The software was developed under an SBIR contract with Marshall Space Flight Center by Redhawk Vision, Inc.--a subsidiary of Irvine Sensors Corporation. Video Pics takes information content from multiple frames of video and enhances the resolution of a selected frame. The resulting image has enhanced sharpness and clarity like that of a 35 mm photo. The images are generated as digital files and are compatible with image editing software.

  13. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.

  14. Subjective evaluation of H.265/HEVC based dynamic adaptive video streaming over HTTP (HEVC-DASH)

    NASA Astrophysics Data System (ADS)

    Irondi, Iheanyi; Wang, Qi; Grecos, Christos

    2015-02-01

    The Dynamic Adaptive Streaming over HTTP (DASH) standard is becoming increasingly popular for real-time adaptive HTTP streaming of internet video in response to unstable network conditions. Integration of DASH streaming techniques with the new H.265/HEVC video coding standard is a promising area of research. The performance of HEVC-DASH systems has been previously evaluated by a few researchers using objective metrics; however, subjective evaluation would provide a better measure of the user's Quality of Experience (QoE) and overall performance of the system. This paper presents a subjective evaluation of an HEVC-DASH system implemented in a hardware testbed. Previous studies in this area have focused on the current H.264/AVC (Advanced Video Coding) or H.264/SVC (Scalable Video Coding) codecs; moreover, there has been no established standard test procedure for the subjective evaluation of DASH adaptive streaming. In this paper, we define a test plan for HEVC-DASH with a carefully justified data set employing longer video sequences that are sufficient to demonstrate the bitrate switching operations in response to various network condition patterns. We evaluate the end user's real-time QoE online by investigating the perceived impact of delay, different packet loss rates, fluctuating bandwidth, and the perceived quality of using different DASH video stream segment sizes on a video streaming session using different video sequences. The Mean Opinion Score (MOS) results give an insight into the performance of the system and the expectations of users. The results from this study show the impact of different network impairments and different video segment sizes on users' QoE, and further analysis may help in optimizing system performance.
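
    The bitrate-switching behavior being evaluated can be caricatured by the simplest throughput-based adaptation heuristic. The testbed's actual adaptation logic is not described here, so the bitrate ladder and safety margin below are assumptions for illustration only:

```python
def select_representation(ladder_kbps, throughput_kbps, safety=0.8):
    """Pick the highest-bitrate representation that fits within a
    safety margin of the measured throughput; fall back to the lowest
    representation when even that margin cannot be met."""
    budget = throughput_kbps * safety
    feasible = [rate for rate in ladder_kbps if rate <= budget]
    return max(feasible) if feasible else min(ladder_kbps)

ladder = [500, 1500, 3000, 6000]        # hypothetical renditions, kbps
choice = select_representation(ladder, 4000)  # budget 3200 -> 3000 kbps
```

    Real DASH clients also smooth throughput estimates and account for buffer occupancy, which is precisely why subjective QoE evaluation under fluctuating bandwidth is informative.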

  15. Improved NASA-ANOPP Noise Prediction Computer Code for Advanced Subsonic Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Kontos, K. B.; Janardan, B. A.; Gliebe, P. R.

    1996-01-01

    Recent experience using ANOPP to predict turbofan engine flyover noise suggests that it over-predicts overall EPNL by a significant amount. An improvement in this prediction method is desired for system optimization and assessment studies of advanced UHB engines. An assessment of the ANOPP fan inlet, fan exhaust, jet, combustor, and turbine noise prediction methods is made using static engine component noise data from the CF6-80C2, E(3), and QCSEE turbofan engines. It is shown that the ANOPP prediction results are generally higher than the measured GE data, and that the inlet noise prediction method (Heidmann method) is the most significant source of this overprediction. Fan noise spectral comparisons show that improvements to the fan tone, broadband, and combination tone noise models are required to yield results that more closely simulate the GE data. Suggested changes that yield improved fan noise predictions but preserve the Heidmann model structure are identified and described. These changes are based on the sets of engine data mentioned, as well as some CFM56 engine data that was used to expand the combination tone noise database. It should be noted that the recommended changes are based on an analysis of engines that are limited to single-stage fans with design tip relative Mach numbers greater than one.

  16. Analysis of two-phase flow phenomena with FLUENT-4 code in the experiments for advanced light water reactor safety

    SciTech Connect

    Miettinen, J.; Tuomainen, M.; Karppinen, I.; Tuunanen, J.

    2002-07-01

    In the development of advanced light water reactors, thermohydraulic phenomena are more diverse than in present reactor concepts. The new features are the passive safety systems, where energy transport takes place by natural circulation instead of forced flow. For cooling of the molten core, new concepts have been created, including external vessel cooling and core catchers. In all new concepts, two-phase flow circulation patterns exist. The calculational tools should be capable of analysing multidimensional circulation created by the gravity field instead of the forced pump circulation. In spite of extensive model development for one-dimensional Eulerian solutions for two-phase flow, multidimensional calculation is still a great challenge. The momentum transfer terms and turbulence models for two-phase flow still require large efforts, although the turbulence models for single-phase flow are versatile and rather advanced at present. Two-phase models already exist in several CFD codes. At VTT, most experience has been gained with the Fluent-4, Fluent-5, and most recently Fluent-6 codes. Fluent-4 and Fluent-6 have the Euler-Euler solution for the two-phase conservation equations, which is required for flow conditions where the volume fractions of both liquid and gas phases are important and the flow circulation is largely created by the gravity field. VTT is participating in several experimental projects on ALWRs where multidimensional two-phase circulation is essential. This paper presents three examples of the use of CFD codes for analyses of ALWRs. The first example is connected with the SWR 1000 reactor from Framatome ANP. Framatome ANP is performing experiments for evaluation of external cooling of the Reactor Pressure Vessel (RPV) of SWR 1000. The experiments are aimed at determining the limits to avoid critical heat fluxes (CHFs). The experimental programme is carried out in three steps. The first part, the air-water experiments, has been analysed at

  17. Innovative Video Diagnostic Equipment for Material Science

    NASA Technical Reports Server (NTRS)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

    Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage, and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced wavelet algorithms, storage, and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras, acquiring high-resolution video images up to 4 Mpixels at 60 fps or high-frame-rate video images up to about 1000 fps at 512×512 pixels.

  18. Speech coding

    NASA Astrophysics Data System (ADS)

    Gersho, Allen

    1990-05-01

    Recent advances in algorithms and techniques for speech coding now permit high-quality voice reproduction at remarkably low bit rates. The advent of powerful single-chip signal processors has made it cost effective to implement these new and sophisticated speech coding algorithms for many important applications in voice communication and storage. Some of the main ideas underlying the algorithms of major interest today are reviewed. The concept of removing redundancy by linear prediction is reviewed, first in the context of predictive quantization or DPCM. Then linear predictive coding, adaptive predictive coding, and vector quantization are discussed. The concepts of excitation coding via analysis-by-synthesis, vector sum excitation codebooks, and adaptive postfiltering are explained. The main ideas of vector excitation coding (VXC), or code-excited linear prediction (CELP), are presented. Finally, low-delay VXC coding and phonetic segmentation for VXC are described.

  19. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
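
    The quoted repeat period of slightly more than 136 years is consistent with an unsigned 32-bit counter ticking once per second, although the abstract does not specify the internal Geo-TimeCode format; the counter width here is an assumption used only to check the arithmetic:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year in seconds

def rollover_years(bits):
    """Years until an unsigned `bits`-wide counter, incremented once
    per second, wraps around."""
    return (2 ** bits) / SECONDS_PER_YEAR

# A 24-hour SMPTE-style timecode repeats daily; a 32-bit seconds
# counter wraps only after roughly 136 years.
print(rollover_years(32))
```

    This is why such a code is better suited to long-term multi-camera deployments than conventional 24-hour timecodes.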

  20. Video voyeurs and the covert videotaping of unsuspecting victims: psychological and legal consequences.

    PubMed

    Simon, R I

    1997-09-01

    Video voyeurs employ state-of-the-art technology to gain access to the most private places, where victims are covertly videotaped. Women are the usual victims of video voyeurs as they change their clothes, perform natural functions, or engage in sexual activities. When the videotaping is discovered by the victim, serious psychological harm may result. A civil suit is the most common legal remedy sought. Criminal sanctions, when available, are often insufficient compared to the seriousness of the crime. While unauthorized, covert audiotaping is forbidden by both federal and state codes, videotaping is often not specifically mentioned. It appears that legislators do not fully appreciate the burgeoning of covert videotaping, the technological advances that have greatly expanded the possibilities for voyeuristic viewing, and the harm done to victims of video voyeurs. Appropriate criminal sanctions need to be included in privacy statutes for unauthorized video surveillance with or without accompanying audio transcription. PMID:9304836

  1. MHD Simulation of Magnetic Nozzle Plasma with the NIMROD Code: Applications to the VASIMR Advanced Space Propulsion Concept

    NASA Astrophysics Data System (ADS)

    Tarditi, Alfonso G.; Shebalin, John V.

    2002-11-01

    A simulation study with the NIMROD code [1] is being carried out to investigate the efficiency of the thrust generation process and the properties of plasma detachment in a magnetic nozzle. In the simulation, hot plasma is injected into the magnetic nozzle, modeled as a 2D, axi-symmetric domain. NIMROD has two-fluid, 3D capabilities, but the present runs are being conducted within the MHD, 2D approximation. As the plasma travels through the magnetic field, part of its thermal energy is converted into longitudinal kinetic energy along the axis of the nozzle. The plasma eventually detaches from the magnetic field at a certain distance from the nozzle throat, where the kinetic energy becomes larger than the magnetic energy. Preliminary NIMROD 2D runs have been benchmarked with a particle trajectory code, showing satisfactory results [2]. Further testing is reported here, with emphasis on the analysis of the diffusion rate across the field lines and of the overall nozzle efficiency. These simulation runs are specifically designed to enable comparisons with laboratory measurements of the VASIMR experiment, by looking at the evolution of the radial plasma density and temperature profiles in the nozzle. VASIMR (Variable Specific Impulse Magnetoplasma Rocket, [3]) is an advanced space propulsion concept currently under experimental development at the Advanced Space Propulsion Laboratory, NASA Johnson Space Center. A plasma (typically ionized hydrogen or helium) is generated by an RF (helicon) discharge and heated by an ion cyclotron resonance heating antenna. The heated plasma is then guided into a magnetic nozzle to convert the thermal plasma energy into effective thrust. The VASIMR system has no electrodes, and a solenoidal magnetic field produced by an asymmetric mirror configuration ensures magnetic insulation of the plasma from the material surfaces. By powering the plasma source and the heating antenna at different levels it is possible to vary smoothly of the

  2. Naval threat countermeasure simulator and the IR_CRUISE_missiles models for the generation of infrared (IR) videos of maritime targets and background for input into advanced imaging IR seekers

    NASA Astrophysics Data System (ADS)

    Taczak, Thomas M.; Dries, John W.; Gover, Robert E.; Snapp, Mary Ann; Williams, Elmer F.; Cahill, Colin P.

    2002-07-01

    A new hardware-in-the-loop modeling technique was developed at the US Naval Research Laboratory (NRL) for the evaluation of IR countermeasures against advanced IR imaging anti-ship cruise missiles. The research efforts involved the creation of tools to generate accurate IR imagery and synthesize video to inject into real-world threat simulators. A validation study was conducted to verify the accuracy and limitations of the techniques that were developed.

  3. Watermarking textures in video games

    NASA Astrophysics Data System (ADS)

    Liu, Huajian; Berchtold, Waldemar; Schäfer, Marcel; Lieb, Patrick; Steinebach, Martin

    2014-02-01

    Digital watermarking is a promising solution to video game piracy. In this paper, based on the analysis of special challenges and requirements in terms of watermarking textures in video games, a novel watermarking scheme for DDS textures in video games is proposed. To meet the performance requirements in video game applications, the proposed algorithm embeds the watermark message directly in the compressed stream in DDS files and can be straightforwardly applied in watermark container technique for real-time embedding. Furthermore, the embedding approach achieves high watermark payload to handle collusion secure fingerprinting codes with extreme length. Hence, the scheme is resistant to collusion attacks, which is indispensable in video game applications. The proposed scheme is evaluated in aspects of transparency, robustness, security and performance. Especially, in addition to classical objective evaluation, the visual quality and playing experience of watermarked games is assessed subjectively in game playing.
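
    As a generic illustration of high-payload embedding (not the DDS-stream-specific scheme the paper proposes), a least-significant-bit embedder hides one message bit per byte of a buffer; the buffer and message below are hypothetical:

```python
def embed_bits(data: bytes, bits) -> bytes:
    """Write each 0/1 message bit into the least-significant bit of
    successive bytes of `data`."""
    out = bytearray(data)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

def extract_bits(data: bytes, n: int):
    """Read back the first `n` embedded bits."""
    return [b & 1 for b in data[:n]]

marked = embed_bits(b"\x10\x20\x30\x40", [1, 0, 1, 1])
```

    A scheme operating on compressed texture streams must additionally keep such bit modifications within the tolerances of the block-compression format, which is the hard part the proposed algorithm addresses.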

  4. Video games.

    PubMed

    Funk, Jeanne B

    2005-06-01

    The video game industry insists that it is doing everything possible to provide information about the content of games so that parents can make informed choices; however, surveys indicate that ratings may not reflect consumer views of the nature of the content. This article describes some of the currently popular video games, as well as developments that are on the horizon, and discusses the status of research on the positive and negative impacts of playing video games. Recommendations are made to help parents ensure that children play games that are consistent with their values. PMID:16111624

  5. Temporal Coding of Volumetric Imagery

    NASA Astrophysics Data System (ADS)

    Llull, Patrick Ryan

    'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption. This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications. Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level. Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,lambda) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. 
The CACTI camera's ability to embed video volumes into images leads to exploration
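The snapshot measurement model described above can be sketched in a few lines; the sizes, the random binary mask, and the one-pixel-per-frame shift are illustrative assumptions, not the CACTI hardware's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: T video frames of H x W pixels compressed into one snapshot.
T, H, W = 8, 32, 32
video = rng.random((T, H, W))                      # the (x, y, t) video volume
mask = (rng.random((H, W)) < 0.5).astype(float)    # binary coded aperture

# Translating the aperture gives each frame its own code C_t (a one-pixel
# horizontal shift per frame stands in for the physical translation).
codes = np.stack([np.roll(mask, t, axis=1) for t in range(T)])

# Single coded snapshot: y = sum_t C_t * x_t (per-pixel product, summed over time).
snapshot = (codes * video).sum(axis=0)

assert snapshot.shape == (H, W)   # T frames collapsed into one 2-D measurement
```

Recovering the video volume from `snapshot` is then a compressive-sensing inverse problem, which the dissertation addresses with dedicated reconstruction algorithms.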

  6. Video flowmeter

    DOEpatents

    Lord, D.E.; Carter, G.W.; Petrini, R.R.

    1983-08-02

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid. 4 figs.

  7. Video flowmeter

    DOEpatents

    Lord, D.E.; Carter, G.W.; Petrini, R.R.

    1981-06-10

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid.

  8. Video flowmeter

    DOEpatents

    Lord, David E.; Carter, Gary W.; Petrini, Richard R.

    1983-01-01

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid (10) containing entrained particles (12) is formed and positioned by a rod optic lens assembly (31) on the raster area of a low-light level television camera (20). The particles (12) are illuminated by light transmitted through a bundle of glass fibers (32) surrounding the rod optic lens assembly (31). Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen (40). The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid (10).

  9. Beam simulation and radiation dose calculation at the Advanced Photon Source with shower, an Interface Program to the EGS4 code system

    SciTech Connect

    Emery, L.

    1995-07-01

The interface program shower to the EGS4 Monte Carlo electromagnetic cascade shower simulation code system was written to facilitate the definition of complicated target and shielding geometries and to simplify the handling of input and output of data. The geometry is defined by a series of namelist commands in an input file. The input and output beam data files follow the SDDS (self-describing data set) protocol, which makes the files compatible with other physics codes that follow the same protocol. For instance, one can use the results of the cascade shower simulation as the input data for an accelerator tracking code. The shower code has also been used to calculate the bremsstrahlung component of radiation doses for possible beam loss scenarios at the Advanced Photon Source (APS) at Argonne National Laboratory.

  10. Joint Video Summarization and Transmission Adaptation for Energy-Efficient Wireless Video Streaming

    NASA Astrophysics Data System (ADS)

    Li, Zhu; Zhai, Fan; Katsaggelos, Aggelos K.

    2008-12-01

    The deployment of the higher data rate wireless infrastructure systems and the emerging convergence of voice, video, and data services have been driving various modern multimedia applications, such as video streaming and mobile TV. However, the greatest challenge for video transmission over an uplink multiaccess wireless channel is the limited channel bandwidth and battery energy of a mobile device. In this paper, we pursue an energy-efficient video communication solution through joint video summarization and transmission adaptation over a slow fading wireless channel. Video summarization, coding and modulation schemes, and packet transmission are optimally adapted to the unique packet arrival and delay characteristics of the video summaries. In addition to the optimal solution, we also propose a heuristic solution that has close-to-optimal performance. Operational energy efficiency versus video distortion performance is characterized under a summarization setting. Simulation results demonstrate the advantage of the proposed scheme in energy efficiency and video transmission quality.

  11. Magnetic Braking: A Video Analysis

    NASA Astrophysics Data System (ADS)

    Molina-Bolívar, J. A.; Abella-Palacios, A. J.

    2012-10-01

This paper presents a laboratory exercise that introduces students to the use of video analysis software and the Lenz's law demonstration. Digital techniques have proved to be very useful for the understanding of physical concepts. In particular, the availability of affordable digital video offers students the opportunity to actively engage in kinematics in introductory-level physics.1,2 By using a digital video's frame-advance features and "marking" the position of a moving object in each frame, students are able to more precisely determine the position of an object at much smaller time increments than would be possible with common time devices. Once the student collects data consisting of positions and times, these values may be manipulated to determine velocity and acceleration. There are a variety of commercial and free applications that can be used for video analysis. Because the relevant technology has become inexpensive, video analysis has become a prevalent tool in introductory physics courses.
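The position-to-velocity-to-acceleration workflow the exercise describes amounts to finite differencing of the marked positions; a minimal sketch, assuming free-fall data sampled at a hypothetical 30 frames per second:

```python
import numpy as np

# Hypothetical tracking data: object in free fall, position marked every frame.
g, dt = 9.8, 1.0 / 30.0
t = np.arange(0, 1.0, dt)
y = 0.5 * g * t**2                  # positions "clicked" frame by frame

# Central differences recover velocity and acceleration from the (t, y) pairs.
v = np.gradient(y, dt)
a = np.gradient(v, dt)

# Interior points reproduce v = g t and a = g (central differences are exact
# on a quadratic; only the one-sided endpoints deviate).
assert np.allclose(v[1:-1], g * t[1:-1], atol=1e-6)
assert np.allclose(a[2:-2], g, atol=1e-6)
```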

  12. High expression of CAI2, a 9p21-embedded long non-coding RNA, contributes to advanced stage neuroblastoma

    PubMed Central

    Barnhill, Lisa M.; Williams, Richard T.; Cohen, Olga; Kim, Youngjin; Batova, Ayse; Mielke, Jenna A.; Messer, Karen; Pu, Minya; Bao, Lei; Yu, Alice L.; Diccianni, Mitchell B.

    2014-01-01

    Neuroblastoma is a pediatric cancer with significant genomic and biological heterogeneity. p16 and ARF, two important tumor suppressor genes on chromosome 9p21, are inactivated commonly in most cancers but paradoxically overexpressed in neuroblastoma. Here we report that exon γ in p16 is also part of an undescribed long non-coding RNA (lncRNA) that we have termed CAI2 (CDKN2A/ARF Intron 2 lncRNA). CAI2 is a single exon gene with a poly A signal located in but independent of the p16/ARF exon 3. CAI2 is expressed at very low levels in normal tissue but is highly expressed in most tumor cell lines with an intact 9p21 locus. Concordant expression of CAI2 with p16 and ARF in normal tissue along with the ability of CAI2 to induce p16 expression suggested that CAI2 may regulate p16 and/or ARF. In neuroblastoma cells transformed by serial passage in vitro, leading to more rapid proliferation, CAI2, p16 and ARF expression all increased dramatically. A similar relationship was also observed in primary neuroblastomas where CAI2 expression was significantly higher in advanced stage neuroblastoma, independently of MYCN amplification. Consistent with its association with high risk disease, CAI2 expression was also significantly associated with poor clinical outcomes, although this effect was reduced when adjusted for MYCN amplification. Taken together, our findings suggested that CAI2 contributes to the paradoxical overexpression of p16 in neuroblastoma, where CAI2 may offer a useful biomarker of high-risk disease. PMID:25028366

  13. Challenge problem and milestones for : Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC).

    SciTech Connect

    Freeze, Geoffrey A.; Wang, Yifeng; Howard, Robert; McNeish, Jerry A.; Schultz, Peter Andrew; Arguello, Jose Guadalupe, Jr.

    2010-09-01

    This report describes the specification of a challenge problem and associated challenge milestones for the Waste Integrated Performance and Safety Codes (IPSC) supporting the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The NEAMS challenge problems are designed to demonstrate proof of concept and progress towards IPSC goals. The goal of the Waste IPSC is to develop an integrated suite of modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. To demonstrate proof of concept and progress towards these goals and requirements, a Waste IPSC challenge problem is specified that includes coupled thermal-hydrologic-chemical-mechanical (THCM) processes that describe (1) the degradation of a borosilicate glass waste form and the corresponding mobilization of radionuclides (i.e., the processes that produce the radionuclide source term), (2) the associated near-field physical and chemical environment for waste emplacement within a salt formation, and (3) radionuclide transport in the near field (i.e., through the engineered components - waste form, waste package, and backfill - and the immediately adjacent salt). The initial details of a set of challenge milestones that collectively comprise the full challenge problem are also specified.

  14. Advanced Microsensors

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This video looks at a spinoff application of the technology from advanced microsensors -- those that monitor and determine conditions of spacecraft like the Space Shuttle. The application featured is concerned with the monitoring of the health of premature babies.

  15. Profile video

    NASA Astrophysics Data System (ADS)

    Voglewede, Paul E.; Zampieron, Jeffrey

    2009-05-01

    For unattended persistent surveillance there is a need for a system which provides the following information: target classification, target quantity estimate, cargo presence and characterization, direction of travel, and action. Over highly bandwidth restricted links, such as Iridium, SATCOM or HF, the data rates of common techniques are too high, even after aggressive compression, to deliver the required intelligence in a timely, low power manner. We propose the following solution to this data rate problem: Profile Video. Profile video is a new technique which provides all of the required information in a very low data-rate package.

  16. Video May Aid End-of-Life Decision-Making

    MedlinePlus

Video May Aid End-of-Life Decision-Making. Brief film helped heart failure patients ... (HealthDay News) -- Watching a video about end-of-life care options may help patients with advanced heart ...

  17. Comparison of the 3-D Deterministic Neutron Transport Code Attila® To Measure Data, MCNP And MCNPX For The Advanced Test Reactor

    SciTech Connect

D. Scott Lucas

    2005-09-01

    An LDRD (Laboratory Directed Research and Development) project is underway at the Idaho National Laboratory (INL) to apply the three-dimensional multi-group deterministic neutron transport code (Attila®) to criticality, flux and depletion calculations of the Advanced Test Reactor (ATR). This paper discusses the development of Attila models for ATR, capabilities of Attila, the generation and use of different cross-section libraries, and comparisons to ATR data, MCNP, MCNPX and future applications.

  18. Comparison of the PLTEMP code flow instability predictions with measurements made with electrically heated channels for the advanced test reactor.

    SciTech Connect

    Feldman, E.

    2011-06-09

When the University of Missouri Research Reactor (MURR) was designed in the 1960s the potential for fuel element burnout by a phenomenon referred to at that time as 'autocatalytic vapor binding' was of serious concern. This type of burnout was observed to occur at power levels considerably lower than those that were known to cause critical heat flux. The conversion of the MURR from HEU fuel to LEU fuel will probably require significant design changes, such as changes in coolant channel thicknesses, that could affect the thermal-hydraulic behavior of the reactor core. Therefore, the redesign of the MURR to accommodate an LEU core must address the same issues of fuel element burnout that were of concern in the 1960s. The Advanced Test Reactor (ATR) was designed at about the same time as the MURR and had similar concerns with regard to fuel element burnout. These concerns were addressed in the ATR by two groups of thermal-hydraulic tests that employed electrically heated simulated fuel channels. The Croft (1964), Reference 1, tests were performed at ANL. The Waters (1966), Reference 2, tests were performed at Hanford Laboratories in Richland, Washington. Since fuel element surface temperatures rise rapidly as burnout conditions are approached, channel surface temperatures were carefully monitored in these experiments. For self-protection, the experimental facilities were designed to cut off the electric power when rapidly increasing surface temperatures were detected. In both the ATR reactor and the tests with electrically heated channels, the heated length of the fuel plate was 48 inches, which is about twice that of the MURR. Whittle and Forgan (1967) independently conducted tests with electrically heated rectangular channels that were similar to the tests by Croft and by Waters. In the Whittle and Forgan tests the heated length of the channel varied among the tests and was between 16 and 24 inches. Both Waters and Whittle and Forgan show that the cause of the

  19. Interactive Video.

    ERIC Educational Resources Information Center

    Boyce, Carol

    1992-01-01

    A workshop on interactive video was designed for fourth and fifth grade students, with the goals of familiarizing students with laser disc technology, developing a cadre of trained students to train other students and staff, and challenging able learners to utilize higher level thinking skills while conducting a research project. (JDD)

  20. SimER: An advanced three-dimensional environmental risk assessment code for contaminated land and radioactive waste disposal applications

    SciTech Connect

    Kwong, S.; Small, J.; Tahar, B.

    2007-07-01

SimER (Simulations of Environmental Risks) is a powerful performance assessment code developed to undertake assessments of both contaminated land and radioactive waste disposal. The code can undertake both deterministic and probabilistic calculations, and is fully compatible with all available best practice guidance and regulatory requirements. SimER represents the first time-dependent performance assessment code capable of providing a detailed representation of system evolution that is designed specifically to address issues found across UK nuclear sites. The code adopts a flexible input language with built-in unit checking to model the whole system (i.e. near-field, geosphere and biosphere) in a single code, thus avoiding the need for any time-consuming data transfer and the often laborious interface between different codes. This greatly speeds up the assessment process and has major quality assurance advantages. SimER thus provides a cost-effective tool for undertaking projects involving risk assessment, from contaminated land assessments through to full post-closure safety cases and other work supporting key site endpoint decisions. A Windows version (v1.0) of the code was first released in June 2004. The code has subsequently been subject to further testing and development. In particular, Viewers have been developed to provide users with visual information to assist the development of SimER models, and output can now be produced in a format that can be used by the FieldView software to view the results and produce animation from the SimER calculations. More recently a Linux version of the code has been produced to extend coverage to the commonly used platform bases and offer an improved operating environment for probabilistic assessments. Results from the verification of the SimER code for a sample of test cases for both contaminated land and waste disposal applications are presented. (authors)

  1. Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.

    PubMed

    Liu, Tao; Lin, Changyu; Djordjevic, Ivan B

    2016-06-27

In this paper, we first describe a 9-symbol non-uniform signaling scheme based on Huffman code, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing constellation figure of merit (CFM). The proposed non-uniform polarization multiplexed signaling 9-QAM scheme has the same spectral efficiency as the conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM, combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC coded 9-QAM scheme outperforms nonbinary LDPC coded uniform 8-QAM by at least 0.8 dB. PMID:27410549
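A toy version of the Huffman-based non-uniform signaling idea, with illustrative codeword lengths (one 1-bit and eight 4-bit words; the paper's actual code design is not reproduced here): parsing a uniform bitstream with a complete prefix code makes different constellation symbols occur with different probabilities.

```python
from fractions import Fraction

# Illustrative prefix code for 9 symbols: one 1-bit word and eight 4-bit words.
# (These lengths are an assumption for the sketch, not the paper's design.)
code = {'0': 0}
code.update({'1' + format(i, '03b'): i + 1 for i in range(8)})

# A complete prefix code satisfies the Kraft equality with sum exactly 1.
kraft = sum(Fraction(1, 2 ** len(w)) for w in code)
assert kraft == 1

# Under i.i.d. uniform input bits, symbol probability is 2^(-codeword length):
# the centre symbol is sent half the time, each outer symbol 1/16 of the time.
probs = {sym: Fraction(1, 2 ** len(w)) for w, sym in code.items()}
assert probs[0] == Fraction(1, 2) and probs[8] == Fraction(1, 16)

def parse(bits):
    """Greedily map a uniform bitstream onto non-uniform symbols."""
    out, i = [], 0
    while i < len(bits):
        w = bits[i] if bits[i] == '0' else bits[i:i + 4]
        if w != '0' and len(w) < 4:
            break  # incomplete tail
        out.append(code[w])
        i += len(w)
    return out

assert parse('010110') == [0, 4, 0]
```

With these lengths the average cost is 1/2·1 + 1/2·4 = 2.5 input bits per transmitted symbol; the paper's constellation and rate bookkeeping differ in detail.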

  2. Priority-based methods for reducing the impact of packet loss on HEVC encoded video streams

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos

    2013-02-01

The rapid growth in the use of video streaming over IP networks has outstripped the rate at which new network infrastructure has been deployed. These bandwidth-hungry applications now comprise a significant part of all Internet traffic and present major challenges for network service providers. The situation is more acute in mobile networks where the available bandwidth is often limited. Work towards the standardisation of High Efficiency Video Coding (HEVC), the next generation video coding scheme, is currently on track for completion in 2013. HEVC offers the prospect of a 50% improvement in compression over the current H.264 Advanced Video Coding standard (H.264/AVC) for the same quality. However, there has been very little published research on HEVC streaming or the challenges of delivering HEVC streams in resource-constrained network environments. In this paper we consider the problem of adapting an HEVC encoded video stream to meet the bandwidth limitation in a mobile network environment. Video sequences were encoded using the Test Model under Consideration (TMuC HM6) for HEVC. Network abstraction layer (NAL) units were packetized, on a one NAL unit per RTP packet basis, and transmitted over a realistic hybrid wired/wireless testbed configured with dynamically changing network path conditions and multiple independent network paths from the streamer to the client. Two different schemes for the prioritisation of RTP packets, based on the NAL units they contain, have been implemented and empirically compared using a range of video sequences, encoder configurations, bandwidths and network topologies. In the first prioritisation method the importance of an RTP packet was determined by the type of picture and the temporal switching point information carried in the NAL unit header. Packets containing parameter set NAL units and video coding layer (VCL) NAL units of the instantaneous decoder refresh (IDR) and the clean random access (CRA) pictures were given the
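The first prioritisation method can be sketched from the HEVC NAL unit header: nal_unit_type occupies bits 1-6 of the first header byte (Rec. ITU-T H.265), and the high-priority class below (parameter sets, IDR and CRA pictures) follows the description above, though the exact priority table is illustrative.

```python
# nal_unit_type values from Rec. ITU-T H.265; the two-class split is a sketch
# of the scheme described above, not the paper's full priority table.
HIGH_PRIORITY_TYPES = {
    19, 20,        # IDR_W_RADL, IDR_N_LP
    21,            # CRA_NUT
    32, 33, 34,    # VPS, SPS, PPS parameter sets
}

def nal_unit_type(nal: bytes) -> int:
    """Extract nal_unit_type from the 2-byte HEVC NAL unit header."""
    return (nal[0] >> 1) & 0x3F

def rtp_priority(nal: bytes) -> str:
    """One NAL unit per RTP packet: tag the packet from its payload header."""
    return 'high' if nal_unit_type(nal) in HIGH_PRIORITY_TYPES else 'low'

# First header byte of an IDR_W_RADL slice: type 19 -> (19 << 1) = 0x26.
assert rtp_priority(bytes([0x26, 0x01])) == 'high'
# Ordinary trailing picture (TRAIL_R, type 1) -> (1 << 1) = 0x02.
assert rtp_priority(bytes([0x02, 0x01])) == 'low'
```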

  3. Real-time data compression of broadcast video signals

    NASA Technical Reports Server (NTRS)

    Shalkauser, Mary Jo W. (Inventor); Whyte, Wayne A., Jr. (Inventor); Barnes, Scott P. (Inventor)

    1991-01-01

    A non-adaptive predictor, a nonuniform quantizer, and a multi-level Huffman coder are incorporated into a differential pulse code modulation system for coding and decoding broadcast video signals in real time.

  4. Real-time data compression of broadcast video signals

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary J. (Inventor); Whyte, Wayne A., Jr. (Inventor); Barnes, Scott P. (Inventor)

    1990-01-01

    A non-adaptive predictor, a nonuniform quantizer, and a multi-level Huffman coder are incorporated into a differential pulse code modulation system for coding and decoding broadcast video signals in real time.

  5. Salient motion features for video quality assessment.

    PubMed

    Ćulibrk, Dubravko; Mirković, Milan; Zlokolica, Vladimir; Pokrić, Maja; Crnojević, Vladimir; Kukolj, Dragan

    2011-04-01

Design of algorithms that are able to estimate video quality as perceived by human observers is of interest for a number of applications. Depending on the video content, the artifacts introduced by the coding process can be more or less pronounced and diversely affect the quality of videos, as estimated by humans. While it is well understood that motion affects both human attention and coding quality, this relationship has only recently started gaining attention among the research community, when video quality assessment (VQA) is concerned. In this paper, the effect of calculating several objective measure features, related to video coding artifacts, separately for salient motion and other regions of the frames of the sequence is examined. In addition, we propose a new scheme for quality assessment of coded video streams, which takes into account salient motion. A standardized procedure was used to calculate the Mean Opinion Score (MOS), based on experiments conducted with a group of non-expert observers viewing standard definition (SD) sequences. MOS measurements were taken for nine different SD sequences, coded using MPEG-2 at five different bit-rates. Eighteen different published approaches related to measuring the amount of coding artifacts objectively on a single-frame basis were implemented. Additional features describing the intensity of salient motion in the frames, as well as the intensity of coding artifacts in the salient motion regions were proposed. Automatic feature selection was performed to determine the subset of features most correlated to video quality. The results show that salient-motion-related features enhance prediction and indicate that the presence of blocking effect artifacts and blurring in the salient regions and variance and intensity of temporal changes in non-salient regions influence the perceived video quality. PMID:20876020
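The paper's eighteen artifact measures are not specified here, but a generic stand-in shows the idea of computing a blocking-effect feature separately over a salient-motion mask: the mean luminance jump across 8x8 block boundaries, restricted to masked pixels.

```python
import numpy as np

def blockiness(frame, mask):
    """Mean absolute luminance jump across vertical 8x8 block boundaries,
    restricted to pixels where mask is True (e.g. a salient-motion map).
    A generic stand-in for the per-region artifact features, not the
    paper's exact measure."""
    edges = np.zeros_like(mask, dtype=bool)
    edges[:, 7:-1:8] = True                  # columns just left of each boundary
    sel = edges & mask & np.roll(mask, -1, axis=1)
    jump = np.abs(frame.astype(float) - np.roll(frame, -1, axis=1).astype(float))
    return float(jump[sel].mean()) if sel.any() else 0.0

# A frame of flat 8x8 blocks alternating between levels 0 and 16 has a jump
# of 16 at every block boundary.
frame = np.kron(np.array([[0, 16], [16, 0]]), np.ones((8, 8)))
assert blockiness(frame, np.ones_like(frame, dtype=bool)) == 16.0
# Restricting to a "salient" upper half gives the same per-pixel statistic here.
salient = np.zeros_like(frame, dtype=bool); salient[:8] = True
assert blockiness(frame, salient) == 16.0
```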

  6. An automated, video tape-based image archiving system.

    PubMed

    Vesely, I; Eickmeier, B; Campbell, G

    1991-01-01

We have developed an image storage and retrieval system that makes use of a Super-VHS video tape recorder and a personal computer fitted with an interface board and a video frame grabber. Under PC control, video images are acquired into the frame grabber, a numeric bar code is graphically superimposed for identification purposes, and the composite images are recorded on video tape. During retrieval, the bar code is decoded in real-time and the desired images are automatically retrieved. This video tape-based system enables the images to be previewed and retrieved much faster than if stored in digital format. PMID:1769220
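A minimal sketch of the stamp-and-decode idea, assuming a hypothetical 16-bit binary bar pattern (the abstract does not describe the actual bar-code symbology):

```python
import numpy as np

BITS = 16  # hypothetical code width; the real symbology is not specified

def stamp(image, ident):
    """Superimpose a numeric ID as a binary 'bar code' on the top rows."""
    out = image.copy()
    w = image.shape[1] // BITS
    for i in range(BITS):
        level = 255 if (ident >> (BITS - 1 - i)) & 1 else 0
        out[:8, i * w:(i + 1) * w] = level
    return out

def read_ident(image):
    """Recover the ID by thresholding each bar (real-time decode stand-in)."""
    w = image.shape[1] // BITS
    bits = [int(image[:8, i * w:(i + 1) * w].mean() > 127) for i in range(BITS)]
    return int(''.join(map(str, bits)), 2)

frame = np.full((64, 64), 90, dtype=np.uint8)   # stand-in video frame
assert read_ident(stamp(frame, 12345)) == 12345  # ID survives the round trip
```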

  7. Post-processing of compressed video using a unified metric for digital video processing

    NASA Astrophysics Data System (ADS)

    Boroczky, Lilla; Yang, Yibin

    2004-01-01

In this paper we propose a novel post-processing system for compressed video sources. The proposed system explores the interaction between artifact reduction and sharpness/resolution enhancement to achieve optimal video quality for compressed (e.g. MPEG-2) sources. It is based on the Unified Metric for Digital Video Processing (UMDVP), which adaptively controls the post-processing algorithms according to the coding characteristics of the decoded video. The experiments carried out on several MPEG-2 encoded video sequences have shown significant improvement in picture quality compared to a system without the UMDVP control and to a system that did not exploit the interaction between artifact reduction and video enhancement. The UMDVP as well as the proposed post-processing system can be easily adapted for different coding standards, such as MPEG-4 and H.26x.

  8. Overview of AVS-video: tools, performance and complexity

    NASA Astrophysics Data System (ADS)

    Yu, Lu; Yi, Feng; Dong, Jie; Zhang, Cixun

    2005-07-01

The Audio Video coding Standard (AVS) is established by the Working Group of China of the same name. AVS-video is an application-driven coding standard. AVS Part 2 targets high-definition digital video broadcasting and high-density storage media, and AVS Part 7 targets low-complexity, low-picture-resolution mobile applications. Integer transform, intra- and inter-picture prediction, in-loop deblocking filtering, and context-based two-dimensional variable-length coding are the major compression tools in AVS-video, which are well tuned for the target applications. It achieves performance similar to H.264/AVC at lower cost.

  9. Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary JO; Whyte, Wayne A.

    1991-01-01

Advances in very large scale integration and recent work in the field of bandwidth-efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM (differential pulse code modulation)-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the codec are described, and performance results are provided.
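A stripped-down DPCM loop illustrates the codec's structure; the uniform quantizer and the omission of the multi-level Huffman stage are simplifications of the system described above, which used a nonuniform quantizer.

```python
STEP = 8  # uniform quantizer step; the flight codec's nonuniform quantizer
          # and Huffman entropy coder are simplified away in this sketch

def dpcm_encode(samples):
    """Previous-reconstructed-sample predictor + residual quantizer.
    Tracking the decoder's reconstruction (closed loop) stops errors
    from accumulating along the scan line."""
    pred, out = 0, []
    for s in samples:
        q = round((s - pred) / STEP)   # quantized prediction residual
        out.append(q)
        pred = pred + q * STEP         # decoder-side reconstruction
    return out

def dpcm_decode(codes):
    pred, out = 0, []
    for q in codes:
        pred = pred + q * STEP
        out.append(pred)
    return out

line = [0, 3, 10, 40, 90, 100, 98, 97, 50, 0]   # hypothetical pixel row
rec = dpcm_decode(dpcm_encode(line))
# Closed-loop prediction keeps every sample within half a quantizer step.
assert all(abs(a - b) <= STEP / 2 for a, b in zip(line, rec))
```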

  10. Teaching Social Studies with Video Games

    ERIC Educational Resources Information Center

    Maguth, Brad M.; List, Jonathan S.; Wunderle, Matthew

    2015-01-01

    Today's youth have grown up immersed in technology and are increasingly relying on video games to solve problems, engage socially, and find entertainment. Yet research and vignettes of teachers actually using video games to advance student learning in social studies is scarce (Hutchinson 2007). This article showcases how social studies…

  11. Commercial Video Games in the Science Classroom

    ERIC Educational Resources Information Center

    Angelone, Lauren

    2010-01-01

    There's no denying that middle school students are interested in video games. With such motivation present, we as teachers should harness this media in a productive way in our classrooms. Students today are much more technologically advanced than ever before, and using video games is one more way to use something from their world as a teaching…

  12. This Rock 'n' Roll Video Teaches Math

    ERIC Educational Resources Information Center

    Niess, Margaret L.; Walker, Janet M.

    2009-01-01

    Mathematics is a discipline that has significantly advanced through the use of digital technologies with improved computational, graphical, and symbolic capabilities. Digital videos can be used to present challenging mathematical questions for students. Video clips offer instructional possibilities for moving students from a passive mode of…

  13. Fulldome Video: An Emerging Technology for Education

    ERIC Educational Resources Information Center

    Law, Linda E.

    2006-01-01

    This article talks about fulldome video, a new technology which has been adopted fairly extensively by the larger, well-funded planetariums. Fulldome video, also called immersive projection, can help teach subjects ranging from geology to history to chemistry. The rapidly advancing progress of projection technology has provided high-resolution…

  14. Video Analysis with a Web Camera

    ERIC Educational Resources Information Center

    Wyrembeck, Edward P.

    2009-01-01

    Recent advances in technology have made video capture and analysis in the introductory physics lab even more affordable and accessible. The purchase of a relatively inexpensive web camera is all you need if you already have a newer computer and Vernier's Logger Pro 3 software. In addition to Logger Pro 3, other video analysis tools such as…

  15. State Skill Standards: Digital Video & Broadcast Production

    ERIC Educational Resources Information Center

    Bullard, Susan; Tanner, Robin; Reedy, Brian; Grabavoi, Daphne; Ertman, James; Olson, Mark; Vaughan, Karen; Espinola, Ron

    2007-01-01

    The standards in this document are for digital video and broadcast production programs and are designed to clearly state what the student should know and be able to do upon completion of an advanced high-school program. Digital Video and Broadcast Production is a program that consists of the initial fundamentals and sequential courses that prepare…

  16. Effect of video decoder errors on video interpretability

    NASA Astrophysics Data System (ADS)

    Young, Darrell L.

    2014-06-01

The advancement in video compression technology can result in more sensitivity to bit errors. Bit errors can propagate, causing sustained loss of interpretability. In the worst case, the decoder "freezes" until it can re-synchronize with the stream. Detection of artifacts enables downstream processes to avoid corrupted frames. A simple template approach to detect block stripes and a more advanced cascade approach to detect compression artifacts were shown to correlate with the presence of artifacts and decoder messages.

  17. Experimental and Thermalhydraulic Code Assessment of the Transient Behavior of the Passive Condenser System in an Advanced Boiling Water Reactor

    SciTech Connect

    S.T. Revankar; W. Zhou; Gavin Henderson

    2008-07-08

The main goal of the project was to study analytically and experimentally the condensation heat transfer for a passive condenser system such as that of the GE Economic Simplified Boiling Water Reactor (ESBWR). The effects of noncondensable gas in the condenser tube and of a reduction in the secondary pool water level on the condensation heat transfer coefficient were the main focus of this research. The objectives of this research were to: 1) obtain experimental data on the local and tube-averaged condensation heat transfer rates for the PCCS with non-condensable gas and with changes in the secondary pool water level, 2) assess the RELAP5 and TRACE computer codes against the experimental data, and 3) develop a mathematical model and heat transfer correlation for the condensation phenomena for system code application. The project involved experimentation, theoretical model development and verification, and thermal-hydraulic code assessment.

  18. Video System Highlights Hydrogen Fires

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Gleman, Stuart M.; Moerk, John S.

    1992-01-01

    Video system combines images from visible spectrum and from three bands in infrared spectrum to produce color-coded display in which hydrogen fires distinguished from other sources of heat. Includes linear array of 64 discrete lead selenide mid-infrared detectors operating at room temperature. Images overlaid on black and white image of same scene from standard commercial video camera. In final image, hydrogen fires appear red; carbon-based fires, blue; and other hot objects, mainly green and combinations of green and red. Where no thermal source present, image remains in black and white. System enables high degree of discrimination between hydrogen flames and other thermal emitters.

  19. Electromagnetic self-consistent field initialization and fluid advance techniques for hybrid-kinetic PWFA code Architect

    NASA Astrophysics Data System (ADS)

    Massimo, F.; Marocchino, A.; Rossi, A. R.

    2016-09-01

    The realization of Plasma Wakefield Acceleration experiments with high quality of the accelerated bunches requires an increasing number of numerical simulations to perform first-order assessments for the experimental design and online analysis of the experimental results. Particle-in-Cell codes are the state-of-the-art tools for studying the beam-plasma interaction mechanism, but their requirements in terms of number of cores and computational time make them unsuitable for quick parametric scans. Considerable interest has thus been shown in methods that reduce the computational time needed for the simulation of plasma acceleration. Such methods include the use of hybrid kinetic-fluid models, which treat the relativistic bunches as in a PIC code and the background plasma electrons as a fluid. A technique to properly initialize the bunch electromagnetic fields in the time-explicit hybrid kinetic-fluid code Architect is presented, as well as the implementation of the Flux Corrected Transport scheme for the fluid equations integrated in the code.

  20. Advanced Technology Airfoil Research, volume 1, part 1. [conference on development of computational codes and test facilities

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A comprehensive review of all NASA airfoil research, conducted both in-house and under grant and contract, as well as a broad spectrum of airfoil research outside of NASA is presented. Emphasis is placed on the development of computational aerodynamic codes for airfoil analysis and design, the development of experimental facilities and test techniques, and all types of airfoil applications.

  1. Time synchronized video systems

    NASA Technical Reports Server (NTRS)

    Burnett, Ron

    1994-01-01

    The idea of synchronizing multiple video recordings to some type of 'range' time has been tried with varying degrees of success in the past. Combining this requirement with existing time code standards (SMPTE) and new innovations in desktop multimedia, however, has afforded an opportunity to increase the flexibility and usefulness of such efforts without adding costs over traditional data recording and reduction systems. The concept described can use IRIG, GPS or a battery-backed internal clock as the master time source. By converting that time source to Vertical Interval Time Code or Longitudinal Time Code, both in accordance with the SMPTE standards, the user obtains a tape that contains machine/computer-readable time code suitable for use with off-the-shelf editing equipment. Accuracy on playback is then determined by the playback system chosen by the user. Accuracies of +/- 2 frames are common among inexpensive systems, and complete frame accuracy is more a matter of the user's budget than the capability of the recording system.
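
    The frame-count-to-timecode mapping behind SMPTE longitudinal and vertical-interval time code can be sketched for the non-drop-frame case. (Drop-frame 29.97 fps timecode additionally skips frame numbers 00 and 01 at most minute boundaries; that logic is not shown here.)

```python
def frames_to_timecode(frame_count, fps=30):
    """Non-drop-frame SMPTE-style HH:MM:SS:FF string for an integer
    frame count at an integer frame rate."""
    ff = frame_count % fps
    s = frame_count // fps           # whole seconds elapsed
    return f"{s // 3600:02d}:{s // 60 % 60:02d}:{s % 60:02d}:{ff:02d}"
```

    An editing system that reads this code from tape can then locate any frame deterministically, which is what makes off-the-shelf frame-accurate editing possible.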

  2. Overcoming Challenges: "Going Mobile with Your Own Video Models"

    ERIC Educational Resources Information Center

    Carnahan, Christina R.; Basham, James D.; Christman, Jennifer; Hollingshead, Aleksandra

    2012-01-01

    Video modeling has been shown to be an effective intervention for students with a variety of disabilities. Traditional video models present problems in terms of application across meaningful settings, such as in the community or even across the school environment. However, with advances in mobile technology, portable devices with video capability…

  3. Implementation issues in source coding

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Yun-Chung; Hadenfeldt, A. C.

    1989-01-01

    An edge preserving image coding scheme which can be operated in both a lossy and a lossless manner was developed. The technique is an extension of the lossless encoding algorithm developed for the Mars observer spectral data. It can also be viewed as a modification of the DPCM algorithm. A packet video simulator was also developed from an existing modified packet network simulator. The coding scheme for this system is a modification of the mixture block coding (MBC) scheme described in the last report. Coding algorithms for packet video were also investigated.
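
    The DPCM view mentioned above can be illustrated with a minimal closed-loop first-order predictor: a quantizer step of 1 on integer samples gives a lossless mode, larger steps a lossy mode, echoing the dual-mode scheme described. Function names are illustrative, not from the report.

```python
def dpcm_encode(samples, step=1):
    """First-order DPCM: quantize the difference from the previously
    reconstructed sample (closed loop, so errors do not accumulate)."""
    codes, recon, prev = [], [], 0
    for x in samples:
        q = round((x - prev) / step)   # quantized prediction residual
        codes.append(q)
        prev = prev + q * step         # decoder-matched reconstruction
        recon.append(prev)
    return codes, recon

def dpcm_decode(codes, step=1):
    out, prev = [], 0
    for q in codes:
        prev += q * step
        out.append(prev)
    return out
```

    With step=1 and integer input the round-trip is exact; with a larger step each reconstructed sample is off by at most half a step.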

  4. Speech coding

    SciTech Connect

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence end-to-end performance of the digital link becomes essentially independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques and that is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that the
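
    As a concrete taste of waveform speech coding, the continuous mu-law companding curve (which the segmented 8-bit G.711 codec approximates) compresses a normalized sample before uniform quantization and expands it on reconstruction, giving quiet signals finer effective resolution:

```python
import math

MU = 255.0  # North American / Japanese mu-law constant

def mulaw_compress(x):
    """Continuous mu-law companding of a sample x in [-1, 1]."""
    s = 1.0 if x >= 0 else -1.0
    return s * math.log1p(MU * abs(x)) / math.log1p(MU)

def mulaw_expand(y):
    """Inverse of mulaw_compress."""
    s = 1.0 if y >= 0 else -1.0
    return s * math.expm1(abs(y) * math.log1p(MU)) / MU
```

    Note how a small sample like 0.01 is mapped to a much larger companded value, so a uniform quantizer after companding spends most of its levels on the quiet range where speech energy concentrates.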

  5. Compressed sensing based video multicast

    NASA Astrophysics Data System (ADS)

    Schenkel, Markus B.; Luo, Chong; Frossard, Pascal; Wu, Feng

    2010-07-01

    We propose a new scheme for wireless video multicast based on compressed sensing. It has the property of graceful degradation and, unlike systems adhering to traditional separate coding, it does not suffer from a cliff effect. Compressed sensing is applied to generate measurements of equal importance from a video, such that a receiver with a better channel naturally has more information at hand to reconstruct the content without penalizing others. We experimentally compare different random matrices at the encoder side in terms of their performance for video transmission. We further investigate how properties of natural images can be exploited to improve reconstruction performance by transmitting a small amount of side information, and we propose a way of exploiting inter-frame correlation by extending only the decoder. Finally, we compare our results with a different scheme targeting the same problem through simulations, and find competitive results for some channel configurations.
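
    The recover-from-random-measurements idea can be sketched with orthogonal matching pursuit on a toy sparse signal. The matrix shape, sparsity, and seed below are arbitrary choices for illustration; the paper's actual system applies random measurements to real video frames and reconstructs via convex programming.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                        # signal length, measurements, sparsity
x = np.zeros(n)
x[[5, 20, 41]] = [1.5, -2.0, 0.8]          # a k-sparse toy "frame"
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x                                # measurements of equal importance

# Orthogonal Matching Pursuit: greedily pick the column most correlated
# with the residual, then least-squares refit on the chosen support.
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ r))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    r = y - Phi[:, support] @ coef
x_hat = np.zeros(n)
x_hat[support] = coef
```

    A receiver that collects more rows of Phi simply runs the same recovery with a taller matrix and gets a better reconstruction, which is the graceful-degradation property.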

  6. Application of advanced computational procedures for modeling solar-wind interactions with Venus: Theory and computer code

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.; Klenke, D.; Trudinger, B. C.; Spreiter, J. R.

    1980-01-01

    Computational procedures are developed and applied to the prediction of solar wind interaction with nonmagnetic terrestrial planet atmospheres, with particular emphasis to Venus. The theoretical method is based on a single fluid, steady, dissipationless, magnetohydrodynamic continuum model, and is appropriate for the calculation of axisymmetric, supersonic, super-Alfvenic solar wind flow past terrestrial planets. The procedures, which consist of finite difference codes to determine the gasdynamic properties and a variety of special purpose codes to determine the frozen magnetic field, streamlines, contours, plots, etc. of the flow, are organized into one computational program. Theoretical results based upon these procedures are reported for a wide variety of solar wind conditions and ionopause obstacle shapes. Plasma and magnetic field comparisons in the ionosheath are also provided with actual spacecraft data obtained by the Pioneer Venus Orbiter.

  7. Video Clips for Youtube: Collaborative Video Creation as an Educational Concept for Knowledge Acquisition and Attitude Change Related to Obesity Stigmatization

    ERIC Educational Resources Information Center

    Zahn, Carmen; Schaeffeler, Norbert; Giel, Katrin Elisabeth; Wessel, Daniel; Thiel, Ansgar; Zipfel, Stephan; Hesse, Friedrich W.

    2014-01-01

    Mobile phones and advanced web-based video tools have pushed forward new paradigms for using video in education: Today, students can readily create and broadcast their own digital videos for others and create entirely new patterns of video-based information structures for modern online-communities and multimedia environments. This paradigm shift…

  8. Motion-adaptive compressive coded apertures

    NASA Astrophysics Data System (ADS)

    Harmany, Zachary T.; Oh, Albert; Marcia, Roummel; Willett, Rebecca

    2011-09-01

    This paper describes an adaptive compressive coded aperture imaging system for video based on motion-compensated video sparsity models. In particular, motion models based on optical flow and sparse deviations from optical flow (i.e. salient motion) can be used to (a) predict future video frames from previous compressive measurements, (b) perform reconstruction using efficient online convex programming techniques, and (c) adapt the coded aperture to yield higher reconstruction fidelity in the vicinity of this salient motion.
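
    The flow-based prediction idea can be caricatured with a constant integer motion vector applied to a whole frame; real motion-compensated sparsity models use dense, sub-pixel flow fields, so this shows only the shape of the idea.

```python
import numpy as np

def predict_next(frame, flow):
    """Warp a frame by a constant integer (dy, dx) optical-flow vector."""
    dy, dx = flow
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

prev = np.zeros((16, 16))
prev[4:8, 4:8] = 1.0                 # a bright patch in the previous frame
cur = np.roll(prev, 2, axis=1)       # the patch moved 2 pixels right
pred = predict_next(prev, (0, 2))    # prediction using the estimated flow
```

    Wherever the prediction matches, only the sparse deviation (the salient motion) needs to be sensed and reconstructed, which is what motivates adapting the coded aperture around it.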

  9. Real-time video streaming in mobile cloud over heterogeneous wireless networks

    NASA Astrophysics Data System (ADS)

    Abdallah-Saleh, Saleh; Wang, Qi; Grecos, Christos

    2012-06-01

    Recently, the concept of Mobile Cloud Computing (MCC) has been proposed to offload the resource requirements in computational capabilities, storage and security from mobile devices into the cloud. Internet video applications such as real-time streaming are expected to be ubiquitously deployed and supported over the cloud for mobile users, who typically encounter a range of wireless networks of diverse radio access technologies during their roaming. However, real-time video streaming for mobile cloud users across heterogeneous wireless networks presents multiple challenges. The network-layer quality of service (QoS) provision to support high-quality mobile video delivery in this demanding scenario remains an open research question, and this in turn affects the application-level visual quality and impedes mobile users' perceived quality of experience (QoE). In this paper, we devise a framework to support real-time video streaming in this new mobile video networking paradigm and evaluate the performance of the proposed framework empirically through a lab-based yet realistic testing platform. One particular issue we focus on is the effect of users' mobility on the QoS of video streaming over the cloud. We design and implement a hybrid platform comprising a test-bed and an emulator, on which our concepts of mobile cloud computing, video streaming and heterogeneous wireless networks are implemented and integrated to allow the testing of our framework. As representative heterogeneous wireless networks, the popular WLAN (Wi-Fi) and MAN (WiMAX) networks are incorporated in order to evaluate the effects of handovers between these different radio access technologies. The H.264/AVC (Advanced Video Coding) standard is employed for real-time video streaming from a server to mobile users (client nodes) in the networks. Mobility support is introduced to enable a continuous streaming experience for a mobile user across the heterogeneous wireless network. Real-time video stream packets

  10. Wavelet-based audio embedding and audio/video compression

    NASA Astrophysics Data System (ADS)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
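
    Bit-plane coding, one of the three techniques named, transmits coefficient magnitudes most-significant-bit first so the stream is embedded: cutting it early simply yields a coarser reconstruction. A hedged sketch of the principle (not the paper's codec):

```python
def bitplane_encode(coeffs, planes=8):
    """Emit signs plus one bit per coefficient per magnitude plane,
    most significant plane first (embedded representation)."""
    mags = [abs(c) for c in coeffs]
    signs = [1 if c >= 0 else -1 for c in coeffs]
    bits = [[(m >> p) & 1 for m in mags] for p in range(planes - 1, -1, -1)]
    return signs, bits

def bitplane_decode(signs, bits, planes=8):
    """Reconstruct from however many planes were received."""
    mags = [0] * len(signs)
    for i, plane in enumerate(bits):
        p = planes - 1 - i
        mags = [m | (b << p) for m, b in zip(mags, plane)]
    return [s * m for s, m in zip(signs, mags)]
```

    Decoding all eight planes is exact; decoding only the top four bounds the magnitude error by 15, illustrating the embedded, progressively refinable stream.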

  11. Adaptive dynamic programming for auto-resilient video streaming

    NASA Astrophysics Data System (ADS)

    Zhao, Juan; Li, Xingmei; Wang, Wei; Wu, Guoping

    2007-11-01

    Wireless video transmission encounters a higher error rate than wired networks, which introduces distortion into the error-sensitive compressed data, reducing the quality of the playback video. Therefore, to ensure end-to-end quality, wireless video needs a transmission system that includes both an efficient source coding scheme and transmission techniques robust to channel errors. This paper presents a dynamic programming algorithm for robust video streaming over error-prone channels. An auto-resilient multiple-description coding scheme with an optimized transmission strategy is proposed. The computational complexity of rate-distortion optimized video streaming is studied further and a dynamic programming algorithm is considered. Experimental results show that video streaming with adaptive dynamic programming achieves better playback video quality at the receiver when transmitted over an error-prone mobile channel.
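
    A dynamic programming formulation of this kind can be sketched as a budgeted protection-allocation problem: each packet contributes distortion d_i if lost, extra redundancy lowers its loss probability, and the DP finds the split minimizing expected distortion. This is a toy model, not the paper's algorithm; the names and the loss model are assumptions.

```python
def allocate_protection(dists, p_loss, budget, max_level=3):
    """DP over (packets x remaining budget): sending packet i with k
    extra redundancy units loses it with probability p_loss**(k + 1).
    Returns the minimum achievable expected distortion."""
    n = len(dists)
    INF = float("inf")
    best = [[INF] * (budget + 1) for _ in range(n + 1)]
    best[0] = [0.0] * (budget + 1)           # no packets: zero distortion
    for i in range(1, n + 1):
        for b in range(budget + 1):
            for k in range(min(b, max_level) + 1):
                cand = best[i - 1][b - k] + dists[i - 1] * p_loss ** (k + 1)
                if cand < best[i][b]:
                    best[i][b] = cand
    return best[n][budget]
```

    With distortions [10, 1], loss probability 0.1 and one spare unit, the DP protects the high-impact packet (expected distortion 0.2 rather than 1.01), which is the essence of rate-distortion optimized streaming decisions.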

  12. Technical and economic feasibility of integrated video service by satellite

    NASA Technical Reports Server (NTRS)

    Price, K. M.; Kwan, R. K.; White, L. W.; Garlow, R. K.; Henderson, T. R.

    1992-01-01

    A feasibility study is presented of utilizing modern satellite technology, or more advanced technology, to create a cost-effective, user-friendly, integrated video service, which can provide videophone, video conference, or other equivalent wideband service on demand. A system is described that permits a user to select a desired audience and establish the required links similar to arranging a teleconference by phone. Attention is given to video standards, video traffic scenarios, satellite system architecture, and user costs.

  13. Artifact reduction for MPEG-2 encoded video using a unified metric for digital video processing

    NASA Astrophysics Data System (ADS)

    Boroczky, Lilla; Yang, Yibin

    2003-06-01

    In this paper we propose a new deringing algorithm for MPEG-2 encoded video. It is based on a Unified Metric for Digital Video Processing (UMDVP) and therefore directly linked to the coding characteristics of the decoded video. Experiments carried out on various video sequences have shown noticeable improvement in picture quality and the proposed algorithm outperforms the deringing algorithm described in the MPEG-4 video standard. Coding artifacts, particularly ringing artifacts, are especially annoying on large high-resolution displays. To prevent the enlargement and enhancement of the ringing artifacts, we have applied the proposed deringing algorithm prior to resolution enhancement. Experiments have shown that in this configuration, the new deringing algorithm has significant positive impact on picture quality.

  14. Advances in video game methods and reporting practices (but still room for improvement): a commentary on Strobach, Frensch, and Schubert (2012).

    PubMed

    Boot, Walter R; Simons, Daniel J

    2012-10-01

    Strobach, Frensch, and Schubert (2012) presented evidence that action video game experience improves task-switching and reduces dual-task costs. Their design commendably adhered to many of the guidelines proposed by Boot, Blakely and Simons (2011) to overcome common method and interpretation problems in this literature. Adherence to these method guidelines is necessary in order to reduce the influence of demand characteristics, placebo effects, and underreporting that might otherwise produce false positive findings. In their paper, Strobach et al. (2012) appear to have misinterpreted some of these proposed guidelines, meaning that their methods did not eliminate possible sources of demand characteristics and differential placebo effects. At this important, early stage of video game research, reducing the likelihood of false positive findings is essential. In this commentary we clarify our methodological critiques and guidelines, identify ways in which this new study did and did not meet these guidelines, and discuss how these methodological issues should constrain the interpretation of the reported evidence. PMID:22964029

  15. Dynamic video summarization of home video

    NASA Astrophysics Data System (ADS)

    Lienhart, Rainer W.

    1999-12-01

    An increasing number of people own and use camcorders to make videos that capture their experiences and document their lives. These videos easily add up to many hours of material. Oddly, most of them are put into a storage box and never touched or watched again. The reasons for this are manifold. Firstly, the raw video material is unedited, and is therefore long-winded and lacking visually appealing effects. Video editing would help, but, it is still too time-consuming; people rarely find the time to do it. Secondly, watching the same tape more than a few times can be boring, since the video lacks any variation or surprise during playback. Automatic video abstracting algorithms can provide a method for processing videos so that users will want to play the material more often. However, existing automatic abstracting algorithms have been designed for feature films, newscasts or documentaries, and thus are inappropriate for home video material and raw video footage in general. In this paper, we present new algorithms for generating amusing, visually appealing and variable video abstracts of home video material automatically. They make use of a new, empirically motivated approach, also presented in the paper, to cluster time-stamped shots hierarchically into meaningful units. Last but not least, we propose a simple and natural extension of the way people acquire video - so-called on-the-fly annotations - which will allow a completely new set of applications on raw video footage as well as enable better and more selective automatic video abstracts. Moreover, our algorithms are not restricted to home video but can also be applied to raw video footage in general.

  16. Heuristic dynamic complexity coding

    NASA Astrophysics Data System (ADS)

    Škorupa, Jozef; Slowack, Jürgen; Mys, Stefaan; Lambert, Peter; Van de Walle, Rik

    2008-04-01

    Distributed video coding is a new video coding paradigm that shifts the computationally intensive motion estimation from encoder to decoder. This results in a lightweight encoder and a complex decoder, as opposed to predictive video coding schemes (e.g., MPEG-x and H.26x) with a complex encoder and a lightweight decoder. Both schemes, however, lack the ability to adapt to varying complexity constraints imposed by encoder and decoder, which is essential for applications targeting a wide range of devices with different complexity constraints or applications with temporarily variable complexity constraints. Moreover, the effect of complexity adaptation on overall compression performance is of great importance and has not yet been investigated. To address this need, we have developed a video coding system that can adapt itself to complexity constraints by dynamically sharing the motion estimation computations between both components. On this system we have studied the effect of the complexity distribution on compression performance. This paper describes how motion estimation can be shared using heuristic dynamic complexity and how the distribution of complexity affects the overall compression performance of the system. The results show that complexity can indeed be shared between encoder and decoder in an efficient way at acceptable rate-distortion performance.

  17. Efficient Foreground Extraction From HEVC Compressed Video for Application to Real-Time Analysis of Surveillance 'Big' Data.

    PubMed

    Dey, Bhaskar; Kundu, Malay K

    2015-11-01

    While surveillance video is the biggest source of unstructured Big Data today, the emergence of the high-efficiency video coding (HEVC) standard is poised to have a huge role in lowering the costs associated with transmission and storage. Among the benefits of HEVC over the legacy MPEG-4 Advanced Video Coding (AVC) is a bitrate reduction of 40 percent or more at the same visual quality. Given the bandwidth limitations, video data are compressed essentially by removing the spatial and temporal correlations that exist in uncompressed form. This causes compressed data, which are already de-correlated, to serve as a vital resource for machine learning with significantly fewer samples for training. In this paper, an efficient approach to foreground extraction/segmentation is proposed using novel spatio-temporal de-correlated block features extracted directly from the HEVC compressed video. Most related techniques, in contrast, work on uncompressed images, demanding significant storage and computational resources not only for the decoding process prior to initialization but also for the feature selection/extraction and background modeling stage following it. The proposed approach has been qualitatively and quantitatively evaluated against several other state-of-the-art methods. PMID:26087487
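
    The block-feature idea can be caricatured as follows: maintain a running-average background model over one scalar feature per block (for instance, a motion or bit-count statistic parsed from the compressed stream) and flag blocks that deviate from it. This is an illustrative stand-in, not the paper's spatio-temporal de-correlated features.

```python
def update_background(bg, feat, alpha=0.05):
    """Exponential running average of per-block features."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, feat)]

def foreground_blocks(bg, feat, thresh=1.0):
    """Mark blocks whose current feature deviates from the model."""
    return [abs(f - b) > thresh for f, b in zip(bg, feat)]
```

    Because the features come straight from the compressed stream, no full decode is needed before segmentation, which is the storage and compute saving the abstract emphasizes.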

  18. Secure Video Surveillance System Acquisition Software

    SciTech Connect

    2009-12-04

    The SVSS Acquisition Software collects and displays video images from two cameras through a VPN and stores the images on a collection controller. The software is configured to allow a user to enter a time window to display up to 2 1/2 hours of video for review. The software collects images from the cameras at a rate of 1 image per second and automatically deletes images older than 3 hours. The software operates in a Linux environment and can be run in a virtual machine on Windows XP. The Sandia software integrates the different COTS software packages to build the video review system.

  20. Transmission of scalable video over networks

    NASA Astrophysics Data System (ADS)

    Shi, Xuli; Zhang, ZhaoYang

    2003-09-01

    In this paper, we propose a new object-based coding algorithm that uses the wavelet transform in place of the FGS image coding algorithm in MPEG-4. The new object-based coding algorithm combines motion estimation with an object-based 3-D wavelet transform for video coding in order to fully exploit redundancy in the time domain. The shape-adaptive algorithm is based on a modified boundary extension method for the lifting scheme. A sequence of VOPs is fed into the motion-compensated lifting (MCLIFT) wavelet coder, which first decomposes the VOPs temporally through the MCLIFT filter and then decomposes them spatially via a shape-adaptive lifting wavelet transform (SA-TWT). We encode the video and represent the stream as a multilayer bit stream. An integrated transport-decoder buffer ensures that the video is transmitted continuously. Lost packets can be recovered through retransmission.
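
    The lifting scheme the abstract builds on can be illustrated with the Haar wavelet: a predict step forms the details from the odd samples and an update step keeps the approximation equal to the pair means. Boundary extension, the part the paper modifies for arbitrary object shapes, is omitted here.

```python
def haar_lift_forward(x):
    """One Haar level via lifting; input length must be even."""
    even, odd = x[0::2], x[1::2]
    detail = [o - e for o, e in zip(odd, even)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Undo the update, then the predict, then re-interleave."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out
```

    Lifting's appeal is that each step is trivially invertible by subtracting what was added, which is also what makes shape-adaptive variants tractable: only the predict/update stencils near object boundaries need modification.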

  1. The Video Book.

    ERIC Educational Resources Information Center

    Clendenin, Bruce

    This book provides a comprehensive step-by-step learning guide to video production. It begins with camera equipment, both still and video. It then describes how to reassemble the video and build a final product out of "video blocks," and discusses multiple-source configurations, which are required for professional level productions of live shows.…

  2. No Fuss Video

    ERIC Educational Resources Information Center

    Doyle, Al

    2006-01-01

    Ever since video became readily available with the advent of the VCR, educators have been clamoring for easier ways to integrate the medium into the classroom. Today, thanks to broadband access and ever-expanding offerings, engaging students with high-quality video has never been easier. Video-on-demand (VOD) services provide bite-size video clips…

  3. Characterization of social video

    NASA Astrophysics Data System (ADS)

    Ostrowski, Jeffrey R.; Sarhan, Nabil J.

    2009-01-01

    The popularity of social media has grown dramatically over the World Wide Web. In this paper, we analyze the video popularity distribution of well-known social video websites (YouTube, Google Video, and the AOL Truveo Video Search engine) and characterize their workload. We identify trends in the categories, lengths, and formats of those videos, as well as characterize the evolution of those videos over time. We further provide an extensive analysis and comparison of video content amongst the main regions of the world.

  4. Video conferencing made easy

    NASA Technical Reports Server (NTRS)

    Larsen, D. Gail; Schwieder, Paul R.

    1993-01-01

    Network video conferencing is advancing rapidly throughout the nation, and the Idaho National Engineering Laboratory (INEL), a Department of Energy (DOE) facility, is at the forefront of the development. Engineers at INEL/EG&G designed and installed a unique DOE video conferencing system offering many outstanding features, including true multipoint conferencing; user-friendly design and operation, with no full-time operators required; and the potential for cost-effective expansion of the system. One area where INEL/EG&G engineers made a significant contribution to video conferencing was the development of effective, user-friendly, end-station-driven scheduling software. A PC at each user site is used to schedule conferences via a windows package. This software interface provides information to users concerning conference availability, scheduling, initiation, and termination. The menus are mouse-controlled. Once a conference is scheduled, a workstation at the hubs monitors the network to initiate all scheduled conferences. No active operator participation is required once a user schedules a conference through the local PC; the workstation automatically initiates and terminates the conference as scheduled. As each conference is scheduled, hard-copy notification is also printed at each participating site. Video conferencing is the wave of the future. The use of these user-friendly systems will save millions in lost productivity and travel costs throughout the nation. The ease of operation and conference scheduling will play a key role in the extent to which industry uses this new technology. INEL/EG&G has developed a prototype scheduling system for both commercial and federal government use.

  5. Mise en situation analogique, ou de la video a un niveau avance (Real Life Situations, or the Use of Visuals at the Advanced Level).

    ERIC Educational Resources Information Center

    Marino, Ingrid

    1982-01-01

    Cognitive theories normally applied to listening comprehension are applied to the use of visuals with mass media broadcasts for second language instruction: the visual image as advance organizer, as reinforcement of concrete information, and as an association with abstract information. A related study in Germany is cited and illustrated. (MSE)

  6. Code inspection instructional validation

    NASA Technical Reports Server (NTRS)

    Orr, Kay; Stancil, Shirley

    1992-01-01

    The Shuttle Data Systems Branch (SDSB) of the Flight Data Systems Division (FDSD) at Johnson Space Center contracted with Southwest Research Institute (SwRI) to validate the effectiveness of an interactive video course on the code inspection process. The purpose of this project was to determine if this course could be effective for teaching NASA analysts the process of code inspection. In addition, NASA was interested in the effectiveness of this unique type of instruction (Digital Video Interactive), for providing training on software processes. This study found the Carnegie Mellon course, 'A Cure for the Common Code', effective for teaching the process of code inspection. In addition, analysts prefer learning with this method of instruction, or this method in combination with other methods. As is, the course is definitely better than no course at all; however, findings indicate changes are needed. Following are conclusions of this study. (1) The course is instructionally effective. (2) The simulation has a positive effect on student's confidence in his ability to apply new knowledge. (3) Analysts like the course and prefer this method of training, or this method in combination with current methods of training in code inspection, over the way training is currently being conducted. (4) Analysts responded favorably to information presented through scenarios incorporating full motion video. (5) Some course content needs to be changed. (6) Some content needs to be added to the course. SwRI believes this study indicates interactive video instruction combined with simulation is effective for teaching software processes. Based on the conclusions of this study, SwRI has outlined seven options for NASA to consider. SwRI recommends the option which involves creation of new source code and data files, but uses much of the existing content and design from the current course. Although this option involves a significant software development effort, SwRI believes this option

  7. Video imaging systems: A survey

    SciTech Connect

    Kefauver, H.L.

    1989-07-01

    Recent technological advances in the field of electronics have made video imaging a viable substitute for the traditional Polaroid (trademark) picture used to create photo ID credentials. New families of hardware and software products, when integrated into a system, provide an exciting and powerful tool which can be used simply to make badges or to enhance an access control system. This report is designed to make the reader aware of who is currently in this business and to compare their capabilities.

  8. Future trends in image coding

    NASA Astrophysics Data System (ADS)

    Habibi, Ali

    1993-01-01

    The objective of this article is to present a discussion on the future of image data compression in the next two decades. It is virtually impossible to predict with any degree of certainty the breakthroughs in theory and developments, the milestones in the advancement of technology, and the success of upcoming commercial products in the marketplace, which will be the main factors in establishing the future state of image coding. What we propose to do, instead, is look back at the progress in image coding during the last two decades and assess the state of the art in image coding today. Then, by observing the trends in development of theory, software, and hardware, coupled with the future needs for use and dissemination of imagery data and the constraints on the bandwidth and capacity of various networks, we predict the future state of image coding. What seems certain today is the growing need for bandwidth compression. Television uses a technology which is half a century old and is ready to be replaced by high definition television with an extremely high digital bandwidth. Smart telephones coupled with personal computers and TV monitors accommodating both printed and video data will be common in homes and businesses within the next decade. Efficient and compact digital processing modules using developing technologies will make bandwidth-compressed imagery the cheap and preferred alternative in satellite and on-board applications. In view of the above needs, we expect increased activity in the development of theory, software, special-purpose chips, and hardware for image bandwidth compression in the next two decades. The following sections summarize the future trends in these areas.

  9. Rate control scheme for consistent video quality in scalable video codec.

    PubMed

    Seo, Chan-Won; Han, Jong-Ki; Nguyen, Truong Q

    2011-08-01

    Multimedia data delivered to mobile devices over wireless channels or the Internet are complicated by bandwidth fluctuation and the variety of mobile devices. Scalable video coding has been developed as an extension of H.264/AVC to solve this problem. Since scalable video codec provides various scalabilities to adapt the bitstream for the channel conditions and terminal types, scalable codec is one of the useful codecs for wired or wireless multimedia communication systems, such as IPTV and streaming services. In such scalable multimedia communication systems, video quality fluctuation degrades the visual perception significantly. It is important to efficiently use the target bits in order to maintain a consistent video quality or achieve a small distortion variation throughout the whole video sequence. The scheme proposed in this paper provides a useful function to control video quality in applications supporting scalability, whereas conventional schemes have been proposed to control video quality in the H.264 and MPEG-4 systems. The proposed algorithm decides the quantization parameter of the enhancement layer to maintain a consistent video quality throughout the entire sequence. The video quality of the enhancement layer is controlled based on a closed-form formula which utilizes the residual data and quantization error of the base layer. The simulation results show that the proposed algorithm controls the frame quality of the enhancement layer in a simple operation, where the parameter decision algorithm is applied to each frame. PMID:21411408
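    The abstract does not give the paper's closed-form QP formula, but the idea of steering the enhancement-layer quantization parameter toward a constant-quality target can be sketched with a simple proportional update. All names, gains, and the linear toy distortion model below are illustrative assumptions, not the paper's algorithm:

    ```python
    # Hypothetical sketch: keep per-frame quality consistent by nudging the
    # enhancement-layer QP toward a distortion target. The linear distortion
    # model and the gain are made-up stand-ins for the paper's closed-form
    # per-frame QP decision.

    def next_qp(qp, distortion, target, gain=2.0, qp_min=0, qp_max=51):
        """Higher distortion than target -> lower QP (finer quantization).
        Uses the H.264-style QP range 0..51."""
        qp_new = qp - gain * (distortion - target)
        return int(max(qp_min, min(qp_max, round(qp_new))))

    qp, target = 30, 4.0
    history = []
    for _ in range(20):
        distortion = 0.15 * qp        # toy model: distortion grows with QP
        qp = next_qp(qp, distortion, target)
        history.append(qp)
    # After a few frames the QP settles where 0.15 * qp is close to target,
    # i.e. the per-frame quality stops fluctuating.
    ```

    A real controller would measure distortion from the residual and base-layer quantization error rather than from a linear model, but the feedback structure is the same.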

  10. Video Browsing on Handheld Devices

    NASA Astrophysics Data System (ADS)

    Hürst, Wolfgang

    Recent improvements in processing power, storage space, and video codec development now enable users to play back video on their handheld devices in reasonable quality. However, given the form factor restrictions of such a mobile device, screen size still remains a natural limit and, as the term "handheld" implies, always will be a critical resource. This is true not only for video but for any data processed on such devices. For this reason, developers have come up with new and innovative ways to deal with large documents in such limited scenarios. For example, on the iPhone, innovative techniques such as flicking have been introduced to skim large lists of text (e.g. hundreds of entries in your music collection). Automatically adapting the zoom level to, for example, the width of table cells when double tapping on the screen enables reasonable browsing of web pages that were originally designed for large, desktop-PC-sized screens. A multi-touch interface allows you to easily zoom in and out of large text documents and images using two fingers. In the next section, we will illustrate that advanced techniques to browse large video files have been developed in the past years as well. However, if you look at state-of-the-art video players on mobile devices, normally just simple, VCR-like controls are supported (at least at the time of this writing) that only allow users to start, stop, and pause video playback. If supported at all, browsing and navigation functionality is often restricted to simple skipping of chapters via two single buttons for backward and forward navigation and a small and thus not very sensitive timeline slider.

  11. The Impact of Video Review on Supervisory Conferencing

    ERIC Educational Resources Information Center

    Baecher, Laura; McCormack, Bede

    2015-01-01

    This study investigated how video-based observation may alter the nature of post-observation talk between supervisors and teacher candidates. Audio-recorded post-observation conversations were coded using a conversation analysis framework and interpreted through the lens of interactional sociology. Findings suggest that video-based observations…

  12. Adaptive live multicast video streaming of SVC with UEP FEC

    NASA Astrophysics Data System (ADS)

    Lev, Avram; Lasry, Amir; Loants, Maoz; Hadar, Ofer

    2014-09-01

    Ideally, video streaming systems should provide the best quality video a user's device can handle without compromising on downloading speed. In this article, an improved video transmission system is presented which dynamically enhances the video quality based on a user's current network state and repairs errors from data lost in the video transmission. The system incorporates three main components: Scalable Video Coding (SVC) with three layers, multicast based on Receiver Layered Multicast (RLM), and an unequal error protection (UEP) Forward Error Correction (FEC) algorithm. The SVC provides an efficient method for providing different levels of video quality, stored as enhancement layers. In the presented system, a proportional-integral-derivative (PID) controller was implemented to dynamically adjust the video quality, adding or subtracting quality layers as appropriate. In addition, an FEC algorithm was added to compensate for data lost in transmission. A two-dimensional FEC was used, taken from the Pro-MPEG Code of Practice #3 release 2. Several bit error scenarios (step function, cosine wave) were simulated with different bandwidth sizes and error values. The suggested scheme, which includes SVC video encoding with 3 layers over IP multicast with an unequal FEC algorithm, was investigated under different channel conditions, variable bandwidths, and different bit error rates. The results indicate improvement of the video quality in terms of PSNR over previous transmission schemes.
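    The layer-adaptation idea can be roughly illustrated with a PID controller that tracks bandwidth headroom and maps it to a number of subscribed SVC layers. The gains, layer bitrates, and clamping below are assumptions made for illustration, not values from the article:

    ```python
    # Illustrative PID-based layer adaptation: positive error means spare
    # bandwidth, so the controller pushes the continuous "layer level" up;
    # negative error pulls it down. Gains and rates are hypothetical.

    class LayerPID:
        def __init__(self, kp=0.8, ki=0.1, kd=0.2):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, available_kbps, required_kbps):
            error = (available_kbps - required_kbps) / max(required_kbps, 1)
            self.integral += error
            derivative = error - self.prev_error
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    LAYER_RATES = {1: 400, 2: 800, 3: 1600}   # kbps for cumulative layer sets

    def choose_layers(level):
        """Quantize the continuous level to 1..3 subscribed layers."""
        return max(1, min(3, round(level)))

    pid, level = LayerPID(), 1.0
    for bw in [500, 900, 1800, 1700, 600]:    # measured bandwidth trace
        layers = choose_layers(level)
        level += pid.update(bw, LAYER_RATES[layers])
        level = max(1.0, min(3.0, level))     # keep level in the valid range
    ```

    The derivative term dampens oscillation when the bandwidth estimate fluctuates, which is exactly the quality-fluctuation problem the adaptive scheme targets.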

  13. Do the Depictions of Sexual Attire and Sexual Behavior in Music Videos Differ Based on Video Network and Character Gender?

    ERIC Educational Resources Information Center

    King, Keith; Laake, Rebecca A.; Bernard, Amy

    2006-01-01

    This study examined the sexual messages depicted in music videos aired on MTV, MTV2, BET, and GAC from August 2, 2004 to August 15, 2004. One-hour segments of music videos were taped daily for two weeks. Depictions of sexual attire and sexual behavior were analyzed via a four-page coding sheet (interrater-reliability = 0.93). Results indicated…

  14. Video Player Keyboard Shortcuts

    MedlinePlus

    ... https://www.nlm.nih.gov/medlineplus/hotkeys.html Video Player Keyboard Shortcuts To use the sharing features ... of accessible keyboard shortcuts for our latest Health videos on the MedlinePlus site. These shortcuts allow you ...

  15. Video Screen Capture Basics

    ERIC Educational Resources Information Center

    Dunbar, Laura

    2014-01-01

    This article is an introduction to video screen capture. Basic information of two software programs, QuickTime for Mac and BlueBerry Flashback Express for PC, are also discussed. Practical applications for video screen capture are given.

  16. Video watermarking for mobile phone applications

    NASA Astrophysics Data System (ADS)

    Mitrea, M.; Duta, S.; Petrescu, M.; Preteux, F.

    2005-08-01

    Nowadays, alongside the traditional voice signal, music, video, and 3D characters tend to become common data to be run, stored, and/or processed on mobile phones. Hence, protecting their related intellectual property rights also becomes a crucial issue. The video sequences involved in such applications are generally coded at very low bit rates. The present paper starts by presenting an accurate statistical investigation of such video as well as of a very dangerous attack (the StirMark attack). The obtained results are turned into practice when adapting a spread spectrum watermarking method to such applications. The informed watermarking approach was also considered: an outstanding method belonging to this paradigm has been adapted and re-evaluated under the low-rate video constraint. The experiments were conducted in collaboration with the SFR mobile services provider in France. They also allow a comparison between the spread spectrum and informed embedding techniques.
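    The classical spread-spectrum principle mentioned above can be sketched as follows: a keyed pseudorandom ±1 pattern is added to a set of transform coefficients, and detection correlates the received coefficients against the same keyed pattern. The embedding strength, key, and Gaussian host model below are illustrative, not the adapted low-bitrate method from the paper:

    ```python
    # Minimal spread-spectrum watermark sketch (textbook form, not the
    # paper's adapted method): embed a keyed +/-1 pattern, detect by
    # normalized correlation.
    import numpy as np

    def pattern(key, n):
        return np.random.default_rng(key).choice([-1.0, 1.0], size=n)

    def embed(coeffs, key, alpha=2.0):
        """Add the keyed pattern scaled by strength alpha."""
        return coeffs + alpha * pattern(key, coeffs.size)

    def detect(coeffs, key):
        """Correlation statistic: near alpha if marked with this key."""
        p = pattern(key, coeffs.size)
        return float(np.dot(coeffs, p) / coeffs.size)

    rng = np.random.default_rng(0)
    host = rng.normal(0.0, 5.0, size=4096)   # stand-in for transform coeffs
    marked = embed(host, key=42)
    # detect(marked, 42) is near alpha; a wrong key gives a value near 0.
    ```

    The spreading over many coefficients is what gives the scheme robustness: an attack must disturb the correlation across the whole coefficient set, not just a few values.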

  17. Privacy enabling technology for video surveillance

    NASA Astrophysics Data System (ADS)

    Dufaux, Frédéric; Ouaret, Mourad; Abdeljaoued, Yousri; Navarro, Alfonso; Vergnenègre, Fabrice; Ebrahimi, Touradj

    2006-05-01

    In this paper, we address the problem of privacy in video surveillance. We propose an efficient solution based on transform-domain scrambling of regions of interest in a video sequence. More specifically, the sign of selected transform coefficients is flipped during encoding. We address more specifically the case of Motion JPEG 2000. Simulation results show that the technique can be successfully applied to conceal information in regions of interest in the scene while providing a good level of security. Furthermore, the scrambling is flexible and allows adjusting the amount of distortion introduced. This is achieved with a small impact on coding performance and a negligible increase in computational complexity. In the proposed video surveillance system, heterogeneous clients can remotely access the system through the Internet or 2G/3G mobile phone networks. Thanks to the inherently scalable Motion JPEG 2000 codestream, the server is able to adapt the resolution and bandwidth of the delivered video depending on the usage environment of the client.
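    The sign-flipping idea is simple enough to sketch directly. In the toy version below, a keyed pseudorandom mask selects which coefficients of a region of interest get their sign flipped; applying the same keyed flips a second time restores the original. Block size and selection probability are illustrative, not the Motion JPEG 2000 specifics:

    ```python
    # Sketch of transform-domain scrambling by sign flipping. The key
    # seeds a PRNG, so authorized decoders can regenerate the same flip
    # mask and undo the scrambling.
    import numpy as np

    def scramble(coeffs, key):
        """Flip the sign of pseudorandomly selected coefficients.
        Self-inverse: running it again with the same key unflips."""
        flips = np.random.default_rng(key).random(coeffs.shape) < 0.5
        out = coeffs.copy()
        out[flips] = -out[flips]
        return out

    roi = np.arange(-8.0, 8.0).reshape(4, 4)   # toy coefficient block
    protected = scramble(roi, key=1234)        # concealed region
    restored = scramble(protected, key=1234)   # same keyed flips cancel
    ```

    Because only signs change, the coefficient magnitudes (and hence the entropy-coded bitstream size) are barely affected, which matches the abstract's claim of small impact on coding performance.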

  18. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) verification and validation plan. version 1.

    SciTech Connect

    Bartlett, Roscoe Ainsworth; Arguello, Jose Guadalupe, Jr.; Urbina, Angel; Bouchard, Julie F.; Edwards, Harold Carter; Freeze, Geoffrey A.; Knupp, Patrick Michael; Wang, Yifeng; Schultz, Peter Andrew; Howard, Robert; McCornack, Marjorie Turner

    2011-01-01

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. To meet this objective, NEAMS Waste IPSC M&S capabilities will be applied to challenging spatial domains, temporal domains, multiphysics couplings, and multiscale couplings. A strategic verification and validation (V&V) goal is to establish evidence-based metrics for the level of confidence in M&S codes and capabilities. Because it is economically impractical to apply the maximum V&V rigor to each and every M&S capability, M&S capabilities will be ranked for their impact on the performance assessments of various components of the repository systems. Those M&S capabilities with greater impact will require a greater level of confidence and a correspondingly greater investment in V&V. This report includes five major components: (1) a background summary of the NEAMS Waste IPSC to emphasize M&S challenges; (2) the conceptual foundation for verification, validation, and confidence assessment of NEAMS Waste IPSC M&S capabilities; (3) specifications for the planned verification, validation, and confidence-assessment practices; (4) specifications for the planned evidence information management system; and (5) a path forward for the incremental implementation of this V&V plan.

  19. TEM Video Compressive Sensing

    SciTech Connect

    Stevens, Andrew J.; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-02

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since that publication, there has been enormous growth in the application of CS and the development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6] and to reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental
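    The acquisition side of the coded-aperture approach can be sketched as a forward model: each of T sub-frames is modulated by its own binary mask, and the masked sub-frames are summed into the single frame the camera actually reads out. The mask statistics and sizes below are illustrative, and the CS inversion step that recovers the sub-frames is omitted:

    ```python
    # Sketch of the coded-aperture measurement model: T masked sub-frames
    # integrate into one camera frame. A CS solver would later recover
    # the sub-frames from `coded` and `masks`.
    import numpy as np

    def coded_capture(subframes, rng):
        """subframes: (T, H, W) array. Returns the coded frame and masks."""
        T, H, W = subframes.shape
        masks = rng.integers(0, 2, size=(T, H, W)).astype(float)
        coded = (masks * subframes).sum(axis=0)   # single integrated frame
        return coded, masks

    rng = np.random.default_rng(0)
    video = rng.random((8, 32, 32))   # 8 sub-frames within one exposure
    coded, masks = coded_capture(video, rng)
    # One H x W readout now carries compressed information about all
    # 8 sub-frames, an 8x effective frame-rate gain for this camera.
    ```

    This is why the hardware modification is cheap: only a fast spatial mask in front of the detector is needed, while the camera itself still runs at its native frame rate.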

  20. Video Event Detection Framework on Large-Scale Video Data

    ERIC Educational Resources Information Center

    Park, Dong-Jun

    2011-01-01

    Detection of events and actions in video entails substantial processing of very large, even open-ended, video streams. Video data present a unique challenge for the information retrieval community because properly representing video events is challenging. We propose a novel approach to analyze temporal aspects of video data. We consider video data…

  1. Developing a Promotional Video

    ERIC Educational Resources Information Center

    Epley, Hannah K.

    2014-01-01

    There is a need for Extension professionals to show clientele the benefits of their program. This article shares how promotional videos are one way of reaching audiences online. An example is given on how a promotional video has been used and developed using iMovie software. Tips are offered for how professionals can create a promotional video and…

  2. Video-Level Monitor

    NASA Technical Reports Server (NTRS)

    Gregory, Ray W.

    1993-01-01

    Video-level monitor developed to provide full-scene monitoring of video and indicates level of brightest portion. Circuit designed nonspecific and can be inserted in any closed-circuit camera system utilizing RS170 or RS330 synchronization and standard CCTV video levels. System made of readily available, off-the-shelf components. Several units are in service.

  3. Video: Modalities and Methodologies

    ERIC Educational Resources Information Center

    Hadfield, Mark; Haw, Kaye

    2012-01-01

    In this article, we set out to explore what we describe as the use of video in various modalities. For us, modality is a synthesizing construct that draws together and differentiates between the notion of "video" both as a method and as a methodology. It encompasses the use of the term video as both product and process, and as a data collection…

  4. Secure video communications system

    DOEpatents

    Smith, Robert L.

    1991-01-01

    A secure video communications system having at least one command network formed by a combination of subsystems. The combination of subsystems to include a video subsystem, an audio subsystem, a communications subsystem, and a control subsystem. The video communications system to be window driven and mouse operated, and having the ability to allow for secure point-to-point real-time teleconferencing.

  5. Video Self-Modeling

    ERIC Educational Resources Information Center

    Buggey, Tom; Ogle, Lindsey

    2012-01-01

    Video self-modeling (VSM) first appeared on the psychology and education stage in the early 1970s. The practical applications of VSM were limited by lack of access to tools for editing video, which is necessary for almost all self-modeling videos. Thus, VSM remained in the research domain until the advent of camcorders and VCR/DVD players and,…

  6. Video Cartridges and Cassettes.

    ERIC Educational Resources Information Center

    Kletter, Richard C.; Hudson, Heather

    The economic and social significance of video cassettes (viewer-controlled playback system) is explored in this report. The potential effect of video cassettes on industrial training, education, libraries, and television is analyzed in conjunction with the anticipated hardware developments. The entire video cassette industry is reviewed firm by…

  7. Object and activity detection from aerial video

    NASA Astrophysics Data System (ADS)

    Se, Stephen; Shi, Feng; Liu, Xin; Ghazel, Mohsen

    2015-05-01

    Aerial video surveillance has advanced significantly in recent years, as inexpensive high-quality video cameras and airborne platforms are becoming more readily available. Video has become an indispensable part of military operations and is now becoming increasingly valuable in the civil and paramilitary sectors. Such surveillance capabilities are useful for battlefield intelligence and reconnaissance as well as for monitoring major events, border control, and critical infrastructure. However, monitoring this growing flood of video data requires significant effort from increasingly large numbers of video analysts. We have developed a suite of aerial video exploitation tools that can relieve analysts of mundane monitoring by detecting and alerting on objects and activities that require their attention. These tools can be used for both tactical applications and post-mission analytics so that the video data can be exploited more efficiently and in a timely manner. A feature-based approach and a pixel-based approach have been developed for Video Moving Target Indicator (VMTI) to detect moving objects in aerial video in real time. Such moving objects can then be classified by a person-detector algorithm which was trained with representative aerial data. We have also developed an activity detection tool that can detect activities of interest in aerial video, such as person-vehicle interaction. We have implemented a flexible framework so that new processing modules can be added easily. The Graphical User Interface (GUI) allows the user to configure the processing pipeline at run-time to evaluate different algorithms and parameters. Promising experimental results have been obtained using these tools, and an evaluation has been carried out to characterize their performance.
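    The abstract does not detail the VMTI algorithms, but a pixel-based moving-target indicator can be illustrated very loosely with a running-average background model (real aerial footage would first need platform-motion compensation, which is skipped here; all parameters are illustrative):

    ```python
    # Toy pixel-based moving-target indicator: flag pixels that deviate
    # from a slowly updated background estimate.
    import numpy as np

    def detect_motion(frames, alpha=0.1, thresh=0.25):
        """Return one boolean motion mask per frame after the first."""
        background = frames[0].astype(float)
        masks = []
        for frame in frames[1:]:
            mask = np.abs(frame - background) > thresh
            # blend the new frame into the background estimate
            background = (1 - alpha) * background + alpha * frame
            masks.append(mask)
        return masks

    # Toy sequence: a bright 2x2 "vehicle" moves across a static scene.
    frames = np.zeros((5, 16, 16))
    for t in range(5):
        frames[t, 7:9, 2 * t:2 * t + 2] = 1.0

    masks = detect_motion(frames)
    # Each mask flags the object's new position (and its trailing edge).
    ```

    A detector or classifier (such as the person detector mentioned above) would then run only inside the flagged regions, which is what makes the pipeline cheap enough for real-time use.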

  8. Facilitation and Teacher Behaviors: An Analysis of Literacy Teachers' Video-Case Discussions

    ERIC Educational Resources Information Center

    Arya, Poonam; Christ, Tanya; Chiu, Ming Ming

    2014-01-01

    This study explored how peer and professor facilitations are related to teachers' behaviors during video-case discussions. Fourteen inservice teachers produced 1,787 turns of conversation during 12 video-case discussions that were video-recorded, transcribed, coded, and analyzed with statistical discourse analysis. Professor facilitations…

  9. Links between Characteristics of Collaborative Peer Video Analysis Events and Literacy Teachers' Outcomes

    ERIC Educational Resources Information Center

    Arya, Poonam; Christ, Tanya; Chiu, Ming

    2015-01-01

    This study examined how characteristics of Collaborative Peer Video Analysis (CPVA) events are related to teachers' pedagogical outcomes. Data included 39 transcribed literacy video events, in which 14 in-service teachers engaged in discussions of their video clips. Emergent coding and Statistical Discourse Analysis were used to analyze the data.…

  10. Examining the Development of a Teacher Learning Community: The Case of a Video Club

    ERIC Educational Resources Information Center

    van Es, Elizabeth A.

    2012-01-01

    Learning communities have become a widespread model for teacher development. However, simply bringing teachers together does not ensure community development. This study offers a framework for the development of a teacher learning community in a video club. Qualitative coding of video data resulted in characterizing the evolution of the video club…

  11. Video Event Trigger

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.; Lichter, Michael J.

    1994-01-01

    Video event trigger (VET) processes video image data to generate trigger signal when image shows significant change like motion or appearance, disappearance, change in color, change in brightness, or dilation of object. System aids in efficient utilization of image-data-storage and image-data-processing equipment in applications in which many video frames show no changes and it would be wasteful to record and analyze all frames when only relatively few show changes of interest. Applications include video recording of automobile crash tests, automated video monitoring of entrances, exits, parking lots, and secure areas.
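    A minimal software analogue of such a trigger compares each frame with the previous one and asserts the trigger when the mean absolute change exceeds a threshold. The threshold and frame sizes below are illustrative, not the VET hardware's parameters:

    ```python
    # Toy event trigger: fire when the average per-pixel change between
    # consecutive frames exceeds a threshold, so unchanged frames need
    # not be recorded or analyzed.
    import numpy as np

    def event_trigger(prev, curr, threshold=0.01):
        """Return True when the scene changed enough to be worth recording."""
        return float(np.mean(np.abs(curr.astype(float) - prev))) > threshold

    still = np.full((64, 64), 0.5)
    moved = still.copy()
    moved[10:20, 10:20] = 1.0        # an object appears in the scene

    quiet = event_trigger(still, still)   # nothing changed
    fired = event_trigger(still, moved)   # 100 of 4096 pixels jumped by 0.5
    ```

    Per-region or per-channel statistics could replace the global mean to detect the more specific changes the abstract lists (color, brightness, dilation).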

  12. A Novel Key-Frame Extraction Approach for Both Video Summary and Video Index

    PubMed Central

    Lei, Shaoshuai; Xie, Gang; Yan, Gaowei

    2014-01-01

    Existing key-frame extraction methods are basically video summary oriented; yet the index task of key-frames is ignored. This paper presents a novel key-frame extraction approach which can be available for both video summary and video index. First a dynamic distance separability algorithm is advanced to divide a shot into subshots based on semantic structure, and then appropriate key-frames are extracted in each subshot by SVD decomposition. Finally, three evaluation indicators are proposed to evaluate the performance of the new approach. Experimental results show that the proposed approach achieves good semantic structure for semantics-based video index and meanwhile produces video summary consistent with human perception. PMID:24757431
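    The SVD-based selection step can be illustrated in a simplified form: represent each frame of a subshot by a feature vector, stack the vectors into a matrix, and pick the frame best aligned with the dominant singular direction. The feature choice and selection rule here are stand-ins for the paper's method, not a reproduction of it:

    ```python
    # Simplified SVD key-frame selection for one subshot: the frame whose
    # feature vector projects most strongly onto the subshot's dominant
    # singular direction is taken as most representative.
    import numpy as np

    def key_frame_index(features):
        """features: (n_frames, d) matrix, one feature vector per frame."""
        u, s, vt = np.linalg.svd(features, full_matrices=False)
        scores = features @ vt[0]   # projection on dominant direction
        return int(np.argmax(np.abs(scores)))

    rng = np.random.default_rng(0)
    base = rng.random(16)           # e.g. a color histogram
    subshot = np.array([base + 0.05 * rng.standard_normal(16)
                        for _ in range(6)])
    idx = key_frame_index(subshot)  # index of the chosen key-frame
    ```

    Because the same frame both summarizes the subshot and anchors it for retrieval, one extraction pass can serve the summary and index tasks the paper targets.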

  13. Simulated performance results of the OMV video compression telemetry system

    NASA Technical Reports Server (NTRS)

    Ingels, Frank; Parker, Glenn; Thomas, Lee Ann

    1989-01-01

    The control system of NASA's Orbital Maneuvering Vehicle (OMV) will employ range/range-rate radar, a forward command link, and a compressed video return link. The video data is compressed by sampling every sixth frame; a rate of 5 frames/sec is adequate for OMV docking speeds. Further compression is obtained, albeit at the expense of spatial resolution, by averaging adjacent pixels. The remaining compression is achieved by differential pulse-code modulation and Huffman run-length encoding. A concatenated error-correction coding system is used to protect the compressed video data stream from channel errors.
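    The differential pulse-code modulation (DPCM) stage can be sketched in a few lines: transmit the first sample plus successive differences, which concentrates values near zero for the entropy coder (the Huffman run-length stage is not shown). This is the textbook form of DPCM, not the OMV implementation:

    ```python
    # Textbook DPCM sketch: encode a row of pixel values as an initial
    # sample plus differences, and decode by cumulative sum.
    def dpcm_encode(samples):
        out = [samples[0]]
        for prev, curr in zip(samples, samples[1:]):
            out.append(curr - prev)
        return out

    def dpcm_decode(diffs):
        out = [diffs[0]]
        for d in diffs[1:]:
            out.append(out[-1] + d)
        return out

    row = [120, 121, 121, 124, 124, 124, 119]
    encoded = dpcm_encode(row)      # [120, 1, 0, 3, 0, 0, -5]
    decoded = dpcm_decode(encoded)  # round-trips to the original row
    ```

    The small, frequently repeated difference values (especially runs of zeros) are exactly what Huffman run-length coding compresses well.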

  14. Computerized mega code recording.

    PubMed

    Burt, T W; Bock, H C

    1988-04-01

    A system has been developed to facilitate recording of advanced cardiac life support mega code testing scenarios. By scanning a paper "keyboard" using a bar code wand attached to a portable microcomputer, the person assigned to record the scenario can easily generate an accurate, complete, timed, and typewritten record of the given situations and the obtained responses. PMID:3354937

  15. Decision trees for denoising in H.264/AVC video sequences

    NASA Astrophysics Data System (ADS)

    Huchet, G.; Chouinard, J.-Y.; Wang, D.; Vincent, A.

    2008-01-01

    All existing video coding standards are based on block-wise motion compensation and the block-wise DCT. At high levels of quantization, block-wise motion compensation and transforms produce blocking artifacts in the decoded video, a form of distortion to which the human visual system is very sensitive. The latest video coding standard, H.264/AVC, introduces a deblocking filter to reduce the blocking artifacts. However, there is still visible distortion after the filtering when compared to the original video. In this paper, we propose a non-conventional filter to further reduce the distortion and to improve the decoded picture quality. Unlike conventional filters, the proposed filter is based on a machine learning algorithm (decision tree). The decision trees are used to classify the filter's inputs and select the best filter coefficients for them. Experimental results with the 4 × 4 DCT indicate that the filter holds promise in improving the quality of H.264/AVC video sequences.

  16. A multipath video delivery scheme over diffserv wireless LANs

    NASA Astrophysics Data System (ADS)

    Man, Hong; Li, Yang

    2004-01-01

    This paper presents a joint source coding and networking scheme for video delivery over ad hoc wireless local area networks. The objective is to improve the end-to-end video quality within the constraints of the physical network. The proposed video transport scheme effectively integrates several networking components, including load-aware multipath routing, class-based queuing (CBQ), and scalable (or layered) video source coding techniques. A typical progressive video coder, 3D-SPIHT, is used to generate multi-layer source data streams. The coded bitstreams are then segmented into multiple sub-streams, each with a different level of importance to the final video reconstruction. The underlying wireless ad hoc network is designed to support service differentiation. A contention-sensitive load-aware routing (CSLAR) protocol is proposed. The approach is to discover multiple routes between the source and the destination, and label each route with a load value which indicates its quality of service (QoS) characteristics. The video sub-streams are distributed among these paths according to their QoS priority. CBQ is also applied at all intermediate nodes, giving preference to important sub-streams. Through this approach, the scalable source coding techniques are incorporated with differentiated services (DiffServ) networking techniques so that overall system performance is effectively improved. Simulations have been conducted on the network simulator (ns-2). Both network-layer and application-layer performance are evaluated. Significant improvements over traditional ad hoc wireless network transport schemes have been observed.

  17. Video transmission on ATM networks. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung

    1993-01-01

    The broadband integrated services digital network (B-ISDN) is expected to provide high-speed and flexible multimedia applications. Multimedia includes data, graphics, image, voice, and video. Asynchronous transfer mode (ATM) is the adopted transport technique for B-ISDN and has the potential for providing a more efficient and integrated environment for multimedia. It is believed that most broadband applications will make heavy use of visual information. The prospect of widespread use of image and video communication has led to interest in coding algorithms for reducing bandwidth requirements and improving image quality. The major results of a study on the bridging of network transmission performance and video coding are: Using two representative video sequences, several video source models are developed. The fitness of these models is validated through the use of statistical tests and network queuing performance. A dual leaky bucket algorithm is proposed as an effective network policing function. The concept of the dual leaky bucket algorithm can be applied to a prioritized coding approach to achieve transmission efficiency. A mapping of the performance/control parameters at the network level into equivalent parameters at the video coding level is developed. Based on that, a complete set of principles for the design of video codecs for network transmission is proposed.
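    A dual leaky bucket policer can be sketched as two buckets checked together: one polices the sustained (average) rate and one polices the peak rate, and a cell conforms only if both have room. The rates, depths, and arrival trace below are hypothetical, not parameters from the thesis:

    ```python
    # Illustrative dual leaky bucket policer. Each bucket drains at its
    # rate; an arriving cell adds one unit to both buckets, and conforms
    # only if neither bucket would overflow.
    class Bucket:
        def __init__(self, rate, depth):
            self.rate, self.depth = rate, depth
            self.level, self.last = 0.0, 0.0

        def drain(self, t):
            self.level = max(0.0, self.level - (t - self.last) * self.rate)
            self.last = t

        def has_room(self):
            return self.level + 1.0 <= self.depth

    def police(sustained, peak, t):
        """A cell arriving at time t conforms iff both buckets accept it."""
        sustained.drain(t)
        peak.drain(t)
        if sustained.has_room() and peak.has_room():
            sustained.level += 1.0
            peak.level += 1.0
            return True
        return False

    sustained = Bucket(rate=1.0, depth=10.0)   # polices the average rate
    peak = Bucket(rate=5.0, depth=2.0)         # polices short bursts
    arrivals = [0.0, 0.05, 0.10, 0.15, 2.0]    # 4-cell burst, then a pause
    verdicts = [police(sustained, peak, t) for t in arrivals]
    # The shallow peak bucket rejects the tail of the burst even though
    # the long-term average is still well within the sustained limit.
    ```

    In the prioritized-coding application mentioned above, nonconforming cells would carry the less important video layers, so policing degrades quality gracefully instead of dropping arbitrary cells.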

  18. Using Video Feedback to Improve Horseback-Riding Skills

    ERIC Educational Resources Information Center

    Kelley, Heather; Miltenberger, Raymond G.

    2016-01-01

    This study used video feedback to improve the horseback-riding skills of advanced beginning riders. We focused on 3 skill sets: those used in jumping over obstacles, dressage riding on the flat, and jumping position riding on the flat. Baseline consisted of standard lesson procedures. Intervention consisted of video feedback in which a recorded…

  19. Stereoscopic Video Microscope

    NASA Astrophysics Data System (ADS)

    Butterfield, James F.

    1980-11-01

    The new electronic technology of three-dimensional video combined with the established science of microscopy has created a new instrument, the Stereoscopic Video Microscope. The specimen is illuminated so the stereoscopic objective lens focuses the stereo-pair of images side-by-side on the video camera's pick-up tube. The resulting electronic signal can be enhanced, digitized, colorized, quantified, its polarity reversed, and its gray scale expanded nonlinearly. The signal can be transmitted over distances and can be stored on video tape for later playback. The electronic signal is converted to a stereo-pair of visual images on the video monitor's cathode-ray tube. A stereo-hood is used to fuse the two images for three-dimensional viewing. The conventional optical microscope has definite limitations, many of which can be eliminated by converting the optical image to an electronic signal in the video microscope. The principal advantages of the Stereoscopic Video Microscope compared to the conventional optical microscope are: great ease of viewing; group viewing; ability to easily record; and the capability of processing the electronic signal for video enhancement. The applications cover nearly all fields of microscopy. These include: microelectronics assembly, inspection, and research; biological, metallurgical, and chemical research; and other industrial and medical uses. The Stereoscopic Video Microscope is particularly useful for instructional and recordkeeping purposes. The video microscope can be monoscopic or three-dimensional.

  20. Chidi holographic video system

    NASA Astrophysics Data System (ADS)

    Nwodoh, Thomas A.; Benton, Stephen A.

    2000-03-01

    Holo-Chidi is a holographic video processing system designed at the MIT Media Laboratory for real-time computation of computer-generated holograms and the subsequent display of the holograms at video frame rates. Its processing engine is adapted from Chidi, a reconfigurable multimedia processing system used for real-time synthesis and analysis of digital video frames. Holo-Chidi consists of two main components: the sets of Chidi processor cards and the display video concentrator card. The processor cards are used for hologram computation, while the display video concentrator card acts as the frame buffer for the system. The display video concentrator also formats the computed holographic data and converts them to analog form for feeding the acousto-optic modulators of the Media Lab's Mark-II holographic display system. The display video concentrator card can display computed holograms from the Chidi cards loaded through its high-speed I/O interface port, or precomputed holograms loaded from a PC through the Universal Serial Bus port of its communications processor, at above video refresh rates. This paper discusses the design of the display video concentrator used to display holographic video in the Mark-II system.

  1. New video pupillometer

    NASA Astrophysics Data System (ADS)

    McLaren, Jay W.; Fjerstad, Wayne H.; Ness, Anders B.; Graham, Matthew D.; Brubaker, Richard F.

    1995-03-01

    An instrument is developed to measure pupil diameter from both eyes in the dark. Each eye is monitored with a small IR video camera and pupil diameters are calculated from the video signal at a rate of 60 Hz. A processing circuit, designed around a video digitizer, a digital logic circuit, and a microcomputer, extracts pupil diameter from each video frame in real time. This circuit also highlights the detected outline of the pupil on a monitored video image of each eye. Diameters are exported to a host computer that displays, graphs, analyzes, and stores them as pupillograms. The host computer controls pupil measurements and can turn on a yellow light emitting diode mounted just above each video camera to excite the pupillary light reflex. We present examples of pupillograms to illustrate how this instrument is used to measure the pupillary light reflex and pupil motility in the dark.
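
    The per-frame extraction described above can be sketched as a threshold-and-measure step over the IR image. A minimal sketch, assuming the dark pupil is segmented by a simple gray-level threshold; the threshold value and synthetic frame are illustrative assumptions, not details of the instrument:

```python
import numpy as np

def pupil_diameter(frame, threshold=50):
    """Estimate pupil diameter (in pixels) from a single IR video frame.

    Under IR illumination the pupil is the darkest region, so we threshold
    the frame and take the widest row of the dark blob as its diameter.
    `threshold` is a hypothetical gray-level cutoff chosen for this sketch.
    """
    mask = frame < threshold          # dark pixels -> pupil candidates
    if not mask.any():
        return 0
    widths = mask.sum(axis=1)         # dark-pixel count per row
    return int(widths.max())          # widest row ~ pupil diameter

# Synthetic 60x60 frame: bright iris (value 200) with a dark disk of radius 10.
frame = np.full((60, 60), 200, dtype=np.uint8)
yy, xx = np.mgrid[:60, :60]
frame[(yy - 30) ** 2 + (xx - 30) ** 2 <= 10 ** 2] = 10

print(pupil_diameter(frame))  # -> 21 (the disk spans columns 20..40 inclusive)
```

    The instrument performs the equivalent measurement in a hardware circuit per video field at 60 Hz; the sketch only illustrates the geometry of the computation.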

  2. Video Toroid Cavity Imager

    SciTech Connect

    Gerald, Rex E. II; Sanchez, Jairo; Rathke, Jerome W.

    2004-08-10

    A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

  3. VideoANT: Extending Online Video Annotation beyond Content Delivery

    ERIC Educational Resources Information Center

    Hosack, Bradford

    2010-01-01

    This paper expands the boundaries of video annotation in education by outlining the need for extended interaction in online video use, identifying the challenges faced by existing video annotation tools, and introducing Video-ANT, a tool designed to create text-based annotations integrated within the time line of a video hosted online. Several…

  4. Video Eases End-of-Life Care Discussions

    Cancer.gov

    Patients with advanced cancer who watched a video that depicts options for end-of-life care were more certain of their end-of-life decision making than patients who only listened to a verbal narrative.

  5. Secure video communications systems

    SciTech Connect

    Smith, R.L.

    1991-10-08

    This patent describes a secure video communications system having at least one command network formed by a combination of subsystems, including a video subsystem, an audio subsystem, a communications subsystem, and a control subsystem. The video communications system is window driven and mouse operated, and allows secure point-to-point real-time teleconferencing.

  6. Sampling video compression system

    NASA Technical Reports Server (NTRS)

    Matsumoto, Y.; Lum, H. (Inventor)

    1977-01-01

    A system for transmitting a video signal of compressed bandwidth is described. The transmitting station is provided with circuitry for dividing a picture to be transmitted into a plurality of blocks containing a checkerboard pattern of picture elements. Video signals along corresponding diagonal rows of picture elements in the respective blocks are regularly sampled. A transmitter responsive to the output of the sampling circuitry is included for transmitting the sampled video signals of one frame at a reduced bandwidth over a communication channel. The receiving station is provided with a frame memory for temporarily storing transmitted video signals of one frame at the original high bandwidth frequency.
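
    A minimal sketch of the sampling idea, assuming a checkerboard subsampling pattern standing in for the patent's exact diagonal-row sampling; the receiver-side neighbour-averaging interpolation is an illustrative choice, not the patent's reconstruction method:

```python
import numpy as np

def checkerboard_sample(frame):
    """Transmit only half the picture elements -- those on the 'black
    squares' of a checkerboard -- halving the data to be sent."""
    r, c = np.indices(frame.shape)
    mask = (r + c) % 2 == 0
    return np.where(mask, frame, 0), mask

def reconstruct(sampled, mask):
    """Receiver sketch: fill each missing pixel with the average of its
    in-bounds 4-neighbours (every neighbour of a missing pixel was sent)."""
    h, w = sampled.shape
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ty = slice(max(-dy, 0), h + min(-dy, 0))   # target rows
        tx = slice(max(-dx, 0), w + min(-dx, 0))   # target cols
        sy = slice(max(dy, 0), h + min(dy, 0))     # neighbour rows
        sx = slice(max(dx, 0), w + min(dx, 0))     # neighbour cols
        acc[ty, tx] += sampled[sy, sx]
        cnt[ty, tx] += mask[sy, sx]
    out = sampled.astype(float)
    out[~mask] = acc[~mask] / cnt[~mask]
    return out

frame = np.add.outer(np.arange(8), np.arange(8)).astype(float)  # smooth ramp
sampled, mask = checkerboard_sample(frame)
out = reconstruct(sampled, mask)
print(np.abs(out - frame).max())  # small interpolation error, only at borders
```

    On smooth image content the interior is reconstructed exactly, which is why this kind of subsampling degrades pictures gracefully.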

  7. Enhanced video viewing from metadata

    NASA Astrophysics Data System (ADS)

    Janevski, Angel; McGee, Thomas; Agnihotri, Lalitha; Dimitrova, Nevenka

    2001-11-01

    Current advanced television concepts envision data broadcasting along with the video stream, which is used by interactive applications at the client end. In this case, these applications do not proactively personalize the experience and may not allow user requests for additional information. We propose content enhancement using automatic retrieval of additional information based on video content and user interests. Our paper describes Video Retriever Genie, a system that enhances content with additional information based on metadata that provides semantics for the content. The system is based on a digital TV (Philips TriMedia) platform. We enhance content through user queries that define information extraction tasks that retrieve information from the Web. We present several examples of content enhancement such as additional movie character/actor information, financial information and weather alerts. Our system builds a bridge between the traditional TV viewing and the domain of personal computing and Internet. The boundaries between these domains are dissolving and this system demonstrates one effective approach for content enhancement. In addition, we illustrate our discussion with examples from two existing standards - MPEG-7 and TV-Anytime.

  8. Clinical coding. Code breakers.

    PubMed

    Mathieson, Steve

    2005-02-24

    --The advent of payment by results has seen the role of the clinical coder pushed to the fore in England. --Examinations for a clinical coding qualification began in 1999. In 2004, approximately 200 people took the qualification. --Trusts are attracting people to the role by offering training from scratch or through modern apprenticeships. PMID:15768716

  9. The Video Guide. Second Edition.

    ERIC Educational Resources Information Center

    Bensinger, Charles

    Intended for both novice and experienced users, this guide is designed to inform and entertain the reader in unravelling the jargon surrounding video equipment and in following carefully delineated procedures for its use. Chapters include "Exploring the Video Universe," "A Grand Tour of Video Technology," "The Video System," "The Video Camera," "The…

  10. The Spaghetti City Video Manual; A Guide to Use, Repair, and Maintenance.

    ERIC Educational Resources Information Center

    1973

    Information on how to use, maintain, repair and modify portable video equipment is presented in this technical manual for nontechnicians. The four major sections are devoted to the theoretical foundations upon which video equipment is based, an introduction to video systems, basic maintenance, and advanced maintenance. The emphasis throughout the…

  11. Optical video disks with undulating tracks.

    PubMed

    Braat, J J; Bouwhuis, G

    1978-07-01

    The signal components of a video signal (luminance, color, and sound) are modulated on a main carrier and several subcarriers and then recorded on the video master disk. Apart from the signal distortion that can arise during master and disk manufacture, the optical readout of the disk also yields a nonlinear transfer of the signal. The result of nonlinearities is intermodulation between signal components. Intermodulation products affect the quality of the final TV picture. In this paper a method is described which reduces the contribution of the optical readout system to the intermodulation. An optical coding is introduced such that two signal components hardly influence one another. The spacing of the pits in the track direction carries the luminance information, while the undulation of the track carries the color or sound information. A quadrant photodetector positioned in the far field of the video disk restores the luminance and color or sound bands with a very low amount of intermodulation. PMID:20203718

  12. Codec and GOP Identification in Double Compressed Videos.

    PubMed

    Bestagini, Paolo; Milani, Simone; Tagliasacchi, Marco; Tubaro, Stefano

    2016-05-01

    Video content is routinely acquired and distributed in a digital compressed format. In many cases, the same video content is encoded multiple times. This is the typical scenario that arises when a video, originally encoded directly by the acquisition device, is then re-encoded, either after an editing operation, or when uploaded to a sharing website. The analysis of the bitstream reveals details of the last compression step (i.e., the codec adopted and the corresponding encoding parameters), while masking the previous compression history. Therefore, in this paper, we consider a processing chain of two coding steps, and we propose a method that exploits coding-based footprints to identify both the codec and the size of the group of pictures (GOPs) used in the first coding step. This sort of analysis is useful in video forensics, when the analyst is interested in determining the characteristics of the originating source device, and in video quality assessment, since quality is determined by the whole compression history. The proposed method relies on the fact that lossy coding is an (almost) idempotent operation. That is, re-encoding a video sequence with the same codec and coding parameters produces a sequence that is similar to the former. As a consequence, if the second codec in the chain does not significantly alter the sequence, it is possible to analyze this sort of similarity to identify the first codec and the adopted GOP size. The method was extensively validated on a very large data set of video sequences generated by encoding content with a diversity of codecs (MPEG-2, MPEG-4, H.264/AVC, and DIRAC) and different encoding parameters. In addition, a proof of concept showing that the proposed method can also be used on videos downloaded from YouTube is reported. PMID:26992023
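
    The idempotency property the method relies on can be demonstrated with a toy "codec": uniform scalar quantization standing in for a real encoder (the step sizes below are illustrative, not parameters from the paper):

```python
import numpy as np

def quantize(x, step):
    """Toy lossy codec: uniform scalar quantization with the given step."""
    return np.round(x / step) * step

rng = np.random.default_rng(0)
x = rng.uniform(0, 255, size=10_000)

once = quantize(x, 8)

# Idempotency: re-encoding with the SAME parameters changes nothing...
assert np.array_equal(quantize(once, 8), once)

# ...while re-encoding with DIFFERENT parameters leaves a measurable footprint.
mismatch = np.mean(quantize(once, 5) != once)
print(f"fraction of samples changed by a step-5 re-encode: {mismatch:.2f}")
```

    The paper builds on exactly this contrast: re-encoding the doubly compressed video with candidate codecs and parameters, and picking the configuration whose re-encode leaves the sequence (almost) unchanged.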

  13. User-oriented summary extraction for soccer video based on multimodal analysis

    NASA Astrophysics Data System (ADS)

    Liu, Huayong; Jiang, Shanshan; He, Tingting

    2011-11-01

    An advanced user-oriented summary extraction method for soccer video is proposed in this work. First, an algorithm for user-oriented summary extraction from soccer video is introduced: a novel approach that integrates multimodal analysis, including extraction and analysis of stadium features, moving-object features, audio features, and text features. From these features, the semantics of the soccer video and its highlight mode are obtained. The highlight positions can then be found and assembled by highlight degree to form the video summary. The experimental results for sports video of World Cup soccer games indicate that multimodal analysis is effective for soccer video browsing and retrieval.

  14. A Video Review.

    ERIC Educational Resources Information Center

    Tanner, Jacquelyn

    1983-01-01

    Describes where video productions can be found for use in the classroom: general sources, names and addresses of instructors who have produced materials for commercial distribution, names of instructors who have developed specific techniques for using video, names and addresses of companies that provide adaptable materials for the classroom, and…

  15. Video Discs in Education.

    ERIC Educational Resources Information Center

    Barker, Philip

    1986-01-01

    This discussion of the use of images in learning processes focuses on recent developments in optical storage disc technology, particularly compact disc read-only (CD-ROM) and optical video discs. Interactive video systems and user interfaces are described, and applications in education and industry in the United Kingdom are reviewed. (Author/LRW)

  16. Video Communication Program.

    ERIC Educational Resources Information Center

    Haynes, Leonard Stanley

    This thesis describes work done as part of the Video Console Indexing Project (VICI), a program to improve the quality and reduce the time and work involved in indexing documents. The objective of the work described was to design a video terminal system which could be connected to a main computer to provide rapid natural communication between the…

  17. Digital Video Editing

    ERIC Educational Resources Information Center

    McConnell, Terry

    2004-01-01

    Monica Adams, head librarian at Robinson Secondary in Fairfax County, Virginia, states that librarians should have the technical knowledge to support projects related to digital video editing. The process of digital video editing is described, along with the cables, storage issues, and the computer system and software involved.

  18. Creating Photomontage Videos

    ERIC Educational Resources Information Center

    Nitzberg, Kevan

    2008-01-01

    Several years ago, the author began exploring the use of digital film and video as an art-making media when he took over instructing the video computer art class at the high school where he teaches. He found numerous ways to integrate a variety of multimedia technologies and software with more traditional types of visual art processes and…

  19. The Value of Video

    ERIC Educational Resources Information Center

    Thompson, Douglas E.

    2011-01-01

    Video connects sight and sound, creating a composite experience greater than either alone. More than any other single technology, video is the most powerful way to communicate with others--and an ideal medium for sharing with others the vital learning occurring in music classrooms. In this article, the author leads readers through the process of…

  20. The Video Generation.

    ERIC Educational Resources Information Center

    Provenzo, Eugene F., Jr.

    1992-01-01

    Video games are neither neutral nor harmless but represent very specific social and symbolic constructs. Research on the social content of today's video games reveals that sex bias and gender stereotyping are widely evident throughout the Nintendo games. Violence and aggression also pervade the great majority of the games. (MLF)

  1. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2003-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.
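
    The per-block translation step described above can be sketched as exhaustive block matching; the sum-of-absolute-differences cost, block size, and search range below are illustrative choices for this sketch, not the patent's specification:

```python
import numpy as np

def block_translation(key_block, new_field, top, left, search=4):
    """Estimate the (dy, dx) translation of a key-field block into a new
    video field by exhaustive search over a small window, minimizing the
    sum of absolute differences (SAD)."""
    h, w = key_block.shape
    best, best_cost = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > new_field.shape[0] or x + w > new_field.shape[1]:
                continue  # candidate falls outside the new field
            cand = new_field[y:y + h, x:x + w].astype(int)
            cost = np.abs(cand - key_block.astype(int)).sum()
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best

# Synthetic test: shift a random field by (2, -3) and recover the motion.
rng = np.random.default_rng(1)
key = rng.integers(0, 256, (64, 64), dtype=np.uint8)
new = np.roll(np.roll(key, 2, axis=0), -3, axis=1)        # global translation
print(block_translation(key[16:32, 16:32], new, 16, 16))  # -> (2, -3)
```

    In the patent, translations of the nested pixel blocks are combined afterwards to estimate change in magnification and rotation as well; the sketch covers only the translation step.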

  2. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2002-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.

  3. Video Image Stabilization and Registration

    NASA Astrophysics Data System (ADS)

    Hathaway, David H.; Meyer, Paul J.

    2002-10-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.

  4. Video image position determination

    NASA Astrophysics Data System (ADS)

    Christensen, W.; Anderson, F. L.; Kortegaard, B. L.

    1990-04-01

    The present invention generally relates to the control of video and optical information and, more specifically, to control systems utilizing video images to provide control. Accurate control of video images and laser beams is becoming increasingly important as the use of lasers for machine, medical, and experimental processes escalates. In AURORA, an installation at Los Alamos National Laboratory dedicated to laser fusion research, it is necessary to precisely control the path and angle of up to 96 laser beams. This invention comprises an optical beam position controller in which a video camera captures an image of the beam in its video frames, and conveys those images to a processing board which calculates the centroid coordinates for the image. The image coordinates are used by motor controllers and stepper motors to position the beam in a predetermined alignment. In one embodiment, system noise, used in conjunction with Bernoulli trials, yields higher resolution centroid coordinates.
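
    The centroid computation at the core of the controller can be sketched as an intensity-weighted mean over the video frame (the synthetic Gaussian spot below is illustrative test data, not from the patent):

```python
import numpy as np

def beam_centroid(frame):
    """Intensity-weighted centroid (y, x) of a beam image -- the quantity
    the processing board feeds to the motor controllers."""
    frame = frame.astype(float)
    ys, xs = np.indices(frame.shape)
    total = frame.sum()
    return (ys * frame).sum() / total, (xs * frame).sum() / total

# Synthetic Gaussian spot centred at (12.0, 20.0) on a 32x48 frame.
yy, xx = np.indices((32, 48))
spot = np.exp(-((yy - 12.0) ** 2 + (xx - 20.0) ** 2) / 18.0)
cy, cx = beam_centroid(spot)
print(round(cy, 2), round(cx, 2))  # close to 12.0 and 20.0
```

    Note that the weighted mean is naturally sub-pixel, which is how a video frame of discrete pixels can still steer a beam to fine alignment.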

  5. Automatic Association of Chats and Video Tracks for Activity Learning and Recognition in Aerial Video Surveillance

    PubMed Central

    Hammoud, Riad I.; Sahin, Cem S.; Blasch, Erik P.; Rhodes, Bradley J.; Wang, Tao

    2014-01-01

    We describe two advanced video analysis techniques, including video-indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets affording an improvement in tracking from video data alone, leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports. PMID:25340453

  6. Automatic association of chats and video tracks for activity learning and recognition in aerial video surveillance.

    PubMed

    Hammoud, Riad I; Sahin, Cem S; Blasch, Erik P; Rhodes, Bradley J; Wang, Tao

    2014-01-01

    We describe two advanced video analysis techniques, including video-indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets affording an improvement in tracking from video data alone, leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports. PMID:25340453

  7. Advanced Architectures for Astrophysical Supercomputing

    NASA Astrophysics Data System (ADS)

    Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.

    2010-12-01

    Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding-up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.

  8. Game on, science - how video game technology may help biologists tackle visualization challenges.

    PubMed

    Lv, Zhihan; Tek, Alex; Da Silva, Franck; Empereur-mot, Charly; Chavent, Matthieu; Baaden, Marc

    2013-01-01

    The video games industry develops ever more advanced technologies to improve rendering, image quality, ergonomics and user experience of their creations providing very simple to use tools to design new games. In the molecular sciences, only a small number of experts with specialized know-how are able to design interactive visualization applications, typically static computer programs that cannot easily be modified. Are there lessons to be learned from video games? Could their technology help us explore new molecular graphics ideas and render graphics developments accessible to non-specialists? This approach points to an extension of open computer programs, not only providing access to the source code, but also delivering an easily modifiable and extensible scientific research tool. In this work, we will explore these questions using the Unity3D game engine to develop and prototype a biological network and molecular visualization application for subsequent use in research or education. We have compared several routines to represent spheres and links between them, using either built-in Unity3D features or our own implementation. These developments resulted in a stand-alone viewer capable of displaying molecular structures, surfaces, animated electrostatic field lines and biological networks with powerful, artistic and illustrative rendering methods. We consider this work as a proof of principle demonstrating that the functionalities of classical viewers and more advanced novel features could be implemented in substantially less time and with less development effort. Our prototype is easily modifiable and extensible and may serve others as starting point and platform for their developments. A webserver example, standalone versions for MacOS X, Linux and Windows, source code, screen shots, videos and documentation are available at the address: http://unitymol.sourceforge.net/. PMID:23483961

  9. Game On, Science - How Video Game Technology May Help Biologists Tackle Visualization Challenges

    PubMed Central

    Da Silva, Franck; Empereur-mot, Charly; Chavent, Matthieu; Baaden, Marc

    2013-01-01

    The video games industry develops ever more advanced technologies to improve rendering, image quality, ergonomics and user experience of their creations providing very simple to use tools to design new games. In the molecular sciences, only a small number of experts with specialized know-how are able to design interactive visualization applications, typically static computer programs that cannot easily be modified. Are there lessons to be learned from video games? Could their technology help us explore new molecular graphics ideas and render graphics developments accessible to non-specialists? This approach points to an extension of open computer programs, not only providing access to the source code, but also delivering an easily modifiable and extensible scientific research tool. In this work, we will explore these questions using the Unity3D game engine to develop and prototype a biological network and molecular visualization application for subsequent use in research or education. We have compared several routines to represent spheres and links between them, using either built-in Unity3D features or our own implementation. These developments resulted in a stand-alone viewer capable of displaying molecular structures, surfaces, animated electrostatic field lines and biological networks with powerful, artistic and illustrative rendering methods. We consider this work as a proof of principle demonstrating that the functionalities of classical viewers and more advanced novel features could be implemented in substantially less time and with less development effort. Our prototype is easily modifiable and extensible and may serve others as starting point and platform for their developments. A webserver example, standalone versions for MacOS X, Linux and Windows, source code, screen shots, videos and documentation are available at the address: http://unitymol.sourceforge.net/. PMID:23483961

  10. Stereoscopic video compression using temporal scalability

    NASA Astrophysics Data System (ADS)

    Puri, Atul; Kollarits, Richard V.; Haskell, Barry G.

    1995-04-01

    Despite the fact that human ability to perceive a high degree of realism is directly related to our ability to perceive depth accurately in a scene, most of the commonly used imaging and display technologies are able to provide only a 2D rendering of the 3D real world. Many current as well as emerging applications in areas of entertainment, remote operations, industrial and medicine can benefit from the depth perception offered by stereoscopic video systems which employ two views of a scene imaged under the constraints imposed by human visual system. Among the many challenges to be overcome for practical realization and widespread use of 3D/stereoscopic systems are efficient techniques for digital compression of enormous amounts of data while maintaining compatibility with normal video decoding and display systems. After a brief discussion on the relationship of digital stereoscopic 3DTV with digital TV and HDTV, we present an overview of tools in the MPEG-2 video standard that are relevant to our discussion on compression of stereoscopic video, which is the main topic of this paper. Next, we determine ways in which temporal scalability concepts can be applied to exploit redundancies inherent between the two views of a scene comprising stereoscopic video. Due consideration is given to masking properties of stereoscopic vision to determine bandwidth partitioning between the two views to realize an efficient coding scheme while providing sufficient quality. Simulations are performed on stereoscopic video of normal TV resolution to compare the performance of the two temporal scalability configurations with each other and with the simulcast solution. Preliminary results are quite promising and indicate that the configuration that exploits motion and disparity compensation significantly outperforms the one that exploits disparity compensation alone. Compression of both views of stereo video of normal TV resolution appears feasible in a total of 8 or 9 Mbit/s. Finally…
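
    Disparity-compensated prediction between the two views, the cross-view analogue of the motion compensation exploited above, can be sketched per block; the block size, search range, and SAD cost below are illustrative choices, not the paper's encoder settings:

```python
import numpy as np

def disparity_compensate(left, right, block=8, search=8):
    """Predict the right view from the left view by searching, for each
    block, the horizontal shift (disparity) that minimizes the sum of
    absolute differences (SAD)."""
    h, w = left.shape
    pred = np.zeros_like(left)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tgt = right[y:y + block, x:x + block].astype(int)
            best, best_cost = 0, np.inf
            for d in range(-search, search + 1):
                if x + d < 0 or x + d + block > w:
                    continue  # shifted block would leave the frame
                cand = left[y:y + block, x + d:x + d + block].astype(int)
                cost = np.abs(cand - tgt).sum()
                if cost < best_cost:
                    best, best_cost = d, cost
            pred[y:y + block, x:x + block] = left[y:y + block,
                                                  x + best:x + best + block]
    return pred

rng = np.random.default_rng(3)
left = rng.integers(0, 256, (32, 32), dtype=np.uint8)
right = np.roll(left, -3, axis=1)        # a uniform 3-pixel disparity
pred = disparity_compensate(left, right)
# Interior blocks are predicted exactly; only the wrapped right edge differs.
print(np.array_equal(pred[:, :24], right[:, :24]))
```

    An encoder then transmits only the per-block disparities and the prediction residual, which is what makes the second view far cheaper to code than a simulcast.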

  11. Lossless Video Sequence Compression Using Adaptive Prediction

    NASA Technical Reports Server (NTRS)

    Li, Ying; Sayood, Khalid

    2007-01-01

    We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.
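
    The spatial side of such predictive coding can be sketched with the classic median edge detector (MED) predictor known from JPEG-LS, used here as a stand-in for the paper's backward-adaptive predictor (the paper additionally exploits temporal and spectral redundancy, which this sketch omits):

```python
import numpy as np

def med(a, b, c):
    """MED predictor: predict from left (a), top (b), top-left (c)."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def encode(frame):
    """Predictive coding: transmit only the prediction residuals."""
    frame = frame.astype(int)
    res = np.zeros_like(frame)
    for y in range(frame.shape[0]):
        for x in range(frame.shape[1]):
            a = frame[y, x - 1] if x else 0
            b = frame[y - 1, x] if y else 0
            c = frame[y - 1, x - 1] if x and y else 0
            res[y, x] = frame[y, x] - med(a, b, c)
    return res  # in a real coder these residuals are entropy coded

def decode(res):
    """Decoder mirrors the predictor on already-reconstructed pixels."""
    img = np.zeros(res.shape, dtype=int)
    for y in range(res.shape[0]):
        for x in range(res.shape[1]):
            a = img[y, x - 1] if x else 0
            b = img[y - 1, x] if y else 0
            c = img[y - 1, x - 1] if x and y else 0
            img[y, x] = med(a, b, c) + res[y, x]
    return img

rng = np.random.default_rng(2)
frame = rng.integers(0, 256, (24, 24))
assert np.array_equal(decode(encode(frame)), frame)  # lossless round trip
```

    Because the predictor only looks at causal (already-decoded) neighbours, the decoder needs no side information: this is the "backward adaptive" property the abstract highlights.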

  12. Interventional video tomography

    NASA Astrophysics Data System (ADS)

    Truppe, Michael J.; Pongracz, Ferenc; Ploder, Oliver; Wagner, Arne; Ewers, Rolf

    1995-05-01

    Interventional Video Tomography (IVT) is a new imaging modality for Image Directed Surgery to visualize in real-time intraoperatively the spatial position of surgical instruments relative to the patient's anatomy. The video imaging detector is based on a special camera equipped with an optical viewing and lighting system and electronic 3D sensors. When combined with an endoscope it is used for examining the inside of cavities or hollow organs of the body from many different angles. The surface topography of objects is reconstructed from a sequence of monocular video or endoscopic images. To increase accuracy and speed of the reconstruction the relative movement between objects and endoscope is continuously tracked by electronic sensors. The IVT image sequence represents a 4D data set in stereotactic space and contains image, surface topography and motion data. In ENT surgery an IVT image sequence of the planned and so far accessible surgical path is acquired prior to surgery. To simulate the surgical procedure the cross sectional imaging data is superimposed with the digitally stored IVT image sequence. During surgery the video sequence component of the IVT simulation is substituted by the live video source. The IVT technology makes obsolete the use of 3D digitizing probes for the patient image coordinate transformation. The image fusion of medical imaging data with live video sources is the first practical use of augmented reality in medicine. During surgery a head-up display is used to overlay real-time reformatted cross sectional imaging data with the live video image.

  13. Endoscopic video manifolds.

    PubMed

    Atasoy, Selen; Mateus, Diana; Lallemand, Joe; Meining, Alexander; Yang, Guang-Zhong; Navab, Nassir

    2010-01-01

    Postprocedural analysis of gastrointestinal (GI) endoscopic videos is a difficult task because the videos often suffer from a large number of poor-quality frames due to motion or out-of-focus blur, specular highlights, and artefacts caused by turbid fluid inside the GI tract. Clinically, each frame of the video is examined individually by the endoscopic expert due to the lack of a suitable visualisation technique. In this work, we introduce a low-dimensional representation of endoscopic videos based on a manifold learning approach. The introduced endoscopic video manifolds (EVMs) enable the clustering of poor-quality frames and the grouping of different segments of the GI endoscopic video in an unsupervised manner to facilitate subsequent visual assessment. In this paper, we present two novel inter-frame similarity measures for manifold learning to create structured manifolds from complex endoscopic videos. Our experiments demonstrate that the proposed method yields high precision and recall values for uninformative frame detection (90.91% and 82.90%) and results in well-structured manifolds for scene clustering. PMID:20879345
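    As a rough illustration of the manifold learning step, the sketch below builds a low-dimensional embedding from an inter-frame similarity matrix via the graph Laplacian (Laplacian-eigenmaps style). The similarity matrix and dimensions are invented for the example; the paper's actual inter-frame similarity measures are considerably more sophisticated:

```python
import numpy as np

def laplacian_embedding(similarity, dim=2):
    """Embed frames into `dim` dimensions from a symmetric inter-frame
    similarity matrix (Laplacian-eigenmaps style)."""
    W = np.asarray(similarity, dtype=float)
    D = np.diag(W.sum(axis=1))
    L = D - W                                  # unnormalised graph Laplacian
    vals, vecs = np.linalg.eigh(L)             # ascending eigenvalues
    return vecs[:, 1:1 + dim]                  # skip the constant eigenvector

# toy case: frames 0-1 and 2-3 form two mutually similar groups
# (e.g. one clear scene and one cluster of blurred frames)
S = np.array([[1.0, 0.9, 0.1, 0.1],
              [0.9, 1.0, 0.1, 0.1],
              [0.1, 0.1, 1.0, 0.9],
              [0.1, 0.1, 0.9, 1.0]])
emb = laplacian_embedding(S, dim=1)
```

In the one-dimensional embedding the two groups of frames land on opposite sides of zero, which is exactly the structure that makes clustering of uninformative frames possible.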

  14. Notions of Video Game Addiction and Their Relation to Self-Reported Addiction among Players of World of Warcraft

    ERIC Educational Resources Information Center

    Oggins, Jean; Sammis, Jeffrey

    2012-01-01

    In this study, 438 players of the online video game, World of Warcraft, completed a survey about video game addiction and answered an open-ended question about behaviors they considered characteristic of video game addiction. Responses were coded and correlated with players' self-reports of being addicted to games and scores on a modified video…

  15. A unified framework for optimal multiple video object bit allocation

    NASA Astrophysics Data System (ADS)

    Chen, Zhenzhong; Ngan, King Ngi

    2005-07-01

    MPEG-4 supports object-level video coding. It is a challenge to design an optimal bit allocation strategy that considers not only how to distribute bits among multiple video objects (MVOs) but also how to achieve optimization between the texture and shape information. In this paper, we present a unified framework for optimal multiple-video-object bit allocation in MPEG-4. We combine the rate-distortion (R-D) models for the texture and shape information of arbitrarily shaped video objects to develop joint texture-shape rate-distortion models. The dynamic programming (DP) technique is applied to optimize the bit allocation for the multiple video objects. The simulation results demonstrate that the proposed joint texture-shape optimization algorithm outperforms the MPEG-4 verification model on decoded picture quality.
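    A minimal sketch of dynamic-programming bit allocation among objects: given a per-object table of distortion as a function of allocated bits, DP finds the split that minimizes total distortion under the bit budget. The tables below are invented toy numbers, and the paper's allocation operates on joint texture-shape R-D models rather than flat tables:

```python
def allocate_bits(dist_tables, total_bits):
    """dist_tables[i][b] = distortion of object i when given b bits.
    Returns (minimum total distortion, per-object bit allocation)."""
    INF = float("inf")
    n = len(dist_tables)
    best = list(dist_tables[0][:total_bits + 1])   # DP over objects
    picks = [list(range(total_bits + 1))]
    for i in range(1, n):
        new = [INF] * (total_bits + 1)
        pick = [0] * (total_bits + 1)
        for b in range(total_bits + 1):
            for bi in range(b + 1):                # bits given to object i
                d = best[b - bi] + dist_tables[i][bi]
                if d < new[b]:
                    new[b], pick[b] = d, bi
        best = new
        picks.append(pick)
    alloc, b = [0] * n, total_bits                 # backtrack optimal split
    for i in range(n - 1, -1, -1):
        alloc[i] = picks[i][b]
        b -= alloc[i]
    return best[total_bits], alloc

# two toy objects with distortion falling as allocated bits increase
d, alloc = allocate_bits([[9, 4, 1, 0, 0], [16, 9, 4, 1, 0]], total_bits=4)
```

For this toy input the optimum gives two bits to each object for a total distortion of 5.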

  16. Portrayal of Smokeless Tobacco in YouTube Videos

    PubMed Central

    Augustson, Erik M.; Backinger, Cathy L.

    2012-01-01

    Objectives: Videos of smokeless tobacco (ST) on YouTube are abundant and easily accessible, yet no studies have examined the content of ST videos. This study assesses the overall portrayal, genre, and messages of ST YouTube videos. Methods: In August 2010, researchers identified the top 20 search results on YouTube by “relevance” and “view count” for the following search terms: “ST,” “chewing tobacco,” “snus,” and “Skoal.” After eliminating videos that were not about ST (n = 26), non-English (n = 14), or duplicate (n = 42), a final sample of 78 unique videos was coded for overall portrayal, genre, and various content measures. Results: Among the 78 unique videos, 15.4% were anti-ST, while 74.4% were pro-ST. Researchers were unable to determine the portrayal of ST in the remaining 10.3% of videos because they involved excessive or “sensationalized” use of the ST, which could be interpreted either positively or negatively, depending on the viewer. The most common ST genre was positive video diaries (or “vlogs”), which made up almost one third of the videos (29.5%), followed by promotional advertisements (20.5%) and anti-ST public service announcements (12.8%). While YouTube is intended for user-generated content, 23.1% of the videos were created by professional organizations. Conclusions: These results demonstrate that ST videos on YouTube are overwhelmingly pro-ST. More research is needed to determine who is viewing these ST YouTube videos and how they may affect people’s knowledge, attitudes, and behaviors regarding ST use. PMID:22080585

  17. Astronomical Video Suites

    NASA Astrophysics Data System (ADS)

    Francisco Salgado, Jose

    2010-01-01

    Astronomer and visual artist Jose Francisco Salgado has directed two astronomical video suites to accompany live performances of classical music works. The suites feature awe-inspiring images, historical illustrations, and visualizations produced by NASA, ESA, and the Adler Planetarium. By the end of 2009, his video suites Gustav Holst's The Planets and Astronomical Pictures at an Exhibition will have been presented more than 40 times in over 10 countries. Lately Salgado, an avid photographer, has been experimenting with high dynamic range imaging, time-lapse, infrared, and fisheye photography, as well as with stereoscopic photography and video to enhance his multimedia works.

  18. Video image cliff notes

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Hsu, Charles

    2012-06-01

    Can a compressive sampling expert system help to build a summary of a video in a composited picture? The digital Internet age has given everyone a new degree of informational freedom, but with it comes an accumulation of content far beyond what analysts can sort through, making automatic video summarization a natural digital-library capability. While we wish to preserve the democratic spirit of the smartphone Internet for all, we provide an automated, unbiased tool, the compressive sampling expert system (CSpES), to summarize video content at the user's own discretion.

  19. Video Analysis with a Web Camera

    NASA Astrophysics Data System (ADS)

    Wyrembeck, Edward P.

    2009-01-01

    Recent advances in technology have made video capture and analysis in the introductory physics lab even more affordable and accessible. The purchase of a relatively inexpensive web camera is all you need if you already have a newer computer and Vernier's Logger Pro 3 software. In addition to Logger Pro 3, other video analysis tools such as Videopoint and Doug Brown's Tracker, which is freely downloadable, could also be used. I purchased Logitech's QuickCam Pro 4000 web camera for $99 after Rick Sorensen at Vernier Software and Technology recommended it for computers using a Windows platform. Once I had mounted the web camera on a mobile computer with Velcro and installed the software, I was ready to capture motion video and analyze it.

  20. Seals Flow Code Development

    NASA Technical Reports Server (NTRS)

    1991-01-01

    In recognition of a deficiency in the current modeling capability for seals, an effort was established by NASA to develop verified computational fluid dynamic concepts, codes, and analyses for seals. The objectives were to develop advanced concepts for the design and analysis of seals, to effectively disseminate the information to potential users by way of annual workshops, and to provide experimental verification for the models and codes under a wide range of operating conditions.

  1. A new video codec based on 3D-DTCWT and vector SPIHT

    NASA Astrophysics Data System (ADS)

    Xu, Ruiping; Li, Huifang; Xie, Sunyun

    2011-10-01

    In this paper, a new video coding system combining the 3-D complex dual-tree discrete wavelet transform (DTCWT) with vector SPIHT and arithmetic coding is proposed and tested on standard video sequences. First, the 3-D DTCWT of each color component of the video sequence is computed. The wavelet coefficients are then grouped into vectors, and a successive-refinement vector quantization technique is used to quantize the groups. Finally, experimental results are given, showing that the proposed video codec provides better performance than the 3D-DTCWT and 3D-SPIHT codecs; the superior performance of the proposed scheme lies in not performing motion compensation.

  2. CameraCast: flexible access to remote video sensors

    NASA Astrophysics Data System (ADS)

    Kong, Jiantao; Ganev, Ivan; Schwan, Karsten; Widener, Patrick

    2007-01-01

    New applications like remote surveillance and online environmental or traffic monitoring are making it increasingly important to provide flexible and protected access to remote video sensor devices. Current systems use application-level codes like web-based solutions to provide such access. This requires adherence to user-level APIs provided by such services, access to remote video information through given application-specific service and server topologies, and that the data being captured and distributed is manipulated by third party service codes. CameraCast is a simple, easily used system-level solution to remote video access. It provides a logical device API so that an application can identically operate on local vs. remote video sensor devices, using its own service and server topologies. In addition, the application can take advantage of API enhancements to protect remote video information, using a capability-based model for differential data protection that offers fine grain control over the information made available to specific codes or machines, thereby limiting their ability to violate privacy or security constraints. Experimental evaluations of CameraCast show that the performance of accessing remote video information approximates that of accesses to local devices, given sufficient networking resources. High performance is also attained when protection restrictions are enforced, due to an efficient kernel-level realization of differential data protection.

  3. Error-resilient method for robust video transmissions

    NASA Astrophysics Data System (ADS)

    Choi, Dong-Hwan; Lim, Tae-Gyun; Lee, Sang-Hak; Hwang, Chan-Sik

    2003-06-01

    In this paper we address the problems of video transmission in error-prone environments. A novel error-resilient method is proposed that uses a data embedding scheme for the header parameters in video coding standards such as MPEG-2 and H.263. When data losses beyond header errors must also be taken into account, the video decoder conceals the resulting visual degradation as well as possible by employing an error concealment method based on an affine transform. Header information is very important because syntax elements, tables, and decoding processes all depend on the values of the header information. Transmission errors in header information can therefore result in serious visual degradation of the output video and can also cause an abnormal decoding process. In the proposed method, the header parameters are embedded into the least significant bits (LSBs) of the quantized DCT coefficients. Then, when errors occur in the header field of the compressed bitstream, the decoder can accurately recover the corrupted header parameters provided the embedded information is extracted correctly. The error concealment technique employed in this paper uses motion estimation that considers actual motions in moving pictures, such as rotation, magnification, reduction, and parallel motion. Experimental results show that the proposed error-resilient method can effectively reconstruct the original video sequence without any additional bits or modifications to the video coding standard, and that the error concealment method produces a higher PSNR value and better subjective video quality by estimating the motion of lost data more accurately.
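    The LSB embedding step described above can be sketched as follows, assuming integer quantized DCT coefficients (sign handling and the choice of which coefficients to use are simplified relative to a real codec):

```python
def embed_bits(coeffs, bits):
    """Replace the LSBs of the first len(bits) quantized DCT
    coefficients with the header bits (illustrative sketch)."""
    out = list(coeffs)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b       # clear LSB, then set it to the bit
    return out

def extract_bits(coeffs, n):
    """Decoder side: read the header bits back out of the LSBs."""
    return [c & 1 for c in coeffs[:n]]

header = [1, 0, 1, 1]                    # e.g. bits of a picture-header field
coeffs = [5, -3, 2, 7, 0, 4]             # toy quantized DCT coefficients
marked = embed_bits(coeffs, header)
```

If the header field of the bitstream is then corrupted in transit, the decoder can recover the parameters from the coefficient LSBs instead of discarding the picture.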

  4. Rap video vs. traditional video for teaching nutrition.

    PubMed

    Connelly, J O; Berryman, T; Tolley, E A

    1996-01-01

    This study compared the effectiveness of a rap video with a traditional video in providing nutrition information. Sixty pregnant African-American females (ages 14 through 18) were randomly assigned to view either a rap video or a traditional video about good nutrition. The data revealed no significant difference in scores between the two versions; both videos produced significant learning; and 17 and 18 year olds scored higher than 15 and 16 year olds. PMID:16764122

  5. Video Views and Reviews

    ERIC Educational Resources Information Center

    Watters, Christopher D.

    2003-01-01

    This article reviews three "Molecular Biology of the Cell" movies. These include videos on nuclear dynamics and nuclear localization signals, spindle and chromosomal movements during mitosis, and fibroblast motility and substrate adhesiveness. (Contains 5 figures.)

  6. How Video Can Help.

    ERIC Educational Resources Information Center

    Torrence, David R.

    1985-01-01

    The author presents suggestions concerning the use of video in training programs. Suggestions involve viewing angles, use of humor or animation, models, subtitles and repetition, note taking, feedback, number of viewers, visual and auditory distractions, and use of data. (CT)

  7. IRIS First Light Video

    NASA Video Gallery

    First Interface Region Imaging Spectrograph (IRIS) movie, 21 hours after opening the telescope door. This video has been slowed forty percent and looped four times to show greater detail. Credit: N...

  8. Analyzing crime scene videos

    NASA Astrophysics Data System (ADS)

    Cunningham, Cindy C.; Peloquin, Tracy D.

    1999-02-01

    Since late 1996 the Forensic Identification Services Section of the Ontario Provincial Police has been actively involved in state-of-the-art image capture and the processing of video images extracted from crime scene videos. The benefits and problems of this technology for video analysis are discussed. All analysis is being conducted on SUN Microsystems UNIX computers, networked to a digital disk recorder that is used for video capture. The primary advantage of this system over traditional frame grabber technology is reviewed. Examples from actual cases are presented and the successes and limitations of this approach are explored. Suggestions to companies implementing security technology plans for various organizations (banks, stores, restaurants, etc.) will be made. Future directions for this work and new technologies are also discussed.

  9. NREL Buildings Research Video

    ScienceCinema

    None

    2013-05-29

    Through research, the National Renewable Energy Laboratory (NREL) has developed many strategies and design techniques to ensure both commercial and residential buildings use as little energy as possible and also work well with the surroundings. Here you will find a video that introduces the work of NREL Buildings Research, highlights some of the facilities on the NREL campus, and demonstrates these efficient building strategies. Watch this video to see design highlights of the Science and Technology Facility on the NREL campus, the first Federal building to be LEED® Platinum certified. Additionally, the video demonstrates the energy-saving features of NREL's Thermal Test Facility. For a text version of this video visit http://www.nrel.gov/buildings/about_research_text_version.html

  10. Latest Highlights from our Direct Measurement Video Collection

    NASA Astrophysics Data System (ADS)

    Vonk, M.; Bohacek, P. H.

    2014-12-01

    Recent advances in technology have made videos much easier to produce, edit, store, transfer, and view. This has spawned an explosion in the production of a wide variety of pedagogical videos. But with the exception of student-made videos (which are often of poor quality), almost all of the educational videos being produced are passive. No matter how compelling the content, students are expected to simply sit and watch them. Because we feel that being engaged and active are necessary components of student learning, we have been working to create a free online library of Direct Measurement Videos (DMVs). These videos are short, high-quality videos of real events, shot in a way that allows students to make measurements directly from the video. Instead of handing students a word problem about a car skidding on ice, we actually show them the car skidding on ice. We then ask them to measure the important quantities, make calculations based on those measurements, and solve for unknowns. DMVs are more interesting than their word-problem equivalents and frequently inspire further questions about the physics of the situation or about the uncertainty of the measurement in ways that word problems almost never do. We feel that it is simply impossible to watch a video of a roller coaster or a rocket and then argue that word problems are better. In this talk I will highlight some new additions to our DMV collection. This work is supported by NSF TUES award #1245268.

  11. Learning from Online Video Lectures

    ERIC Educational Resources Information Center

    Brecht, H. David

    2012-01-01

    This study empirically examines the instructional value of online video lectures--videos that a course's instructor prepares to supplement classroom or online-broadcast lectures. The study examines data from a classroom course, where the videos have a slower, more step-by-step lecture style than the classroom lectures; student use of videos is…

  12. Industrial-Strength Streaming Video.

    ERIC Educational Resources Information Center

    Avgerakis, George; Waring, Becky

    1997-01-01

    Corporate training, financial services, entertainment, and education are among the top applications for streaming video servers, which send video to the desktop without downloading the whole file to the hard disk, saving time and eliminating copyrights questions. Examines streaming video technology, lists ten tips for better net video, and ranks…

  13. Video face recognition against a watch list

    NASA Astrophysics Data System (ADS)

    Abbas, Jehanzeb; Dagli, Charlie K.; Huang, Thomas S.

    2007-10-01

    Due to the recent large increase in video surveillance data, collected in an effort to maintain high security at public places, we need more robust systems to analyze this data and make tasks like face recognition a realistic possibility in challenging environments. In this paper we explore a watch-list scenario in which we use an appearance-based model to classify query faces from low-resolution videos as either watch-list or non-watch-list faces. We then use our simple yet powerful face recognition system to recognize the faces classified as watch-list faces, where the watch-list comprises those people we are interested in recognizing. Our system uses simple feature machine algorithms from our previous work to match video faces against still images. To test our approach, we match video faces against a large database of still images obtained from previous work in the field, collected from Yahoo News over a period of time. We do this matching in an efficient manner to arrive at a faster, nearly real-time system. This system can be incorporated into a larger surveillance system equipped with advanced algorithms involving anomalous event detection and activity recognition. This is a step towards more secure and robust surveillance systems and efficient video data analysis.

  14. Fluorescence endoscopic video system

    NASA Astrophysics Data System (ADS)

    Papayan, G. V.; Kang, Uk

    2006-10-01

    This paper describes a fluorescence endoscopic video system intended for the diagnosis of diseases of the internal organs. The system operates on the basis of two-channel recording of the video fluxes from a fluorescence channel and a reflected-light channel by means of a high-sensitivity monochrome television camera and a color camera, respectively. Examples are given of the application of the device in gastroenterology.

  15. Video Editing System

    NASA Technical Reports Server (NTRS)

    Schlecht, Leslie E.; Kutler, Paul (Technical Monitor)

    1998-01-01

    This is a proposal for a general-use system, based on the SGI IRIS workstation platform, for recording computer animation to videotape. In addition, this system would provide features for simple editing and enhancement. Described here are a list of requirements for the system and a proposed configuration including the SGI VideoLab Integrator, VideoMedia VLAN animation controller, and the Pioneer rewritable laserdisc recorder.

  16. Video image position determination

    DOEpatents

    Christensen, Wynn; Anderson, Forrest L.; Kortegaard, Birchard L.

    1991-01-01

    An optical beam position controller in which a video camera captures an image of the beam in its video frames, and conveys those images to a processing board which calculates the centroid coordinates for the image. The image coordinates are used by motor controllers and stepper motors to position the beam in a predetermined alignment. In one embodiment, system noise, used in conjunction with Bernoulli trials, yields higher resolution centroid coordinates.
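    The centroid calculation at the heart of such a controller is an intensity-weighted average of pixel coordinates; a minimal numpy sketch (the noise-plus-Bernoulli-trials resolution refinement described in the patent is omitted):

```python
import numpy as np

def beam_centroid(frame):
    """Intensity-weighted centroid (row, col) of a beam image, the
    quantity a motor controller would use to re-center the beam."""
    frame = np.asarray(frame, dtype=float)
    rows, cols = np.indices(frame.shape)
    total = frame.sum()
    return (rows * frame).sum() / total, (cols * frame).sum() / total

spot = np.zeros((5, 5))
spot[1, 2] = spot[1, 3] = 2.0            # a small bright beam spot
r, c = beam_centroid(spot)
```

The controller would compare (r, c) against the target alignment and drive the stepper motors by the difference.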

  17. Violence in Teen-Rated Video Games

    PubMed Central

    Haninger, Kevin; Ryan, M. Seamus; Thompson, Kimberly M

    2004-01-01

    Context: Children's exposure to violence in the media remains a source of public health concern; however, violence in video games rated T (for “Teen”) by the Entertainment Software Rating Board (ESRB) has not been quantified. Objective: To quantify and characterize the depiction of violence and blood in T-rated video games. According to the ESRB, T-rated video games may be suitable for persons aged 13 years and older and may contain violence, mild or strong language, and/or suggestive themes. Design: We created a database of all 396 T-rated video game titles released on the major video game consoles in the United States by April 1, 2001 to identify the distribution of games by genre and to characterize the distribution of content descriptors for violence and blood assigned to these games. We randomly sampled 80 game titles (which included 81 games because 1 title included 2 separate games), played each game for at least 1 hour, and quantitatively assessed the content. Given the release of 2 new video game consoles, Microsoft Xbox and Nintendo GameCube, and a significant number of T-rated video games released after we drew our random sample, we played and assessed 9 additional games for these consoles. Finally, we assessed the content of 2 R-rated films, The Matrix and The Matrix: Reloaded, associated with the T-rated video game Enter the Matrix. Main Outcome Measures: Game genre; percentage of game play depicting violence; depiction of injury; depiction of blood; number of human and nonhuman fatalities; types of weapons used; whether injuring characters, killing characters, or destroying objects is rewarded or is required to advance in the game; and content that may raise concerns about marketing T-rated video games to children. Results: Based on analysis of the 396 T-rated video game titles, 93 game titles (23%) received content descriptors for both violence and blood, 280 game titles (71%) received only a content descriptor for violence, 9 game titles (2

  18. Secure authenticated video equipment

    SciTech Connect

    Doren, N.E.

    1993-07-01

    In the verification technology arena, there is a pressing need for surveillance and monitoring equipment that produces authentic, verifiable records of observed activities. Such a record provides the inspecting party with confidence that observed activities occurred as recorded, without undetected tampering or spoofing having taken place. The secure authenticated video equipment (SAVE) system provides an authenticated series of video images of an observed activity. Being self-contained and portable, it can be installed as a stand-alone surveillance system or used in conjunction with existing monitoring equipment in a non-invasive manner. Security is provided by a tamper-proof camera enclosure containing a private, electronic authentication key. Video data is transferred over a communication link consisting of a coaxial cable, fiber-optic link, or other similar media. A video review station, located remotely from the camera, receives, validates, displays and stores the incoming data. Video data is validated within the review station using a public key, a copy of which is held by authorized parties. This scheme allows the holder of the public key to verify the authenticity of the recorded video data but precludes undetectable modification of the data generated by the tamper-protected private authentication key.
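    The private-key signing / public-key validation flow can be illustrated with a textbook-sized RSA keypair (the classic 3233-modulus example; real equipment would use far larger keys and a standardized signature scheme):

```python
import hashlib

# Toy RSA keypair (p=61, q=53 -> n=3233, e=17, d=2753).  Illustration
# only: real systems use >= 2048-bit moduli and padded signatures.
N, E, D = 3233, 17, 2753

def sign(data: bytes) -> int:
    """Camera side: sign a hash of the video data with private key D."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    return pow(h, D, N)

def verify(data: bytes, signature: int) -> bool:
    """Review station: validate the data with the public key (N, E)."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    return pow(signature, E, N) == h

frame_bytes = b"frame-0001-pixel-data"
sig = sign(frame_bytes)
```

Anyone holding (N, E) can check the record, but producing a valid signature requires D, which never leaves the tamper-proof enclosure.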

  19. Frame architecture for video servers

    NASA Astrophysics Data System (ADS)

    Venkatramani, Chitra; Kienzle, Martin G.

    1999-11-01

    Video is inherently frame-oriented, and most applications, such as commercial video processing, need to manipulate video in terms of frames. However, typical video servers treat videos as byte streams and perform random access based on approximate byte offsets supplied by the client. They do not provide a frame- or timecode-oriented API, which is essential for many applications. This paper describes a frame-oriented architecture for video servers. It also describes the implementation in the context of IBM's VideoCharger server. The latter part of the paper describes an application that uses the frame architecture and provides fast- and slow-motion scanning capabilities to the server.
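    A frame-oriented API of this kind essentially keeps an index from frame numbers (or timecodes) to byte ranges; a minimal sketch (class and method names are ours, not the VideoCharger API):

```python
class FrameIndex:
    """Maps frame numbers and timecodes to byte ranges, so a client can
    request frames instead of guessing approximate byte offsets."""
    def __init__(self, frame_sizes, fps=30):
        self.offsets = [0]
        for size in frame_sizes:                 # cumulative byte offsets
            self.offsets.append(self.offsets[-1] + size)
        self.fps = fps

    def byte_range(self, frame_no):
        """(start, end) byte offsets of one frame in the stream."""
        return self.offsets[frame_no], self.offsets[frame_no + 1]

    def frame_at(self, seconds):
        """Frame number at a given timecode."""
        return int(seconds * self.fps)

idx = FrameIndex([120, 95, 130, 88], fps=2)      # toy 4-frame stream
```

Fast- and slow-motion scanning then becomes a matter of iterating this index with a stride, fetching exactly the byte ranges needed.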

  20. The Effect of Online Violent Video Games on Levels of Aggression

    PubMed Central

    Hollingdale, Jack; Greitemeyer, Tobias

    2014-01-01

    Background In recent years the video game industry has surpassed both the music and video industries in sales. Currently violent video games are among the most popular video games played by consumers, most specifically First-Person Shooters (FPS). Technological advancements in the game play experience, including the ability to play online, have accounted for this increase in popularity. Previous research, utilising the General Aggression Model (GAM), has identified that violent video games increase levels of aggression. Little is known, however, about the effect of playing a violent video game online. Methods/Principal Findings Participants (N = 101) were randomly assigned to one of four experimental conditions: neutral video game—offline, neutral video game—online, violent video game—offline and violent video game—online. Following this they completed questionnaires to assess their attitudes towards the game and engaged in a chilli sauce paradigm to measure behavioural aggression. The results identified that participants who played a violent video game exhibited more aggression than those who played a neutral video game. Furthermore, this main effect was not particularly pronounced when the game was played online. Conclusions/Significance These findings suggest that playing violent video games, whether online or offline, increases aggression compared with playing neutral video games. PMID:25391143

  1. Authorized MPEG-4 video fruition via watermarking recovering and smart card certification

    NASA Astrophysics Data System (ADS)

    Caldelli, Roberto; Bartolini, Franco

    2003-01-01

    A client-server application for MPEG-4 video distribution in a VOD (Video-On-Demand) infrastructure has been built that grants authorized fruition by means of digital watermarking. Once the consumer has chosen to watch a program, his smart card code, plugged into the set-top box, is sent back to the server side. This code is embedded, at that very moment, in the video sequence before it is streamed towards the client through the network (the code is obviously used for payment too). The client side, adequately equipped with the watermark detector, receives the video and checks it by extracting the identifying code, which is matched against the code located in the end user's smart card; if the comparison succeeds, rendering is allowed, otherwise decoding is stopped and fruition is inhibited.
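    The embed/extract/compare cycle can be illustrated with a toy additive watermark. Here the per-bit patterns are placed on disjoint pixel strips so detection is exact, and detection is non-blind (the original frame is available); both are simplifications of a real video watermarking scheme:

```python
import numpy as np

def make_patterns(n_bits, shape, seed=7):
    """One pseudo-random +/-1 pattern per bit, on disjoint pixel strips
    so the patterns are exactly orthogonal (a simplification)."""
    size = shape[0] * shape[1]
    strip = size // n_bits
    rng = np.random.default_rng(seed)
    pats = np.zeros((n_bits, size))
    for i in range(n_bits):
        pats[i, i * strip:(i + 1) * strip] = rng.choice([-1.0, 1.0], strip)
    return pats.reshape((n_bits,) + shape)

def embed(frame, bits, patterns, strength=3.0):
    """Server side: add each bit's pattern, signed by the bit value."""
    wm = sum((strength if b else -strength) * p
             for b, p in zip(bits, patterns))
    return frame + wm

def detect(received, original, patterns):
    """Client side (non-blind): correlate the difference with each
    pattern; positive correlation means bit 1."""
    return [int(((received - original) * p).sum() > 0) for p in patterns]

card_code = [1, 0, 1, 1]                 # identifying code from the smart card
gray = np.full((4, 4), 128.0)
pats = make_patterns(len(card_code), gray.shape)
marked = embed(gray, card_code, pats)
# set-top box: extract the code and compare with the local smart card
allowed = detect(marked, gray, pats) == card_code
```

Rendering proceeds only when `allowed` is true; otherwise decoding stops, mirroring the fruition-control flow in the abstract.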

  2. On the coding of interlace scanned content in HEVC

    NASA Astrophysics Data System (ADS)

    Hinds, Arianne T.; Syed, Yasser; Agyo, Zineb; Vieron, Jerome; Thiesse, Jean-Marc

    2013-09-01

    High Efficiency Video Coding (HEVC) is the latest in the series of video coding standards developed by MPEG, by VCEG, or jointly through a collaboration of the two committees. The first version of HEVC was completed in January 2013, but was developed without specific requirements for the compression of interlaced video content. Rather, the requirements for the initial version of HEVC targeted a 50% reduction of the bitrate required to deliver progressive video signals at the same, or nearly the same, visual quality as achieved by current state-of-the-art video codecs. Despite the lack of formal requirements for the support of interlace-scanned content, this first version of HEVC nevertheless supports interlaced video formats, but only in a nominal manner, without the use of specific coding tools. Interlaced formats, however, continue to be the primary format used by broadcasters for the capture and delivery of video, having most recently been used exclusively to capture and broadcast the 2012 Summer Olympics worldwide. This paper explores the continued importance and relevance of interlaced formats for next-generation video coding standards, including HEVC. The in-progress experiments and results of a formal study of HEVC for the coding of interlaced content are presented.

  3. Orbital maneuvering vehicle teleoperation and video data compression

    NASA Technical Reports Server (NTRS)

    Jones, Steve

    1989-01-01

    The Orbital Maneuvering Vehicle (OMV) and the concepts of teleoperation and video data compression as applied to OMV design and operation are described. The OMV provides spacecraft delivery, retrieval, reboost, deboost, and viewing services, with ground-control or Space Station operation, through autonomous navigation and pilot-controlled maneuvers. The communications system comprises S-band RF command, telemetry, and compressed video data links through the TDRSS and GSTDN networks. The control console video monitors display a monochrome image at an update rate of five frames per second. Depending upon the mode of operation selected by the pilot, the video resolution is either 255 x 244 pixels or 510 x 244 pixels. Since practically all video image redundancy is removed by the compression process, the video reconstruction is particularly sensitive to data transmission bit errors. Concatenated Reed-Solomon and convolutional coding are used with helical data interleaving for error detection and correction, and an error-containment process minimizes the propagation of error effects throughout the video image. Video sub-frame replacement is used, in the case of a non-correctable error or error burst, to minimize the visual impact to the pilot.
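    The interleaving idea, spreading a burst of channel errors across distant positions in the video data, can be illustrated with a simple block interleaver (the OMV's actual interleaving is helical, which this sketch does not reproduce):

```python
def interleave(data, depth):
    """Write symbols row-wise into a depth-wide block, read column-wise."""
    assert len(data) % depth == 0
    rows = [data[i:i + depth] for i in range(0, len(data), depth)]
    return [row[c] for c in range(depth) for row in rows]

def deinterleave(data, depth):
    """Inverse: a burst in the channel now lands several symbols apart,
    within the correction power of the Reed-Solomon code."""
    nrows = len(data) // depth
    cols = [data[i:i + nrows] for i in range(0, len(data), nrows)]
    return [col[r] for r in range(nrows) for col in cols]

stream = list(range(12))
sent = interleave(stream, depth=4)
```

Note that the first three transmitted symbols come from original positions 0, 4, and 8, so a three-symbol channel burst is dispersed after de-interleaving.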

  4. VBR transcoding architecture for video streaming

    NASA Astrophysics Data System (ADS)

    Yu, Yue; Chen, Chang W.

    2000-12-01

    The delivery of high-quality video programs stored on a video server through heterogeneous networks is a fast-emerging service that has attracted much attention recently. A transcoding technique that converts a pre-encoded video sequence from a high bitrate to a low bitrate is a key component of such a system. Based on our previously proposed VBR coding for fixed-storage applications, we propose in this paper a VBR transcoding architecture to accomplish video streaming over heterogeneous networks. First, we assume that the pre-encoded bitstream is generated using the proposed VBR encoding scheme at a relatively high bitrate. In addition, the relationship between the number of bits for each frame and all possible quantization factors, i.e., the rate (R) versus quantization (Q) function, is also generated at the video server. Once users provide their access requirements, the video server optimally computes the appropriate quantization factor for each frame according to the constraints of the desired bitrate and the finite buffer size. The compressed bitstream is then decoded to the DCT domain and requantized with the updated quantization factors. Because the generation of the R-Q function is based on the original frame and does not depend on the quantization factors, it is possible to reuse the motion information embedded in the compressed bitstream and implement the transcoding in the DCT domain. The computational expense of the proposed transcoder is much lower than that of schemes which decode the compressed bitstream to the pixel domain and transcode to the desired bitrate using a complete encoder. Furthermore, this VBR transcoding scheme is able to generate a CBR-like output while retaining all the advantages of VBR encoding, which facilitates the delivery of VBR bitstreams through various existing CBR channels. Experimental results demonstrate that the proposed VBR transcoding is capable of achieving both a higher mean PSNR and more consistent decoded visual quality.
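The two core steps of such a transcoder, choosing a per-frame quantizer from the stored R-Q function and requantizing in the DCT domain, can be sketched as follows. The `rq_table` representation and all values are illustrative assumptions, not the paper's actual interface:

```python
def choose_q(rq_table, frame_budget):
    # rq_table: {quantizer_step: predicted_bits} for one frame, taken from
    # the R-Q function precomputed at the video server.
    # Pick the finest quantizer whose predicted bit count fits the budget.
    feasible = [q for q, bits in rq_table.items() if bits <= frame_budget]
    return min(feasible) if feasible else max(rq_table)

def requantize(levels, q_old, q_new):
    # Requantize DCT coefficient levels directly in the transform domain:
    # dequantize with the original step, requantize with the coarser one.
    # No inverse DCT or motion re-estimation is needed.
    return [int(round(lvl * q_old / q_new)) for lvl in levels]
```

Because motion vectors are reused from the input bitstream, the cost per frame is dominated by this scaling of coefficient levels rather than by a full decode-encode cycle.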

  5. Correlation structure analysis for distributed video compression over wireless video sensor networks

    NASA Astrophysics Data System (ADS)

    He, Zhihai; Chen, Xi

    2006-01-01

    From the information-theoretic perspective, as stated by the Wyner-Ziv theorem, the distributed source encoder does not need any knowledge of its side information to achieve the R-D performance limit. However, from the system design and performance analysis perspective, correlation modeling plays an important role in the analysis, control, and optimization of the R-D behavior of Wyner-Ziv video coding. In this work, we observe that videos captured by a wireless video sensor network (WVSN) are uniquely correlated under the multi-view geometry. We propose to utilize this computer vision principle, as well as other information that is already available or can be easily obtained at the encoder, to estimate the source correlation structure. The source correlation determines the R-D behavior of the Wyner-Ziv encoder and provides useful information for rate control and performance optimization of the Wyner-Ziv encoder.
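A common concrete form of such a correlation model treats the residual between the source and its decoder-side information as zero-mean Laplacian noise; the sketch below (an illustration of the standard model, not the authors' estimator) fits the Laplacian scale parameter, which in turn drives rate allocation:

```python
def fit_laplacian_scale(residuals):
    # Model the Wyner-Ziv correlation noise r = source - side_information as
    # p(r) = (alpha / 2) * exp(-alpha * |r|), and fit alpha by maximum
    # likelihood: alpha = 1 / mean(|r|). A larger alpha means the side
    # information is a better predictor, so fewer Wyner-Ziv bits are needed.
    mean_abs = sum(abs(r) for r in residuals) / len(residuals)
    return 1.0 / mean_abs
```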

  6. On the Grammar of Code-Switching.

    ERIC Educational Resources Information Center

    Bhatt, Rakesh M.

    1996-01-01

    Explores an Optimality-Theoretic approach to account for observed cross-linguistic patterns of code switching that assumes that code switching strives for well-formedness. Optimization of well-formedness in code switching is shown to follow from (violable) ranked constraints. An argument is advanced that code-switching patterns emerge from…

  7. Video recording technology and its prospects

    NASA Astrophysics Data System (ADS)

    Oshima, Hideo

    1994-06-01

    The progress of broadcasting digitization technologies has produced digital VTRs for field use which are quickly replacing the conventional analog versions. In parallel with these developments, advanced high-density recording and image-data compression technologies have created the possibility for home VTRs to be digitized as well so that they may even be able to record/play back Hi-Vision programs. This paper discusses the current status and future prospects of video recording technology centering on digital VTRs.

  8. Deriving video content type from HEVC bitstream semantics

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio R.

    2014-05-01

    As network service providers seek to improve customer satisfaction and retention levels, they are increasingly moving from traditional quality-of-service (QoS) driven delivery models to customer-centred quality-of-experience (QoE) delivery models. QoS models consider only metrics derived from the network; QoE models also consider metrics derived from within the video sequence itself. Various spatial and temporal characteristics of a video sequence have been proposed, both individually and in combination, to derive methods of classifying video content either on a continuous scale or as a set of discrete classes. QoE models can be divided into three broad categories: full-reference, reduced-reference, and no-reference models. Because the original video must be available at the client for comparison, full-reference metrics are of limited practical value in adaptive real-time video applications. Reduced-reference metrics often require metadata to be transmitted with the bitstream, while no-reference metrics typically operate in the decompressed domain at the client side and require significant processing to extract spatial and temporal features. This paper proposes a heuristic, no-reference approach to video content classification which is specific to HEVC-encoded bitstreams. The HEVC encoder already makes use of spatial characteristics to determine the partitioning of coding units and of temporal characteristics to determine the splitting of prediction units. We derive a function which approximates the spatio-temporal characteristics of the video sequence by using the weighted averages of the depth at which the coding unit quadtree is split and of the prediction mode decision made by the encoder to estimate spatial and temporal characteristics, respectively. Since the video content type of a sequence is determined using high-level information parsed from the video stream, spatio-temporal characteristics are identified without the need for full decoding and can
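The kind of weighted-average feature the abstract describes can be sketched as below; the per-CU and per-PU input representations are assumptions about how the parsed bitstream data might be organized, not the paper's actual interface:

```python
def spatial_feature(cu_depths):
    # cu_depths: list of (quadtree_depth, num_pixels) per coding unit.
    # Deeper quadtree splits indicate more spatial detail; weight each
    # depth by the area the coding unit covers.
    total = sum(n for _, n in cu_depths)
    return sum(d * n for d, n in cu_depths) / total

def temporal_feature(pu_modes):
    # pu_modes: list of (is_intra, num_pixels) per prediction unit.
    # A larger intra-coded fraction suggests motion or scene change that
    # inter prediction could not exploit, i.e. higher temporal activity.
    total = sum(n for _, n in pu_modes)
    return sum(n for intra, n in pu_modes if intra) / total
```

Both features are computed from syntax elements alone, so only entropy decoding and parsing are required, never full pixel reconstruction.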

  9. Video Game Device Haptic Interface for Robotic Arc Welding

    SciTech Connect

    Corrie I. Nichol; Milos Manic

    2009-05-01

    Recent advances in technology for video games have made a broad array of haptic feedback devices available at low cost. This paper presents a bi-manual haptic system that enables an operator to weld remotely, using a commercially available haptic feedback video game device as the user interface. The system showed good performance in initial tests, demonstrating the utility of low-cost input devices for remote haptic operations.

  10. Uplink Coding

    NASA Technical Reports Server (NTRS)

    Pollara, Fabrizio; Hamkins, Jon; Dolinar, Sam; Andrews, Ken; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews uplink coding. The purpose and goals of the briefing are to (1) show a plan for using uplink coding and describe its benefits; (2) define possible solutions and their applicability to different types of uplink, including emergency uplink; (3) concur with the conclusions so that a plan to use the proposed uplink system can proceed; (4) identify the need for the development of appropriate technology and its infusion into the DSN; and (5) gain advocacy to implement uplink coding in flight projects. Action Item EMB04-1-14: Show a plan for using uplink coding, including showing where it is useful or not (include discussion of emergency uplink coding).

  11. Video teleconferencing review for support of high energy physics activities

    SciTech Connect

    Chartrand, G.

    1991-03-13

    Although video teleconferencing systems have been available for many years, their cost has been considered prohibitive, both for the teleconferencing equipment itself and for the communications circuits associated with its use. New technology has significantly reduced these costs, making video teleconferencing a practical means of communication for the first time and creating a new way in which HEP personnel can work and interact electronically. The recent rapid evolution in video teleconferencing technology has been driven primarily by advances in microprocessor and DSP components. Advanced systems available today provide significant performance improvements in frames delivered per unit of time over systems designed just a few years ago. This improved performance and reduced costs for communications bandwidth are the most important factors creating the potential for widespread use of video teleconferencing within the HEP community. The HEP community has made extensive use of electronic communication in the form of computer networking for over a decade. Within the last year, a limited video teleconferencing capability was established in the form of a pilot project linking LNL, FNAL, and SSCL. The pilot project demonstrated that video teleconferencing can, in certain circumstances, be a viable alternative to travel. Due to the growing size and dispersion of experimental collaborations, video teleconferencing will almost certainly become a necessity in the conduct and management of large projects and programs in HEP.

  12. Adaptive MPEG-2 video data hiding scheme

    NASA Astrophysics Data System (ADS)

    Sarkar, Anindya; Madhow, Upamanyu; Chandrasekaran, Shivkumar; Manjunath, Bangalore S.

    2007-02-01

    We have investigated adaptive mechanisms for high-volume transform-domain data hiding in MPEG-2 video which can be tuned to sustain varying levels of compression attack. The data is hidden in the uncompressed domain by scalar quantization index modulation (QIM) on a selected set of low-frequency discrete cosine transform (DCT) coefficients. We propose an adaptive hiding scheme in which the embedding rate is varied according to the type of frame and the reference quantization parameter (decided according to the MPEG-2 rate control scheme) for that frame. For a 1.5 Mbps video at a frame rate of 25 frames/sec, we are able to embed almost 7500 bits/sec. The adaptive scheme also hides 20% more data and incurs significantly fewer frame errors (frames for which the embedded data is not fully recovered) than the non-adaptive scheme. Our embedding scheme incurs insertions and deletions at the decoder, which may cause de-synchronization and decoding failure. This problem is solved by the use of powerful turbo-like codes and erasures at the encoder. The channel capacity estimate gives an idea of the minimum code redundancy factor required for reliable decoding of the hidden data transmitted through the channel. To that end, we have modeled the MPEG-2 video channel using the transition probability matrices given by the data hiding procedure, from which we compute the (hiding-scheme-dependent) channel capacity.
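Scalar QIM embeds a bit by quantizing a coefficient onto one of two interleaved lattices, and detection picks the nearer lattice. A minimal sketch (the step size of 8.0 is illustrative, not the paper's setting):

```python
def qim_embed(coeff, bit, step=8.0):
    # Quantize the DCT coefficient onto the lattice for this bit:
    # bit 0 -> multiples of `step`; bit 1 -> multiples offset by step/2.
    offset = bit * step / 2.0
    return step * round((coeff - offset) / step) + offset

def qim_detect(coeff, step=8.0):
    # Minimum-distance decoding: choose the bit whose lattice is closer.
    d0 = abs(coeff - qim_embed(coeff, 0, step))
    d1 = abs(coeff - qim_embed(coeff, 1, step))
    return 0 if d0 <= d1 else 1
```

A larger step survives heavier requantization attacks at the cost of more visible distortion, which is exactly the trade-off the adaptive scheme tunes per frame.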

  13. Image and Video Compression with VLSI Neural Networks

    NASA Technical Reports Server (NTRS)

    Fang, W.; Sheu, B.

    1993-01-01

    An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.
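Whatever network learns the codebook, encoding in conventional vector quantization reduces to a nearest-codeword search; a minimal sketch with a toy codebook (all values illustrative):

```python
def vq_encode(block, codebook):
    # Return the index of the codeword closest to the block in squared
    # error; only this index is transmitted, which is the source of the
    # compression ratio.
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: sq_dist(block, codebook[i]))

def vq_decode(index, codebook):
    # Reconstruction is a simple table lookup.
    return codebook[index]
```

The VLSI neuroprocessor's advantage lies in performing this distance search in parallel across all codewords, rather than in changing the algorithm itself.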

  14. Picture data compression coder using subband/transform coding with a Lempel-Ziv-based coder

    NASA Technical Reports Server (NTRS)

    Glover, Daniel R. (Inventor)

    1995-01-01

    Digital data coders/decoders are used extensively in video transmission. A digitally encoded video signal is separated into subbands. Separating the video into subbands allows transmission at low data rates. Once the data is separated into these subbands it can be coded and then decoded by statistical coders such as the Lempel-Ziv based coder.
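A minimal sketch of the pipeline, using a one-level Haar split as the subband stage and zlib's deflate as a stand-in Lempel-Ziv-style coder (both specific choices are illustrative, not prescribed by the patent):

```python
import zlib

def haar_split(samples):
    # One-level 1-D Haar analysis: a low band of pairwise averages and a
    # high band of pairwise differences. The near-zero high band is highly
    # compressible by a statistical coder.
    low = [(samples[i] + samples[i + 1]) // 2 for i in range(0, len(samples), 2)]
    high = [samples[i] - samples[i + 1] for i in range(0, len(samples), 2)]
    return low, high

def lz_code(band):
    # Entropy-code one subband with a Lempel-Ziv-style (deflate) coder.
    return zlib.compress(bytes(b & 0xFF for b in band))
```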

  15. Holistic video detection

    NASA Astrophysics Data System (ADS)

    Gong, Shaogang

    2007-10-01

    Large numbers of CCTV cameras collect colossal amounts of video data about people and their behaviour. However, this overwhelming amount of data also causes information overflow if its content is not analysed in a wider context to provide selective focus and automated alert triggering. To date, truly semantics-based video analytic systems do not exist. There is an urgent need for the development of automated systems to monitor holistically the behaviours of people and vehicles and the whereabouts of objects of interest in public spaces. In this work, we highlight the challenges and recent progress towards building computer vision systems for holistic video detection in a distributed network of multiple cameras, based on object localisation, categorisation, and tagging from different views in highly cluttered scenes.

  16. Brains on video games.

    PubMed

    Bavelier, Daphne; Green, C Shawn; Han, Doug Hyun; Renshaw, Perry F; Merzenich, Michael M; Gentile, Douglas A

    2011-12-01

    The popular press is replete with stories about the effects of video and computer games on the brain. Sensationalist headlines claiming that video games 'damage the brain' or 'boost brain power' do not do justice to the complexities and limitations of the studies involved, and create a confusing overall picture about the effects of gaming on the brain. Here, six experts in the field shed light on our current understanding of the positive and negative ways in which playing video games can affect cognition and behaviour, and explain how this knowledge can be harnessed for educational and rehabilitation purposes. As research in this area is still in its early days, the contributors of this Viewpoint also discuss several issues and challenges that should be addressed to move the field forward. PMID:22095065

  18. NASA Video Catalog

    NASA Technical Reports Server (NTRS)

    2006-01-01

    This issue of the NASA Video Catalog cites video productions listed in the NASA STI database. The videos listed have been developed by the NASA centers, covering Shuttle mission press conferences; fly-bys of planets; aircraft design, testing and performance; environmental pollution; lunar and planetary exploration; and many other categories related to manned and unmanned space exploration. Each entry in the publication consists of a standard bibliographic citation accompanied by an abstract. The Table of Contents shows how the entries are arranged by divisions and categories according to the NASA Scope and Subject Category Guide. For users with specific information, a Title Index is available. A Subject Term Index, based on the NASA Thesaurus, is also included. Guidelines for usage of NASA audio/visual material, ordering information, and order forms are also available.

  19. A neuromorphic system for video object recognition

    PubMed Central

    Khosla, Deepak; Chen, Yang; Kim, Kyungnam

    2014-01-01

    Automated video object recognition is a topic of emerging importance in both defense and civilian applications. This work describes an accurate, low-power neuromorphic architecture and system for real-time automated video object recognition. Our system, Neuromorphic Visual Understanding of Scenes (NEOVUS), is inspired by computational neuroscience models of feed-forward object detection and classification pipelines for processing visual data. The NEOVUS architecture is inspired by the ventral (what) and dorsal (where) streams of the mammalian visual pathway, and integrates retinal processing, object detection based on form and motion modeling, and object classification based on convolutional neural networks. The object recognition performance and energy use of the NEOVUS were evaluated by the Defense Advanced Research Projects Agency (DARPA) under the Neovision2 program using three urban-area video datasets collected from a mix of stationary and moving platforms. These datasets are challenging and include a large number of objects of different types in cluttered scenes, with varying illumination and occlusion conditions. In a systematic evaluation of five different teams by DARPA on these datasets, the NEOVUS demonstrated the best performance, with high object recognition accuracy and the lowest energy consumption. Its energy use was three orders of magnitude lower than that of two independent state-of-the-art baseline computer vision systems. The dynamic power requirement for the complete system, mapped to commercial off-the-shelf (COTS) hardware that includes a 5.6 Megapixel color camera processed by object detection and classification algorithms at 30 frames per second, was measured at 21.7 Watts (W), for an effective energy consumption of 5.45 nanoJoules (nJ) per bit of incoming video. These unprecedented results show that the NEOVUS has the potential to revolutionize automated video object recognition toward enabling practical low-power and mobile video processing
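The quoted 5.45 nJ/bit figure can be roughly sanity-checked from the power and pixel-rate numbers, assuming 24-bit colour pixels (the bit depth is our assumption; the abstract does not state it):

```python
# Rough check of the energy-per-bit figure from the reported numbers.
pixels_per_frame = 5.6e6     # 5.6 Megapixel camera
bits_per_pixel = 24          # assumption: 8-bit RGB
frames_per_second = 30
power_watts = 21.7

bits_per_second = pixels_per_frame * bits_per_pixel * frames_per_second
energy_per_bit_nj = power_watts / bits_per_second * 1e9  # about 5.4 nJ/bit
```

This lands close to the reported 5.45 nJ/bit, so the figure is consistent with raw camera throughput at 24 bits per pixel.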

  20. Informative-frame filtering in endoscopy videos

    NASA Astrophysics Data System (ADS)

    An, Yong Hwan; Hwang, Sae; Oh, JungHwan; Lee, JeongKyu; Tavanapong, Wallapak; de Groen, Piet C.; Wong, Johnny

    2005-04-01

    Advances in video technology are being incorporated into today's healthcare practice. For example, colonoscopy is an important screening tool for colorectal cancer. Colonoscopy allows for the inspection of the entire colon and provides the ability to perform a number of therapeutic operations during a single procedure. During a colonoscopic procedure, a tiny video camera at the tip of the endoscope generates a video signal of the internal mucosa of the colon. The video data are displayed on a monitor for real-time analysis by the endoscopist. Other endoscopic procedures include upper gastrointestinal endoscopy, enteroscopy, bronchoscopy, cystoscopy, and laparoscopy. However, a significant number of out-of-focus frames are included in these videos, since current endoscopes are equipped with a single, wide-angle lens that cannot be focused. The out-of-focus frames do not hold any useful information, and to reduce the burden on further processes such as computer-aided image processing or a human expert's examination, these frames need to be removed. We call an out-of-focus frame a non-informative frame and an in-focus frame an informative frame. We propose a new technique to classify video frames into these two classes using a combination of the Discrete Fourier Transform (DFT), texture analysis, and K-means clustering. The proposed technique can evaluate the frames without any reference image and does not need any predefined threshold value. Our experimental studies indicate that it achieves over 96% on four different performance metrics (precision, sensitivity, specificity, and accuracy).
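The classification idea can be sketched on a 1-D signal: score each frame by the high-frequency share of its DFT energy (blurred, non-informative frames score low), then let 2-means clustering separate the two classes without a hand-tuned threshold. The feature below is a simplification of the paper's combined DFT-plus-texture features:

```python
import cmath

def highfreq_ratio(signal):
    # Fraction of non-DC DFT magnitude in the middle (highest) frequencies;
    # out-of-focus frames lose this energy. Naive O(n^2) DFT for clarity.
    n = len(signal)
    mag = [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                   for t in range(n))) for k in range(n)]
    total = sum(mag[1:]) or 1.0
    return sum(mag[n // 4: n - n // 4]) / total

def two_means(scores, iters=20):
    # 1-D k-means with k=2: label each frame 0 (low-score cluster,
    # non-informative) or 1 (high-score cluster, informative) with no
    # predefined threshold.
    c0, c1 = min(scores), max(scores)
    for _ in range(iters):
        g0 = [s for s in scores if abs(s - c0) <= abs(s - c1)]
        g1 = [s for s in scores if abs(s - c0) > abs(s - c1)]
        c0 = sum(g0) / len(g0) if g0 else c0
        c1 = sum(g1) / len(g1) if g1 else c1
    return [0 if abs(s - c0) <= abs(s - c1) else 1 for s in scores]
```

Because the cluster centers adapt to each video, the same code separates bright, well-focused colon footage and dim bronchoscopy footage without retuning.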