Science.gov

Sample records for 3d video coding

  1. Object-adaptive depth compensated inter prediction for depth video coding in 3D video system

    NASA Astrophysics Data System (ADS)

    Kang, Min-Koo; Lee, Jaejoon; Lim, Ilsoon; Ho, Yo-Sung

    2011-01-01

    Nowadays, 3D video systems using the MVD (multi-view video plus depth) data format are being actively studied. The format has many advantages with respect to virtual view synthesis, such as enabling auto-stereoscopic functionality, but compression of the huge input data remains a problem. Therefore, efficient 3D data compression is extremely important in such systems, and the problems of low temporal consistency and low inter-view correlation should be resolved for efficient depth video coding. In this paper, we propose an object-adaptive depth-compensated inter prediction method to resolve these problems, in which the object-adaptive mean-depth difference between the current block, to be coded, and its reference block is compensated during inter prediction. In addition, unique properties of depth video are exploited to reduce the side information required to signal the decoder to conduct the same process. To evaluate the coding performance, we implemented the proposed method in the MVC (multiview video coding) reference software, JMVC 8.2. Experimental results demonstrate that our proposed method is especially efficient for depth videos estimated by DERS (depth estimation reference software) discussed in the MPEG 3DV coding group. The coding gain was up to 11.69% bit-saving, and the gain increased further when evaluated on synthesized views at virtual viewpoints.
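
    The core idea above is to compensate the mean-depth difference between the current block and its reference block before forming the inter-prediction residual. The following is a minimal numpy sketch of that idea only; the block size, the simple whole-block mean offset, and the function names are illustrative assumptions, not the JMVC 8.2 implementation.

```python
import numpy as np

def mean_depth_compensated_residual(cur_block, ref_block):
    """Illustrative sketch: subtract the mean-depth offset between the
    current depth block and its reference block before computing the
    inter-prediction residual (simplified view of depth compensation)."""
    offset = cur_block.mean() - ref_block.mean()           # mean-depth difference
    prediction = ref_block + offset                        # depth-compensated prediction
    residual = cur_block.astype(np.int16) - np.round(prediction).astype(np.int16)
    return residual, offset

# Toy usage with random 16x16 depth blocks
rng = np.random.default_rng(0)
cur = rng.integers(90, 110, (16, 16)).astype(np.uint8)
ref = rng.integers(80, 100, (16, 16)).astype(np.uint8)
res, off = mean_depth_compensated_residual(cur, ref)
print(off, np.abs(res).mean())
```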

  2. 3D video coding: an overview of present and upcoming standards

    NASA Astrophysics Data System (ADS)

    Merkle, Philipp; Müller, Karsten; Wiegand, Thomas

    2010-07-01

    An overview of existing and upcoming 3D video coding standards is given. Various 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.

  3. Depth-based coding of MVD data for 3D video extension of H.264/AVC

    NASA Astrophysics Data System (ADS)

    Rusanovskyy, Dmytro; Hannuksela, Miska M.; Su, Wenyi

    2013-06-01

    This paper describes a novel approach that uses depth information for advanced coding of the associated video data in Multiview Video plus Depth (MVD)-based 3D video systems. As a possible implementation of this concept, we describe two coding tools that were developed for an H.264/AVC-based 3D video codec in response to the Moving Picture Experts Group (MPEG) Call for Proposals (CfP). These tools are Depth-based Motion Vector Prediction (DMVP) and Backward View Synthesis Prediction (BVSP). Simulation results conducted under the JCT-3V/MPEG 3DV Common Test Conditions show that the proposed tools reduce the bit rate of the coded video data by 15% in average delta bit rate, which results in 13% total bit rate savings for the MVD data over state-of-the-art MVC+D coding. Moreover, the concept of depth-based video coding presented in this paper has been further developed by MPEG 3DV and JCT-3V, resulting in even higher compression efficiency of about 20% total delta bit rate reduction for coded MVD data over the reference MVC+D coding. Considering these significant gains, the proposed coding approach can be beneficial for the development of new 3D video coding standards.

  4. Impact of packet losses in scalable 3D holoscopic video coding

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2014-05-01

    Holoscopic imaging has become a prospective glasses-free 3D technology for providing more natural 3D viewing experiences to the end user. Additionally, holoscopic systems allow new post-production degrees of freedom, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display-scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments on the user's quality perception. Therefore, it is essential to deeply understand the impact of packet losses on decoded video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a previously proposed three-layer display-scalable 3D holoscopic video coding architecture, where each layer represents a different level of display scalability (i.e., L0 - 2D, L1 - stereo or multiview, and L2 - full 3D holoscopic). For this, a simple error concealment algorithm is used, which exploits the inter-layer redundancy between multiview and 3D holoscopic content and the inherent correlation of the 3D holoscopic content to estimate lost data. Furthermore, a study of the influence of the 2D view generation parameters used in the lower layers on the performance of the error concealment algorithm is also presented.

  5. The future of 3D and video coding in mobile and the internet

    NASA Astrophysics Data System (ADS)

    Bivolarski, Lazar

    2013-09-01

    The success of the Internet has already changed our social and economic world and is still revolutionizing information exchange. The exponential increase in the amount and types of data exchanged on the Internet represents a significant challenge for the design of future architectures and solutions. This paper reviews the current status and trends in the design of solutions and research activities for the future Internet from the point of view of managing the growth of bandwidth requirements and the complexity of the multimedia being created and shared. It outlines the challenges facing video coding and approaches to the design of standardized media formats and protocols, while considering the expected convergence of multimedia formats and exchange interfaces. The rapid growth of connected mobile devices adds to the current and future challenges, in combination with the arrival, expected in the near future, of a multitude of connected devices. The new Internet technologies connecting the Internet of Things with wireless visual sensor networks and 3D virtual worlds require conceptually new approaches to media content handling, from acquisition to presentation, in the 3D Media Internet. Accounting for the properties of the entire transmission system and enabling real-time adaptation to context and content throughout the media processing path will be paramount in enabling new media architectures as well as new applications and services. Common video coding formats will need to be conceptually redesigned to allow for the implementation of the necessary 3D Media Internet features.

  6. Depth map coding using residual segmentation for 3D video system

    NASA Astrophysics Data System (ADS)

    Lee, Cheon; Ho, Yo-Sung

    2013-06-01

    Advanced 3D video systems employ multi-view video-plus-depth data to support free-viewpoint navigation and comfortable 3D viewing; thus, efficient depth map coding becomes an important issue. Unlike the color image, the depth map has the property that depth values in the inner part of an object are monotonic, while those at object boundaries change abruptly. Therefore, residual data generated by prediction errors around object boundaries consume many bits in depth map coding. Representing them with segment data can be better than using the conventional transform in boundary regions. In this paper, we propose an efficient depth map coding method using residual segmentation instead of transformation. The proposed residual segmentation divides the residual data into two regions described by a segment map and two mean values. If the encoder selects the proposed method in terms of rate, two quantized mean values and an index of the segment map are transmitted. Simulation results show significant gains of up to 10 dB compared to state-of-the-art coders such as JPEG2000 and H.264/AVC.
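
    As a rough illustration of the residual-segmentation idea (split the residual block into two regions described by a binary segment map and two mean values), here is a hedged numpy sketch; the segmentation rule (thresholding at the residual mean) and the quantization step are assumptions for illustration, not the authors' encoder.

```python
import numpy as np

def segment_residual(residual, q_step=4):
    """Split a residual block into two regions and represent it by a
    binary segment map plus two quantized region means (illustrative
    sketch of the residual-segmentation idea)."""
    seg_map = residual >= residual.mean()             # binary segment map
    mean_hi = residual[seg_map].mean()
    mean_lo = residual[~seg_map].mean()
    q_hi = int(np.round(mean_hi / q_step)) * q_step   # quantized means to transmit
    q_lo = int(np.round(mean_lo / q_step)) * q_step
    return seg_map, q_hi, q_lo

def reconstruct(seg_map, q_hi, q_lo):
    """Piecewise-constant reconstruction from the segment map and means."""
    return np.where(seg_map, q_hi, q_lo)

residual = np.zeros((8, 8), dtype=np.int16)
residual[:, 4:] = 20                                  # sharp depth edge in the residual
seg, hi, lo = segment_residual(residual)
print(np.abs(residual - reconstruct(seg, hi, lo)).max())
```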

  7. Depth-based representations: Which coding format for 3D video broadcast applications?

    NASA Astrophysics Data System (ADS)

    Kerbiriou, Paul; Boisson, Guillaume; Sidibé, Korian; Huynh-Thu, Quan

    2011-03-01

    3D Video (3DV) delivery standardization is currently ongoing in MPEG. It is now time to choose the 3DV data representation format. What is at stake is the final quality for end-users, i.e., the visual quality of synthesized views. We focus on two major rival depth-based formats, namely Multiview Video plus Depth (MVD) and Layered Depth Video (LDV). MVD can be considered the basic depth-based 3DV format, generated by disparity estimation from multiview sequences. LDV is more sophisticated, compacting multiview data into color and depth occlusion layers. We compare final view quality using MVD2 and LDV (both containing two color channels plus two depth components) coded with MVC at various compression ratios. Depending on the format, the appropriate synthesis process is performed to generate the final stereoscopic pairs. Comparisons are provided in terms of SSIM and PSNR with respect to original views and to synthesized references (obtained without compression). Ultimately, LDV significantly outperforms MVD when using state-of-the-art reference synthesis algorithms. Managing occlusions before encoding is advantageous compared with handling redundant signals at the decoder side. Besides, we observe that depth quantization does not induce much loss in final view quality until a significant degradation level is reached. Improvements in disparity estimation and view synthesis algorithms are therefore still expected during the remaining standardization steps.

  8. A new structure of 3D dual-tree discrete wavelet transforms and applications to video denoising and coding

    NASA Astrophysics Data System (ADS)

    Shi, Fei; Wang, Beibei; Selesnick, Ivan W.; Wang, Yao

    2006-01-01

    This paper introduces an anisotropic decomposition structure of the recently introduced 3-D dual-tree discrete wavelet transform (DDWT), and explores its applications to video denoising and coding. The 3-D DDWT is an attractive video representation because it isolates motion along different directions in separate subbands, and thus leads to sparse video decompositions. Our previous investigation shows that the 3-D DDWT, compared to the standard discrete wavelet transform (DWT), complies better with statistical models based on sparsity assumptions, and gives better visual and numerical results when used in statistical denoising algorithms. Our research on video compression also shows that, even with 4:1 redundancy, the 3-D DDWT needs fewer coefficients to achieve the same coding quality (in PSNR) when applying the iterative projection-based noise shaping scheme proposed by Kingsbury. The proposed anisotropic DDWT extends the superiority of the isotropic DDWT with more directional subbands without adding to the redundancy. Unlike the original 3-D DDWT, which applies dyadic decomposition along all three directions and produces isotropic frequency spacing, it has a non-uniform tiling of the frequency space. By applying this structure, we can improve the denoising results, and the number of significant coefficients can be reduced further, which is beneficial for video coding.

  9. Using self-similarity compensation for improving inter-layer prediction in scalable 3D holoscopic video coding

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2013-09-01

    Holoscopic imaging, also known as integral imaging, has recently been attracting the attention of the research community as a promising glasses-free 3D technology due to its ability to create a more realistic depth illusion than current stereoscopic or multiview solutions. However, in order to gradually introduce this technology into the consumer market and to efficiently deliver 3D holoscopic content to end-users, backward compatibility with legacy displays is essential. Consequently, to enable 3D holoscopic content to be delivered and presented on legacy displays, a display-scalable 3D holoscopic coding approach is required. Hence, this paper presents a display-scalable architecture for 3D holoscopic video coding with a three-layer approach, where each layer represents a different level of display scalability: Layer 0 - a single 2D view; Layer 1 - 3D stereo or multiview; and Layer 2 - the full 3D holoscopic content. In this context, a prediction method is proposed which combines inter-layer prediction, aiming to exploit the existing redundancy between the multiview and the 3D holoscopic layers, with self-similarity compensated prediction (previously proposed by the authors for non-scalable 3D holoscopic video coding), aiming to exploit the spatial redundancy inherent in the 3D holoscopic enhancement layer. Experimental results show that the proposed combined prediction can significantly improve the rate-distortion performance of scalable 3D holoscopic video coding with respect to the authors' previously proposed solutions, where only inter-layer or only self-similarity prediction is used.

  10. A parallel 3-D discrete wavelet transform architecture using pipelined lifting scheme approach for video coding

    NASA Astrophysics Data System (ADS)

    Hegde, Ganapathi; Vaya, Pukhraj

    2013-10-01

    This article presents a parallel architecture for the 3-D discrete wavelet transform (3-D DWT). The proposed design is based on the 1-D pipelined lifting scheme. The architecture is fully scalable beyond the present coherent Daubechies (9, 7) filter bank. This 3-D DWT architecture has advantages such as no group-of-pictures restriction and reduced memory referencing. It offers low power consumption, low latency, and high throughput. The computing technique is based on the concept that the lifting scheme minimises the storage requirement. The application-specific integrated circuit implementation of the proposed architecture was done by synthesising it using a 65 nm Taiwan Semiconductor Manufacturing Company standard cell library. It offers a speed of 486 MHz with a power consumption of 2.56 mW. This architecture is suitable for real-time video compression even with large frame dimensions.

  11. Topology dictionary for 3D video understanding.

    PubMed

    Tung, Tony; Matsuyama, Takashi

    2012-08-01

    This paper presents a novel approach that achieves 3D video understanding. 3D video consists of a stream of 3D models of subjects in motion. The acquisition of long sequences requires large storage space (2 GB for 1 min). Moreover, it is tedious to browse data sets and extract meaningful information. We propose the topology dictionary to encode and describe 3D video content. The model consists of a topology-based shape descriptor dictionary which can be generated from either extracted patterns or training sequences. The model relies on 1) topology description and classification using Reeb graphs, and 2) a Markov motion graph to represent topology change states. We show that the use of Reeb graphs as the high-level topology descriptor is relevant. It allows the dictionary to automatically model complex sequences, whereas other strategies would require prior knowledge on the shape and topology of the captured subjects. Our approach serves to encode 3D video sequences, and can be applied for content-based description and summarization of 3D video sequences. Furthermore, topology class labeling during a learning process enables the system to perform content-based event recognition. Experiments were carried out on various 3D videos. We showcase an application for 3D video progressive summarization using the topology dictionary.

  12. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.

  13. Parallel CARLOS-3D code development

    SciTech Connect

    Putnam, J.M.; Kotulski, J.D.

    1996-02-01

    CARLOS-3D is a three-dimensional scattering code which was developed under the sponsorship of the Electromagnetic Code Consortium, and is currently used by over 80 aerospace companies and government agencies. The code has been extensively validated and runs on both serial workstations and parallel supercomputers such as the Intel Paragon. CARLOS-3D is a three-dimensional surface integral equation scattering code based on a Galerkin method-of-moments formulation employing Rao-Wilton-Glisson rooftop basis functions for triangular faceted surfaces. Fully arbitrary 3D geometries composed of multiple conducting and homogeneous bulk dielectric materials can be modeled. This presentation describes some of the extensions to the CARLOS-3D code, and how the operator structure of the code facilitated these improvements. Body-of-revolution (BOR) and two-dimensional geometries were incorporated by simply including new input routines and the appropriate Galerkin matrix operator routines. Some additional modifications were required in the combined field integral equation matrix generation routine due to the symmetric nature of the BOR and 2D operators. Quadrilateral patched surfaces with linear rooftop basis functions were also implemented in the same manner. Quadrilateral facets and triangular facets can be used in combination to more efficiently model geometries with both large smooth surfaces and surfaces with fine detail such as gaps and cracks. Since the parallel implementation in CARLOS-3D is at a high level, these changes were independent of the computer platform being used. This approach minimizes code maintenance, while providing capabilities with little additional effort. Results are presented showing the performance and accuracy of the code for some large scattering problems. Comparisons between triangular faceted and quadrilateral faceted geometry representations will be shown for some complex scatterers.

  14. Adapting hierarchical bidirectional inter prediction on a GPU-based platform for 2D and 3D H.264 video coding

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sánchez, Rafael; Martínez, José Luis; Cock, Jan De; Fernández-Escribano, Gerardo; Pieters, Bart; Sánchez, José L.; Claver, José M.; de Walle, Rik Van

    2013-12-01

    The H.264/AVC video coding standard introduces some improved tools in order to increase compression efficiency. Moreover, the multi-view extension of H.264/AVC, called H.264/MVC, adopts many of them. Among the new features, variable block-size motion estimation is one which contributes to high coding efficiency. Furthermore, it defines a different prediction structure that includes hierarchical bidirectional pictures, outperforming traditional Group of Pictures patterns in both scenarios: single-view and multi-view. However, these video coding techniques have high computational complexity. Several techniques have been proposed in the literature over the last few years which are aimed at accelerating the inter prediction process, but there are no works focusing on bidirectional prediction or hierarchical prediction. In this article, with the emergence of many-core processors or accelerators, a step forward is taken towards an implementation of an H.264/AVC and H.264/MVC inter prediction algorithm on a graphics processing unit. The results show a negligible rate distortion drop with a time reduction of up to 98% for the complete H.264/AVC encoder.

  15. A fast mode decision algorithm for multiview auto-stereoscopic 3D video coding based on mode and disparity statistic analysis

    NASA Astrophysics Data System (ADS)

    Ding, Cong; Sang, Xinzhu; Zhao, Tianqi; Yan, Binbin; Leng, Junmin; Yuan, Jinhui; Zhang, Ying

    2012-11-01

    Multiview video coding (MVC) is essential for applications of auto-stereoscopic three-dimensional displays. However, the computational complexity of MVC encoders is extremely high, so fast algorithms are very desirable for practical applications of MVC. Based on joint early termination, selective inter-view prediction, and optimization of the Inter8×8 mode decision process by comparison, a fast macroblock (MB) mode selection algorithm is presented. Compared with full mode decision in MVC, the experimental results show that the proposed algorithm reduces encoding time by 78.13% on average and by up to 90.21%, with a small increase in bit rate and a slight loss in PSNR.

  16. Alignment of continuous video onto 3D point clouds.

    PubMed

    Zhao, Wenyi; Nister, David; Hsu, Steve

    2005-08-01

    We propose a general framework for aligning continuous (oblique) video onto 3D sensor data. We align a point cloud computed from the video onto the point cloud directly obtained from a 3D sensor. This is in contrast to existing techniques where the 2D images are aligned to a 3D model derived from the 3D sensor data. Using point clouds enables the alignment for scenes full of objects that are difficult to model; for example, trees. To compute 3D point clouds from video, motion stereo is used along with a state-of-the-art algorithm for camera pose estimation. Our experiments with real data demonstrate the advantages of the proposed registration algorithm for texturing models in large-scale semiurban environments. The capability to align video before a 3D model is built from the 3D sensor data offers new practical opportunities for 3D modeling. We introduce a novel modeling-through-registration approach that fuses 3D information from both the 3D sensor and the video. Initial experiments with real data illustrate the potential of the proposed approach.
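
    The abstract above describes aligning a video-derived point cloud to a sensor point cloud but does not spell out the registration algorithm. The sketch below shows one generic rigid-alignment step (nearest-neighbour correspondences followed by an SVD-based least-squares rotation and translation), offered only as an illustration of point-cloud registration under that assumption, not as the authors' method.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_align_step(src, dst):
    """One generic registration step: match each source point to its
    nearest destination point, then solve for the least-squares rigid
    transform (Kabsch/SVD). Illustrative only."""
    tree = cKDTree(dst)
    _, idx = tree.query(src)                 # nearest-neighbour correspondences
    matched = dst[idx]
    src_c = src - src.mean(axis=0)
    dst_c = matched - matched.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = matched.mean(axis=0) - R @ src.mean(axis=0)
    return src @ R.T + t, R, t

# Toy usage: recover a small known rotation/translation
rng = np.random.default_rng(1)
dst = rng.normal(size=(500, 3))
angle = 0.05
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.1, 0.0, 0.0])
aligned, _, _ = rigid_align_step(src, dst)
print(np.abs(aligned - dst).mean())          # residual after one step
```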

  17. Development of 3D video and 3D data services for T-DMB

    NASA Astrophysics Data System (ADS)

    Yun, Kugjin; Lee, Hyun; Hur, Namho; Kim, Jinwoong

    2008-02-01

    In this paper, we present the motivation, system concept, and implementation details of stereoscopic 3D visual services on T-DMB. We have developed two types of 3D visual service: one is the '3D video service', which provides a sense of 3D depth for a video program by sending left- and right-view video streams, and the other is the '3D data service', which provides presentation of 3D objects overlaid on top of a 2D video program. We have developed several highly efficient and sophisticated transmission schemes for the delivery of 3D visual data in order to meet system requirements such as (1) minimization of the bitrate overhead to comply with the strict constraint of the T-DMB channel bandwidth; (2) backward and forward compatibility with existing T-DMB; and (3) maximization of the eye-catching effect of the 3D visual representation while reducing eye fatigue. We found that, in contrast to the conventional way of providing a stereo version of a program as a whole, the proposed scheme can lead to a variety of efficient and effective 3D visual services which can be adapted to many business models.

  18. Visual Semantic Based 3D Video Retrieval System Using HDFS

    PubMed Central

    Kumar, C.Ranjith; Suguna, S.

    2016-01-01

    This paper presents a new framework for visual-semantic-based 3D video search and retrieval applications. Recent 3D retrieval applications focus on shape analysis tasks such as object matching, classification, and retrieval, and are not devoted entirely to video retrieval. In this context, we explore the concept of 3D content-based video retrieval (3D-CBVR) for the first time. For this purpose, we combine a bag-of-visual-words (BOVW) approach with MapReduce in a 3D framework. Instead of conventional shape-based local descriptors, we combine shape, color, and texture for feature extraction, using geometric and topological features for shape and a 3D co-occurrence matrix for color and texture. After extracting the local descriptors, the Threshold-Based Predictive Clustering Tree (TB-PCT) algorithm is used to generate the visual codebook, and a histogram is produced. Matching is then performed using a soft weighting scheme with the L2 distance function. As a final step, the retrieved results are ranked according to their index value and returned to the user as feedback. To handle the prodigious amount of data and enable efficient retrieval, we have incorporated HDFS into our system. Using a 3D video dataset, we evaluate the performance of the proposed system and show that the proposed work gives accurate results while also reducing time complexity. PMID:28003793

  20. Stereoscopic 3D video games and their effects on engagement

    NASA Astrophysics Data System (ADS)

    Hogue, Andrew; Kapralos, Bill; Zerebecki, Chris; Tawadrous, Mina; Stanfield, Brodie; Hogue, Urszula

    2012-03-01

    With television manufacturers developing low-cost stereoscopic 3D displays, a large number of consumers will undoubtedly have access to 3D-capable televisions at home. The availability of 3D technology places the onus on content creators to develop interesting and engaging content. While the technology of stereoscopic displays and content generation is well understood, there are many questions yet to be answered surrounding its effects on the viewer. The effects of stereoscopic display on passive viewers of film are known; however, video games are fundamentally different, since the viewer/player is actively (rather than passively) engaged with the content. Questions of how stereoscopic viewing affects interaction mechanics have previously been studied in the context of player performance, but very few studies have attempted to quantify the player experience to determine whether stereoscopic 3D has a positive or negative influence on overall engagement. In this paper we present a preliminary study of the effects stereoscopic 3D has on player engagement in video games. Participants played a video game in two conditions, traditional 2D and stereoscopic 3D, and their engagement was quantified using a previously validated self-reporting tool. The results suggest that S3D has a positive effect on immersion, presence, flow, and absorption.

  1. Geographic Video 3d Data Model And Retrieval

    NASA Astrophysics Data System (ADS)

    Han, Z.; Cui, C.; Kong, Y.; Wu, H.

    2014-04-01

    Geographic video includes both spatial and temporal geographic features acquired through ground-based or non-ground-based cameras. With the popularity of video capture devices such as smartphones, the volume of user-generated geographic video clips has grown significantly, and this growth is quickly accelerating. Such a massive and increasing volume poses a major challenge to efficient video management and query. Most of today's video management and query techniques are based on signal-level content extraction and are not able to fully utilize the geographic information of the videos. This paper introduces a geographic video 3D data model based on spatial information. The main idea of the model is to utilize the location, trajectory, and azimuth information acquired by sensors such as GPS receivers and 3D electronic compasses in conjunction with the video contents. The raw spatial information is synthesized into point, line, polygon, and solid geometries according to camcorder parameters such as focal length and angle of view. Based on the video segment and video frame, we defined three categories of geometry objects using the geometry model of the OGC Simple Features Specification for SQL. Video can be queried by computing the spatial relations between query objects and these geometry objects, such as VFLocation, VSTrajectory, VSFOView, and VFFovCone. We designed the query methods using the structured query language (SQL) in detail. The experiments indicate that the model is a multi-objective, integrated, loosely coupled, flexible, and extensible data model for the management of geographic stereo video.

  2. RHOCUBE: 3D density distributions modeling code

    NASA Astrophysics Data System (ADS)

    Nikutta, Robert; Agliozzo, Claudia

    2016-11-01

    RHOCUBE models 3D density distributions on a discrete Cartesian grid and their integrated 2D maps. It can be used for a range of applications, including modeling the electron number density in LBV shells and computing the emission measure. The RHOCUBE Python package provides several 3D density distributions, including a powerlaw shell, truncated Gaussian shell, constant-density torus, dual cones, and spiralling helical tubes, and can accept additional distributions. RHOCUBE provides convenient methods for shifts and rotations in 3D, and if necessary, an arbitrary number of density distributions can be combined into the same model cube and the integration ∫ dz performed through the joint density field.
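
    To make the idea concrete, here is a small numpy sketch of the same concept: evaluate an analytic 3-D density (a truncated Gaussian shell is used as the example) on a Cartesian grid and integrate it along z to obtain a 2-D map. It is an illustration of the approach only, not the RHOCUBE API, and the grid size and shell parameters are arbitrary assumptions.

```python
import numpy as np

# Build a discrete Cartesian grid
n = 65
x = np.linspace(-2.0, 2.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2)

# Truncated Gaussian shell: peak radius r0, width sigma, truncated at r_max
r0, sigma, r_max = 1.0, 0.15, 1.5
rho = np.exp(-0.5 * ((r - r0) / sigma) ** 2)
rho[r > r_max] = 0.0

# Integrated 2-D map: the integral of rho dz along the grid's z axis
dz = x[1] - x[0]
column_map = rho.sum(axis=2) * dz
print(column_map.shape, column_map.max())
```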

  3. Edge-based intramode selection for depth-map coding in 3D-HEVC.

    PubMed

    Park, Chun-Su

    2015-01-01

    The 3D video extension of High Efficiency Video Coding (3D-HEVC) is the state-of-the-art video coding standard for the compression of the multiview video plus depth format. In the 3D-HEVC design, new depth-modeling modes (DMMs) are utilized together with the existing intraprediction modes for depth intracoding. The DMMs can provide more accurate prediction signals and thereby achieve better compression efficiency. However, testing the DMMs in the intramode decision process causes a drastic increase in the computational complexity. In this paper, we propose a fast mode decision algorithm for depth intracoding. The proposed algorithm first performs a simple edge classification in the Hadamard transform domain. Then, based on the edge classification results, the proposed algorithm selectively omits unnecessary DMMs in the mode decision process. Experimental results demonstrate that the proposed algorithm speeds up the mode decision process by up to 37.65% with negligible loss of coding efficiency.
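
    The abstract states that a simple edge classification is performed in the Hadamard transform domain before deciding whether the DMMs need to be tested. The sketch below illustrates one plausible way to do such a classification (comparing horizontal- and vertical-frequency energy in the 2-D Hadamard spectrum of a depth block); the energy measures and thresholds are assumptions for illustration, not the 3D-HEVC reference implementation.

```python
import numpy as np
from scipy.linalg import hadamard

def classify_edge(block, thresh=4.0):
    """Classify a square depth block as 'vertical', 'horizontal' or
    'smooth' from its 2-D Hadamard spectrum (illustrative sketch)."""
    n = block.shape[0]
    H = hadamard(n)
    spec = H @ (block - block.mean()) @ H.T     # 2-D Hadamard transform
    col_var = np.abs(spec[0, 1:]).sum()         # variation across columns -> vertical edge
    row_var = np.abs(spec[1:, 0]).sum()         # variation across rows -> horizontal edge
    if max(col_var, row_var) < thresh * n:
        return "smooth"                         # no strong edge: DMMs could be skipped
    return "vertical" if col_var > row_var else "horizontal"

block = np.zeros((8, 8))
block[:, 4:] = 100.0                            # sharp vertical depth edge
print(classify_edge(block))
```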

  4. Use scenarios: mobile 3D television and video

    NASA Astrophysics Data System (ADS)

    Strohmeier, Dominik; Weitzel, Mandy; Jumisko-Pyykkö, Satu

    2009-02-01

    The focus of 3D television and video has been on technical development, while hardly any attention has been paid to user expectations and needs for related applications. The objective of this study is to examine user requirements for mobile 3D television and video in depth. We conducted two qualitative studies, focus groups and probe studies, to improve the understanding of the users' perspective. Eight focus groups were carried out with altogether 46 participants, focusing on use scenario development. Data collection for the probe study was done over a period of four weeks in the field with nine participants to reveal intrinsic user needs and expectations. Both studies were conducted and analyzed independently so that they did not influence each other. The results of both studies provide novel aspects of users, system and content, and context of use. In the paper, we present personas as first archetype users of mobile 3D television and video. Putting these personas into context, we summarize the results of our studies and previous related work in the form of use scenarios to guide the user-centered development of 3D television and video.

  5. Real-time 3D video compression for tele-immersive environments

    NASA Astrophysics Data System (ADS)

    Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William

    2006-01-01

    Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore a different region of the 3D compression space, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
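
    The first scheme above (color reduction followed by zlib over the color and depth data) can be sketched in a few lines. The 5-6-5 bit color quantization, the frame sizes, and the synthetic test frame below are illustrative assumptions rather than the TEEVE implementation.

```python
import zlib
import numpy as np

def compress_frame(color, depth):
    """Sketch of scheme 1: reduce color precision, then zlib-compress the
    reduced color and the depth map together (illustrative, not TEEVE)."""
    r = (color[..., 0] >> 3).astype(np.uint16)   # 5 bits of red
    g = (color[..., 1] >> 2).astype(np.uint16)   # 6 bits of green
    b = (color[..., 2] >> 3).astype(np.uint16)   # 5 bits of blue
    reduced = (r << 11) | (g << 5) | b           # 16-bit packed color
    payload = reduced.tobytes() + depth.astype(np.uint16).tobytes()
    return zlib.compress(payload, level=6)

# Smooth synthetic color + depth frame so the compression gain is visible
y, x = np.mgrid[0:240, 0:320]
color = np.stack([x % 256, y % 256, (x + y) % 256], axis=-1).astype(np.uint8)
depth = (1000 + 4 * y).astype(np.uint16)
blob = compress_frame(color, depth)
print(f"raw {color.nbytes + depth.nbytes} bytes -> compressed {len(blob)} bytes")
```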

  6. Toward 3D-IPTV: design and implementation of a stereoscopic and multiple-perspective video streaming system

    NASA Astrophysics Data System (ADS)

    Petrovic, Goran; Farin, Dirk; de With, Peter H. N.

    2008-02-01

    3D-Video systems allow a user to perceive depth in the viewed scene and to display the scene from arbitrary viewpoints interactively and on-demand. This paper presents a prototype implementation of a 3D-video streaming system using an IP network. The architecture of our streaming system is layered, where each information layer conveys a single coded video signal or coded scene-description data. We demonstrate the benefits of a layered architecture with two examples: (a) stereoscopic video streaming, (b) monoscopic video streaming with remote multiple-perspective rendering. Our implementation experiments confirm that prototyping 3D-video streaming systems is possible with today's software and hardware. Furthermore, our current operational prototype demonstrates that highly heterogeneous clients can coexist in the system, ranging from auto-stereoscopic 3D displays to resource-constrained mobile devices.

  7. PB3D: A new code for edge 3-D ideal linear peeling-ballooning stability

    NASA Astrophysics Data System (ADS)

    Weyens, T.; Sánchez, R.; Huijsmans, G.; Loarte, A.; García, L.

    2017-02-01

    A new numerical code PB3D (Peeling-Ballooning in 3-D) is presented. It implements and solves the intermediate-to-high-n ideal linear magnetohydrodynamic stability theory extended to full edge 3-D magnetic toroidal configurations in previous work [1]. The features that make PB3D unique are the assumptions on the perturbation structure through intermediate-to-high mode numbers n in general 3-D configurations, while allowing for displacement of the plasma edge. This makes PB3D capable of very efficient calculations of the full 3-D stability for the output of multiple equilibrium codes. As first verification, it is checked that results from the stability code MISHKA [2], which considers axisymmetric equilibrium configurations, are accurately reproduced, and these are then successfully extended to 3-D configurations, through comparison with COBRA [3], as well as using checks on physical consistency. The non-intuitive 3-D results presented serve as a tentative first proof of the capabilities of the code.

  8. [Evaluation of Motion Sickness Induced by 3D Video Clips].

    PubMed

    Matsuura, Yasuyuki; Takada, Hiroki

    2016-01-01

    The use of stereoscopic images has been spreading rapidly, and stereoscopic movies are nothing new to people nowadays. Stereoscopic systems date back to 280 B.C., when Euclid first recognized the concept of depth perception by humans. Despite the increase in the production of three-dimensional (3D) display products and many studies on stereoscopic vision, the effect of stereoscopic vision on the human body has been insufficiently understood. Symptoms such as eye fatigue and 3D sickness have been concerns when viewing 3D films for a prolonged period of time; therefore, it is important to consider the safety of viewing virtual 3D content as a contribution to society. It is generally explained to the public that accommodation and convergence are mismatched during stereoscopic vision and that this is the main reason for visual fatigue and visually induced motion sickness (VIMS) during 3D viewing. We have devised a method to simultaneously measure lens accommodation and convergence, and we used this simultaneous measurement device to characterize 3D vision. Fixation distance was compared between accommodation and convergence during the viewing of 3D films with repeated measurements. The time courses of these fixation distances and their distributions were compared in subjects who viewed 2D and 3D video clips. The results indicated that after 90 s of continuously viewing 3D images, the accommodative power does not correspond to the distance of convergence. In this paper, remarks on methods to measure the severity of motion sickness induced by viewing 3D films are also given. From an epidemiological viewpoint, it is useful to obtain novel knowledge for the reduction and/or prevention of VIMS. We should accumulate empirical data on motion sickness, which may contribute to the development of relevant fields in science and technology.

  9. Image sequence coding using 3D scene models

    NASA Astrophysics Data System (ADS)

    Girod, Bernd

    1994-09-01

    The implicit and explicit use of 3D models for image sequence coding is discussed. For implicit use, a 3D model can be incorporated into motion compensating prediction. A scheme that estimates the displacement vector field with a rigid body motion constraint by recovering epipolar lines from an unconstrained displacement estimate and then repeating block matching along the epipolar line is proposed. Experimental results show that an improved displacement vector field can be obtained with a rigid body motion constraint. As an example for explicit use, various results with a facial animation model for videotelephony are discussed. A 13 X 16 B-spline mask can be adapted automatically to individual faces and is used to generate facial expressions based on FACS. A depth-from-defocus range camera suitable for real-time facial motion tracking is described. Finally, the real-time facial animation system `Traugott' is presented that has been used to generate several hours of broadcast video. Experiments suggest that a videophone system based on facial animation might require a transmission bitrate of 1 kbit/s or below.
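
    The implicit-model scheme described above re-runs block matching along the epipolar line recovered for each block. The sketch below illustrates that constrained search for a single block given a fundamental matrix; the SAD cost, the search range, and the toy fundamental matrix of a pure horizontal translation are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def epipolar_block_match(cur, ref, F, x, y, bs=8, search=16):
    """Search for the best match of the block at (x, y) in `cur`, but only
    at candidate positions lying on the epipolar line l = F @ [x, y, 1]
    in `ref` (sketch of rigid-body-constrained block matching)."""
    l = F @ np.array([x, y, 1.0])                  # epipolar line a*x' + b*y' + c = 0
    block = cur[y:y + bs, x:x + bs].astype(np.float32)
    best, best_pos = np.inf, (x, y)
    for xc in range(x - search, x + search + 1):
        if abs(l[1]) < 1e-9:
            continue                               # (near-)vertical line: skip in this sketch
        yc = int(round(-(l[0] * xc + l[2]) / l[1]))    # point on the epipolar line
        if 0 <= xc <= ref.shape[1] - bs and 0 <= yc <= ref.shape[0] - bs:
            cand = ref[yc:yc + bs, xc:xc + bs].astype(np.float32)
            sad = np.abs(block - cand).sum()           # SAD matching cost
            if sad < best:
                best, best_pos = sad, (xc, yc)
    return best_pos, best

# Toy usage: pure horizontal camera translation, so epipolar lines are horizontal
rng = np.random.default_rng(3)
ref = rng.integers(0, 256, (64, 64)).astype(np.uint8)
cur = np.roll(ref, -3, axis=1)                      # content shifted left by 3 pixels
F = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)  # F of a pure x-translation
print(epipolar_block_match(cur, ref, F, 24, 24))
```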

  10. Visual fatigue evaluation based on depth in 3D videos

    NASA Astrophysics Data System (ADS)

    Wang, Feng-jiao; Sang, Xin-zhu; Liu, Yangdong; Shi, Guo-zhong; Xu, Da-xiong

    2013-08-01

    In recent years, 3D technology has become an emerging industry. However, visual fatigue continues to impede its development. In this paper we propose several factors affecting human perception of depth as new quality metrics. These factors come from three aspects of 3D video: spatial characteristics, temporal characteristics, and scene movement characteristics. They play important roles in the viewer's visual perception; if there are many objects moving with a certain velocity and the scene changes quickly, viewers will feel uncomfortable. We propose a new algorithm to calculate the weight values of these factors and analyze their effect on visual fatigue. The mean square error (MSE) of different blocks is considered both within a frame and between frames for 3D stereoscopic videos. The depth frame is divided into a number of blocks, with overlapping, shared pixels (half a block) in the horizontal and vertical directions, so that edge information of objects in the image is not ignored. The distribution of these data is then characterized by kurtosis, with regard to the regions at which the human eye mainly gazes. Weight values are obtained from the normalized kurtosis. When the method is applied to an individual depth frame, the spatial variation is obtained; when it is applied between the current and previous frames, the temporal variation and scene movement variation are obtained. The three factors are linearly combined, so an objective assessment value for 3D videos is obtained directly. The coefficients of the three factors can be estimated by linear regression. Finally, the experimental results show that the proposed method exhibits high correlation with subjective quality assessment results.
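
    A minimal sketch of the weighting idea described above: split the depth frame into half-overlapping blocks, compute per-block MSE (within a frame or between frames), summarize the distribution by kurtosis, and linearly combine the factor scores. The block size, the use of scipy's sample kurtosis, the simple normalization, the shifted-copy proxy for within-frame variation, the placeholder scene-movement factor, and the example coefficients are all assumptions for illustration; the paper estimates the coefficients by linear regression against subjective scores.

```python
import numpy as np
from scipy.stats import kurtosis

def block_mse(a, b, bs=16):
    """MSE between corresponding half-overlapping blocks of two frames."""
    vals = []
    for y in range(0, a.shape[0] - bs + 1, bs // 2):       # 50% vertical overlap
        for x in range(0, a.shape[1] - bs + 1, bs // 2):   # 50% horizontal overlap
            d = a[y:y + bs, x:x + bs] - b[y:y + bs, x:x + bs]
            vals.append(np.mean(d.astype(np.float64) ** 2))
    return np.array(vals)

def factor_weight(block_mses):
    """Weight from the (normalized) kurtosis of the block-MSE distribution."""
    k = kurtosis(block_mses)
    return k / (1.0 + abs(k))                               # simple normalization (assumption)

rng = np.random.default_rng(4)
depth_prev = rng.integers(0, 256, (144, 176)).astype(np.float64)
depth_cur = depth_prev + rng.normal(0, 2, depth_prev.shape)

w_spatial = factor_weight(block_mse(depth_cur, np.roll(depth_cur, 1, axis=1)))  # within-frame (shifted-copy proxy)
w_temporal = factor_weight(block_mse(depth_cur, depth_prev))                    # between-frame variation
w_scene = 0.5 * (w_spatial + w_temporal)        # placeholder for the scene-movement factor

c1, c2, c3 = 0.4, 0.4, 0.2                      # illustrative coefficients (paper: linear regression)
fatigue_score = c1 * w_spatial + c2 * w_temporal + c3 * w_scene
print(round(fatigue_score, 4))
```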

  11. Saliency detection for videos using 3D FFT local spectra

    NASA Astrophysics Data System (ADS)

    Long, Zhiling; AlRegib, Ghassan

    2015-03-01

    Bottom-up spatio-temporal saliency detection identifies perceptually important regions of interest in video sequences. The center-surround model proves to be useful for visual saliency detection. In this work, we explore using 3D FFT local spectra as features for saliency detection within the center-surround framework. We develop a spectral location based decomposition scheme to divide a 3D FFT cube into two components, one related to temporal changes and the other related to spatial changes. Temporal saliency and spatial saliency are detected separately using features derived from each spectral component through a simple center-surround comparison method. The two detection results are then combined to yield a saliency map. We apply the same detection algorithm to different color channels (YIQ) and incorporate the results into the final saliency determination. The proposed technique is tested with the public CRCNS database. Both visual and numerical evaluations verify the promising performance of our technique.
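
    To illustrate the spectral decomposition described above, the sketch below takes a local spatio-temporal cube, computes its 3D FFT, splits the spectrum by spectral location into a temporal-change component (non-zero temporal frequencies) and a spatial-change component (the zero-temporal-frequency plane), and makes a simple center-surround comparison of the energies. The exact split and the energy-ratio comparison are simplifying assumptions, not the authors' feature definition.

```python
import numpy as np

def split_spectrum(cube):
    """3D FFT of a (t, y, x) cube, split by spectral location into a
    temporal-change part and a spatial-change part (illustrative)."""
    spec = np.abs(np.fft.fftn(cube))
    temporal = spec[1:, :, :].sum()      # energy at non-zero temporal frequencies
    spatial = spec[0, :, :].sum()        # energy in the zero temporal-frequency plane
    return temporal, spatial

def center_surround_saliency(center_cube, surround_cube):
    """Simple center-surround comparison of the two spectral energies."""
    ct, cs = split_spectrum(center_cube)
    st, ss = split_spectrum(surround_cube)
    temporal_sal = abs(ct - st) / (st + 1e-9)
    spatial_sal = abs(cs - ss) / (ss + 1e-9)
    return temporal_sal + spatial_sal    # combined saliency for this location

rng = np.random.default_rng(5)
surround = rng.normal(size=(8, 16, 16))
center = surround.copy()
center[:, 6:10, 6:10] += np.sin(np.arange(8))[:, None, None]  # a flickering patch
print(round(center_surround_saliency(center, surround), 3))
```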

  12. Automatic detection of artifacts in converted S3D video

    NASA Astrophysics Data System (ADS)

    Bokov, Alexander; Vatolin, Dmitriy; Zachesov, Anton; Belous, Alexander; Erofeev, Mikhail

    2014-03-01

    In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze the features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage.
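
    A toy sketch of the edge-sharpness-mismatch check described above: given corresponding edge positions in the left and right views (in practice obtained from the estimated disparity map), compare a simple sharpness measure between the views. The window size, the maximum-gradient sharpness measure, and the mismatch ratio are assumptions for illustration, not the paper's detector.

```python
import numpy as np

def edge_sharpness(img, y, x, win=5):
    """Maximum gradient magnitude in a small window around (y, x)."""
    h = win // 2
    patch = img[max(0, y - h):y + h + 1, max(0, x - h):x + h + 1].astype(np.float64)
    gy, gx = np.gradient(patch)
    return np.sqrt(gy**2 + gx**2).max()

def sharpness_mismatch(left, right, matches, ratio=1.5):
    """Flag edge correspondences whose sharpness differs strongly between
    the two views (illustrative check only)."""
    flagged = []
    for (yl, xl), (yr, xr) in matches:
        sl = edge_sharpness(left, yl, xl)
        sr = edge_sharpness(right, yr, xr)
        if max(sl, sr) > ratio * (min(sl, sr) + 1e-9):
            flagged.append(((yl, xl), (yr, xr)))
    return flagged

# Toy views: a sharp step edge on the left, the same edge blurred on the right
left = np.zeros((32, 32))
left[:, 16:] = 1.0
x = np.arange(32)
right = np.tile(1.0 / (1.0 + np.exp(-(x - 16) / 2.0)), (32, 1))   # smooth sigmoid edge
print(sharpness_mismatch(left, right, [((16, 16), (16, 16))]))
```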

  13. Multitasking the code ARC3D. [for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Barton, John T.; Hsiung, Christopher C.

    1986-01-01

    The CRAY multitasking system was developed in order to utilize all four processors and sharply reduce wall clock run time. This paper describes the techniques used to modify the computational fluid dynamics code ARC3D for multitasked execution and analyzes the achieved speedup. The ARC3D code solves either the Euler or thin-layer Navier-Stokes equations using an implicit approximate factorization scheme. Results indicate that multitask processing can be used to achieve wall clock speedup factors of over three, depending on the nature of the program code being used. Multitasking appears to be particularly advantageous for large-memory problems running on multiple-CPU computers.

  14. Interface requirements to couple thermal-hydraulic codes to 3D neutronic codes

    SciTech Connect

    Langenbuch, S.; Austregesilo, H.; Velkov, K.

    1997-07-01

    The present situation of thermal-hydraulic codes and 3D neutronics codes is briefly described, and general considerations for coupling these codes are discussed. Two different basic approaches to coupling are identified and their relative advantages and disadvantages are discussed. The implementation of the coupling for 3D neutronics codes in the ATHLET system is presented. Meanwhile, this interface is used for coupling three different 3D neutronics codes.

  15. Video coding with dynamic background

    NASA Astrophysics Data System (ADS)

    Paul, Manoranjan; Lin, Weisi; Lau, Chiew Tong; Lee, Bu-Sung

    2013-12-01

    Motion estimation (ME) and motion compensation (MC) using variable block sizes, sub-pixel search, and multiple reference frames (MRFs) are the major reasons for the improved coding performance of the H.264 video coding standard over other contemporary coding standards. The concept of MRFs is suitable for repetitive motion, uncovered background, non-integer pixel displacement, lighting change, etc. The requirement of index codes for the reference frames, the computational time in ME & MC, and the memory buffer for coded frames limit the number of reference frames used in practical applications. In typical video sequences, the previous frame is used as a reference frame in 68-92% of cases. In this article, we propose a new video coding method using a reference frame [i.e., the most common frame in scene (McFIS)] generated by dynamic background modeling. McFIS is more effective in terms of rate-distortion and computational time performance compared to the MRF techniques. It also has an inherent capability for scene change detection (SCD) for adaptive group of pictures (GOP) size determination. As a result, we integrate SCD (for GOP determination) with reference frame generation. The experimental results show that the proposed coding scheme outperforms H.264 video coding with five reference frames and two relevant state-of-the-art algorithms by 0.5-2.0 dB with less computational time.
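
    The paper builds McFIS with dynamic background modeling, whose details the abstract does not give. As a rough stand-in, the sketch below forms a background reference frame as the per-pixel temporal median over a window of frames and uses a simple frame-difference check for scene change detection; both choices are illustrative assumptions, not the authors' model.

```python
import numpy as np

def background_reference(frames):
    """Per-pixel temporal median over a window of frames: a crude proxy
    for a McFIS-like 'most common frame in scene' reference."""
    return np.median(np.stack(frames, axis=0), axis=0).astype(np.uint8)

def scene_change(frame, reference, thresh=30.0):
    """Very simple SCD check: mean absolute difference to the reference."""
    return np.abs(frame.astype(np.float64) - reference).mean() > thresh

rng = np.random.default_rng(6)
background = rng.integers(0, 256, (72, 88), dtype=np.uint8)
frames = []
for _ in range(9):
    f = background.copy()
    y, x = rng.integers(0, 56, 2)
    f[y:y + 16, x:x + 16] = 255                 # a small moving foreground object
    frames.append(f)

mcfis_like = background_reference(frames)
print(np.abs(mcfis_like.astype(int) - background.astype(int)).mean())  # background recovered
print(scene_change(rng.integers(0, 256, (72, 88), dtype=np.uint8), mcfis_like))  # new scene flagged
```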

  16. Real-time 3D video conference on generic hardware

    NASA Astrophysics Data System (ADS)

    Desurmont, X.; Bruyelle, J. L.; Ruiz, D.; Meessen, J.; Macq, B.

    2007-02-01

    Nowadays, video conferencing is increasingly advantageous because of the economic and ecological cost of transport. Several platforms exist. The goal of the TIFANIS immersive platform is to let users interact as if they were physically together. Unlike previous tele-immersion systems, TIFANIS uses generic hardware to achieve an economically realistic implementation. The basic functions of the system are to capture the scene, transmit it through digital networks to other partners, and then render it according to each partner's viewing characteristics. The image processing part should run in real time. We analyze the whole system, which can be split into different services such as central processing unit (CPU) computation, graphical rendering, direct memory access (DMA), and communication through the network. Most of the processing is done by the CPU; it consists of the 3D reconstruction and the detection and tracking of faces in the video stream. However, the processing needs to be parallelized into several threads that have as few dependencies as possible. In this paper, we present these issues and the way we deal with them.

  17. Holovideo: Real-time 3D range video encoding and decoding on GPU

    NASA Astrophysics Data System (ADS)

    Karpinsky, Nikolaus; Zhang, Song

    2012-02-01

    We present a 3D video-encoding technique called Holovideo that is capable of encoding high-resolution 3D videos into standard 2D videos and then decoding the 2D videos back into 3D rapidly without significant loss of quality. Due to the nature of the algorithm, 2D video compression such as JPEG encoding with QuickTime Run Length Encoding (QTRLE) can be applied with little quality loss, resulting in an effective way to store 3D video at very small file sizes. We found that under a compression ratio of 134:1 (Holovideo to OBJ file format), the 3D geometry quality drops by a negligible amount. Several sets of 3D videos were captured using a structured light scanner, compressed using the Holovideo codec, and then uncompressed and displayed to demonstrate the effectiveness of the codec. With the use of OpenGL Shaders (GLSL), the 3D video codec can encode and decode in real time. We demonstrated that for a video size of 512×512, the decoding speed is 28 frames per second (FPS) on a laptop computer using an embedded NVIDIA GeForce 9400M graphics processing unit (GPU). Encoding can be done with the same setup at 18 FPS, making this technology suitable for applications such as interactive 3D video games and 3D video conferencing.

  18. Video Coding for ESL.

    ERIC Educational Resources Information Center

    King, Kevin

    1992-01-01

    Coding tasks, a valuable technique for teaching English as a Second Language, are presented that enable students to look at patterns and structures of marital communication as well as objectively evaluate the degree of happiness or distress in the marriage. (seven references) (JL)

  19. RELAP5-3D code validation for RBMK phenomena

    SciTech Connect

    Fisher, J.E.

    1999-09-01

    The RELAP5-3D thermal-hydraulic code was assessed against Japanese Safety Experiment Loop (SEL) and Heat Transfer Loop (HTL) tests. These tests were chosen because the phenomena present are applicable to analyses of Russian RBMK reactor designs. The assessment cases included parallel channel flow fluctuation tests at reduced and normal water levels, a channel inlet pipe rupture test, and a high power, density wave oscillation test. The results showed that RELAP5-3D has the capability to adequately represent these RBMK-related phenomena.

  1. VISRAD, 3-D Target Design and Radiation Simulation Code

    NASA Astrophysics Data System (ADS)

    Golovkin, Igor; Macfarlane, Joseph; Golovkina, Viktoriya

    2016-10-01

    The 3-D view factor code VISRAD is widely used in designing HEDP experiments at major laser and pulsed-power facilities, including NIF, OMEGA, OMEGA-EP, ORION, LMJ, Z, and PLX. It simulates target designs by generating a 3-D grid of surface elements, utilizing a variety of 3-D primitives and surface removal algorithms, and can be used to compute the radiation flux throughout the surface-element grid by computing element-to-element view factors and solving power balance equations. Target set-up and beam pointing are facilitated by allowing users to specify positions and angular orientations using a variety of coordinate systems (e.g., that of any laser beam, target component, or diagnostic port). Analytic modeling of laser beam spatial profiles for OMEGA DPPs and NIF CPPs is used to compute laser intensity profiles throughout the grid of surface elements. We will discuss recent improvements to the software package and plans for future developments.
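
    As a concrete reminder of what an element-to-element view factor is, the sketch below evaluates the standard differential-area approximation F_ij ≈ cos(theta_i) cos(theta_j) A_j / (pi r^2) for two small flat patches. It merely illustrates the quantity computed over a surface-element grid; it is not VISRAD code, and the patch geometry is an arbitrary example.

```python
import numpy as np

def patch_view_factor(ci, ni, cj, nj, area_j):
    """Differential-area view factor from patch i to patch j:
    F_ij ~ cos(theta_i) * cos(theta_j) * A_j / (pi * r^2)."""
    d = cj - ci                               # vector from patch i to patch j
    r = np.linalg.norm(d)
    cos_i = np.dot(ni, d) / (np.linalg.norm(ni) * r)
    cos_j = np.dot(nj, -d) / (np.linalg.norm(nj) * r)
    if cos_i <= 0 or cos_j <= 0:              # patches do not see each other
        return 0.0
    return cos_i * cos_j * area_j / (np.pi * r**2)

# Two 1 mm^2 patches facing each other, 10 mm apart along z
ci, ni = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
cj, nj = np.array([0.0, 0.0, 10.0]), np.array([0.0, 0.0, -1.0])
print(patch_view_factor(ci, ni, cj, nj, area_j=1.0))
```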

  2. Beam Optics Analysis - An Advanced 3D Trajectory Code

    SciTech Connect

    Ives, R. Lawrence; Bui, Thuc; Vogler, William; Neilson, Jeff; Read, Mike; Shephard, Mark; Bauer, Andrew; Datta, Dibyendu; Beal, Mark

    2006-01-03

    Calabazas Creek Research, Inc. has completed initial development of an advanced, 3D program for modeling electron trajectories in electromagnetic fields. The code is being used to design complex guns and collectors. Beam Optics Analysis (BOA) is a fully relativistic, charged particle code using adaptive, finite element meshing. Geometrical input is imported from CAD programs generating ACIS-formatted files. Parametric data is entered using an intuitive graphical user interface (GUI), which also provides control of convergence, accuracy, and post processing. The program includes a magnetic field solver, and magnetic information can be imported from Maxwell 2D/3D and other programs. The program supports thermionic emission and injected beams. Secondary electron emission is also supported, including multiple generations. Work on field emission is in progress, as is implementation of computer optimization of both the geometry and the operating parameters. The principal features of the program and its capabilities are presented.

  3. CALTRANS: A parallel, deterministic, 3D neutronics code

    SciTech Connect

    Carson, L.; Ferguson, J.; Rogers, J.

    1994-04-01

    Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementation of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) is provided in appendices.

  4. Towards a 3D Space Radiation Transport Code

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Cucinotta, F. A.; Heinbockel, J. H.; Tweed, J.

    2002-01-01

    High-speed computational procedures for space radiation shielding have relied on asymptotic expansions in terms of the off-axis scatter and replacement of the general geometry problem by a collection of flat plates. This type of solution was derived for application to human-rated systems in which the radius of the shielded volume is large compared to the off-axis diffusion limiting leakage at lateral boundaries. Over the decades these computational codes have become relatively complete, and lateral diffusion effects are now being added. The analysis for developing a practical full 3D space shielding code is presented.

  5. Streamlining of the RELAP5-3D Code

    SciTech Connect

    Mesina, George L; Hykes, Joshua; Guillen, Donna Post

    2007-11-01

    RELAP5-3D is widely used by the nuclear community to simulate general thermal hydraulic systems and has proven to be so versatile that the spectrum of transient two-phase problems that can be analyzed has increased substantially over time. To accommodate the many new types of problems that are analyzed by RELAP5-3D, both the physics and numerical methods of the code have been continuously improved. In the area of computational methods and mathematical techniques, many upgrades and improvements have been made to decrease code run time and increase solution accuracy. These include vectorization, parallelization, use of improved equation solvers for thermal hydraulics and neutron kinetics, and incorporation of improved library utilities. In the area of applied nuclear engineering, expanded capabilities include boron and level tracking models, radiation/conduction enclosure model, feedwater heater and compressor components, fluids and corresponding correlations for modeling Generation IV reactor designs, and coupling to computational fluid dynamics solvers. Ongoing and proposed future developments include improvements to the two-phase pump model, conversion to FORTRAN 90, and coupling to more computer programs. This paper summarizes the general improvements made to RELAP5-3D, with an emphasis on streamlining the code infrastructure for improved maintenance and development. With all these past, present and planned developments, it is necessary to modify the code infrastructure to incorporate modifications in a consistent and maintainable manner. Modifying a complex code such as RELAP5-3D to incorporate new models, upgrade numerics, and optimize existing code becomes more difficult as the code grows larger. The difficulty of this as well as the chance of introducing errors is significantly reduced when the code is structured. To streamline the code into a structured program, a commercial restructuring tool, FOR_STRUCT, was applied to the RELAP5-3D source files. The

  6. Preliminary investigations on 3D PIC simulation of DPHC structure using NEPTUNE3D code

    NASA Astrophysics Data System (ADS)

    Zhao, Hailong; Dong, Ye; Zhou, Haijing; Zou, Wenkang; Wang, Qiang

    2016-10-01

    A cubic region (34 cm × 34 cm × 18 cm) including the double post-hole convolute (DPHC) structure was chosen to perform a series of fully 3D PIC simulations using the NEPTUNE3D code; massive data sets (~200 GB) could be acquired and solved in less than 5 hours. Cold-chamber tests were performed, during which only cathode electron emission was considered, without temperature rise or ion emission, and the current loss efficiency was estimated by comparing output magnetic field profiles with and without electron emission. PIC simulation results showed three stages of the current transfer process with electron emission in the DPHC structure: the maximum (~20%) current loss was 437 kA at 15 ns, while only 0.46%-0.48% was lost when the driving current reached its peak. The DPHC structure proved valuable during the energy transfer process in the PTS facility, and NEPTUNE3D provides tools to explore this sophisticated physics. Project supported by the National Natural Science Foundation of China, Grant Nos. 11571293 and 11505172.

  7. Axisymmetric Implementation for 3D-Based DSMC Codes

    NASA Technical Reports Server (NTRS)

    Stewart, Benedicte; Lumpkin, F. E.; LeBeau, G. J.

    2011-01-01

    The primary objective in developing NASA's DSMC Analysis Code (DAC) was to provide a high fidelity modeling tool for 3D rarefied flows such as vacuum plume impingement and hypersonic re-entry flows [1]. The initial implementation has been expanded over time to offer other capabilities including a novel axisymmetric implementation. Because of the inherently 3D nature of DAC, this axisymmetric implementation uses a 3D Cartesian domain and 3D surfaces. Molecules are moved in all three dimensions but their movements are limited by physical walls to a small wedge centered on the plane of symmetry (Figure 1). Unfortunately, far from the axis of symmetry, the cell size in the direction perpendicular to the plane of symmetry (the Z-direction) may become large compared to the flow mean free path. This frequently results in inaccuracies in these regions of the domain. A new axisymmetric implementation is presented which aims to solve this issue by using Bird's approach for the molecular movement while preserving the 3D nature of the DAC software [2]. First, the computational domain is similar to that previously used such that a wedge must still be used to define the inflow surface and solid walls within the domain. As before, molecules are created inside the inflow wedge triangles but they are now rotated back to the symmetry plane. During the move step, molecules are moved in 3D but instead of interacting with the wedge walls, the molecules are rotated back to the plane of symmetry at the end of the move step. This new implementation was tested for multiple flows over axisymmetric shapes, including a sphere, a cone, a double cone and a hollow cylinder. Comparisons to previous DSMC solutions and experiments, when available, are made.

  8. Embedded wavelet video coding with error concealment

    NASA Astrophysics Data System (ADS)

    Chang, Pao-Chi; Chen, Hsiao-Ching; Lu, Ta-Te

    2000-04-01

    We present an error-concealed embedded wavelet (ECEW) video coding system for transmission over Internet or wireless networks. This system consists of two types of frames: intra (I) frames and inter, or predicted (P), frames. Inter frames are constructed by the residual frames formed by variable block-size multiresolution motion estimation (MRME). Motion vectors are compressed by arithmetic coding. The image data of intra frames and residual frames are coded by error-resilient embedded zerotree wavelet (ER-EZW) coding. The ER-EZW coding partitions the wavelet coefficients into several groups and each group is coded independently. Therefore, the error propagation effect resulting from an error is confined to a single group. In EZW coding, any single error may result in a totally undecodable bitstream. To further reduce the error damage, we use error concealment at the decoding end. In intra frames, the erroneous wavelet coefficients are replaced by neighbors. In inter frames, erroneous blocks of wavelet coefficients are replaced by data from the previous frame. Simulations show that the performance of ECEW is superior to that of the same system without error concealment by approximately 7-8 dB at an error rate of 10^-3 in intra frames. The improvement is still approximately 2-3 dB at a higher error rate of 10^-2 in inter frames.
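
    A minimal sketch of the inter-frame concealment idea described above: any block flagged as erroneous is replaced by the co-located block of the previous frame. The block size and the error map are illustrative assumptions, and the wavelet-domain details of ER-EZW are not reproduced.

      import numpy as np

      def conceal_inter(frame, prev_frame, error_mask, block=16):
          """error_mask[i, j] is True when block (i, j) of `frame` is corrupted."""
          out = frame.copy()
          for i in range(error_mask.shape[0]):
              for j in range(error_mask.shape[1]):
                  if error_mask[i, j]:
                      ys, xs = i * block, j * block
                      out[ys:ys + block, xs:xs + block] = \
                          prev_frame[ys:ys + block, xs:xs + block]
          return out

      prev = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
      cur = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
      mask = np.zeros((4, 4), dtype=bool)
      mask[1, 2] = True                       # pretend one block was corrupted
      print(conceal_inter(cur, prev, mask).shape)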

  9. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing the physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmark tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, and other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  10. Joint Adaptive Pre-processing Resilience and Post-processing Concealment Schemes for 3D Video Transmission

    NASA Astrophysics Data System (ADS)

    El-Shafai, Walid

    2015-03-01

    3D video transmission over erroneous networks is still a considerable issue due to restricted resources and the presence of severe channel errors. Efficiently compressing 3D video at a low transmission rate, while maintaining a high quality of received 3D video, is very challenging. Since it is not feasible to re-transmit all the corrupted macro-blocks (MBs) in real-time applications with limited resources, it is mandatory to retrieve the lost MBs at the decoder side using suitable post-processing schemes, such as error concealment (EC). In this paper, we propose an adaptive multi-mode EC (AMMEC) algorithm at the decoder based on utilizing a pre-processing flexible macro-block ordering error resilience (FMO-ER) technique at the encoder, to efficiently conceal the erroneous MBs of intra and inter coded frames of 3D video. Experimental simulation results show that the proposed FMO-ER/AMMEC schemes can significantly improve the objective and subjective 3D video quality.

  11. FARGO3D: A NEW GPU-ORIENTED MHD CODE

    SciTech Connect

    Benitez-Llambay, Pablo; Masset, Frédéric S. E-mail: masset@icf.unam.mx

    2016-03-15

    We present the FARGO3D code, recently publicly released. It is a magnetohydrodynamics code developed with special emphasis on the physics of protoplanetary disks and planet–disk interactions, and parallelized with MPI. The hydrodynamics algorithms are based on finite-difference upwind, dimensionally split methods. The magnetohydrodynamics algorithms consist of the constrained transport method to preserve the divergence-free property of the magnetic field to machine accuracy, coupled to a method of characteristics for the evaluation of electromotive forces and Lorentz forces. Orbital advection is implemented, and an N-body solver is included to simulate planets or stars interacting with the gas. We present our implementation in detail and present a number of widely known tests for comparison purposes. One strength of FARGO3D is that it can run on either graphical processing units (GPUs) or central processing units (CPUs), achieving large speed-up with respect to CPU cores. We describe our implementation choices, which allow a user with no prior knowledge of GPU programming to develop new routines for CPUs, and have them translated automatically for GPUs.

  12. Research and Technology Development for Construction of 3d Video Scenes

    NASA Astrophysics Data System (ADS)

    Khlebnikova, Tatyana A.

    2016-06-01

    For the last two decades, surface information in the form of conventional digital and analogue topographic maps has been supplemented by new digital geospatial products, also known as 3D models of real objects. It is shown that there are currently no defined standards for 3D scene construction technologies that could be used by Russian surveying and cartographic enterprises. The issues regarding source data requirements and their capture and transfer to create 3D scenes have not been defined yet, and the accuracy of 3D video scenes used for measuring purposes is hardly ever addressed in publications. The practicability of developing, researching and implementing a technology for the construction of 3D video scenes is substantiated by the capability of such scenes to expand the field of data analysis for environmental monitoring, urban planning, and managerial decision problems. A technology for the construction of 3D video scenes that meets specified metric requirements is offered. A technique and methodological background are recommended for this technology, used to construct 3D video scenes based on DTMs created from satellite and aerial survey data. The results of accuracy estimation of the 3D video scenes are presented.

  13. Depth estimation from multiple coded apertures for 3D interaction

    NASA Astrophysics Data System (ADS)

    Suh, Sungjoo; Choi, Changkyu; Park, Dusik

    2013-09-01

    In this paper, we propose a novel depth estimation method from multiple coded apertures for 3D interaction. A flat panel display is transformed into lens-less multi-view cameras which consist of multiple coded apertures. The sensor panel behind the display captures the scene in front of the display through the imaging pattern of the modified uniformly redundant arrays (MURA) on the display panel. To estimate the depth of an object in the scene, we first generate a stack of synthetically refocused images at various distances by using the shifting and averaging approach for the captured coded images. And then, an initial depth map is obtained by applying a focus operator to a stack of the refocused images for each pixel. Finally, the depth is refined by fitting a parametric focus model to the response curves near the initial depth estimates. To demonstrate the effectiveness of the proposed algorithm, we construct an imaging system to capture the scene in front of the display. The system consists of a display screen and an x-ray detector without a scintillator layer so as to act as a visible sensor panel. Experimental results confirm that the proposed method accurately determines the depth of an object including a human hand in front of the display by capturing multiple MURA coded images, generating refocused images at different depth levels, and refining the initial depth estimates.
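
    A sketch of the refocus-then-focus-measure structure described above: the multi-aperture images are shifted and averaged to refocus at several candidate depths, a Laplacian focus operator scores each refocused image, and the per-pixel argmax gives the initial depth index. The shift model and focus measure are simplified stand-ins; the MURA decoding and the parametric model fitting of the actual system are not shown.

      import numpy as np
      from scipy.ndimage import laplace, shift as nd_shift

      def refocus(views, offsets, alpha):
          """views: list of 2D images; offsets: per-view (dy, dx) baselines."""
          acc = np.zeros_like(views[0], dtype=np.float64)
          for img, (dy, dx) in zip(views, offsets):
              acc += nd_shift(img.astype(np.float64), (alpha * dy, alpha * dx), order=1)
          return acc / len(views)

      def initial_depth_index(views, offsets, alphas):
          # One refocused image per candidate depth, scored by a focus operator.
          focus = [np.abs(laplace(refocus(views, offsets, a))) for a in alphas]
          return np.argmax(np.stack(focus), axis=0)   # per-pixel index into alphas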

  14. The CONV-3D code for DNS CFD calculation

    NASA Astrophysics Data System (ADS)

    Chudanov, Vladimir; ALCF ThermHydraX Team

    2014-03-01

    The CONV-3D code for DNS CFD calculation of the thermal hydraulics of fast reactors on supercomputers has been developed. The code scales very effectively on high-performance computers such as ``Chebyshev'' and ``Lomonosov'' (Moscow State University, Russia) and Blue Gene/Q (ALCF MIRA, ANL); scalability up to 10^6 processors has been reached. The code was validated on a series of well-known tests over a wide range of Rayleigh (10^6-10^16) and Reynolds (10^3-10^5) numbers. It was also validated on the OECD/NEA blind tests of turbulent intermixing in the horizontal subchannels of a fuel assembly at normal pressure and temperature (MATiS-H) and of the flows in a T-junction, and the IBRAE/ANL report was published. Good agreement of the numerical predictions with the experimental data was reached, which indicates the applicability of the developed approach for predicting thermal hydraulics in a boundary layer at the small Prandtl numbers characteristic of liquid metal reactors. Project Name: ThermHydraX. Project Title: U.S.-Russia Collaboration on Cross-Verification and Validation in Thermal Hydraulics.

  15. MOM3D/EM-ANIMATE - MOM3D WITH ANIMATION CODE

    NASA Technical Reports Server (NTRS)

    Shaeffer, J. F.

    1994-01-01

    compare surface-current distribution due to various initial excitation directions or electric field orientations. The program can accept up to 50 planes of field data consisting of a grid of 100 by 100 field points. These planes of data are user selectable and can be viewed individually or concurrently. With these preset limits, the program requires 55 megabytes of core memory to run. These limits can be changed in the header files to accommodate the available core memory of an individual workstation. An estimate of memory required can be made as follows: approximate memory in bytes equals (number of nodes times number of surfaces times 14 variables times bytes per word, typically 4 bytes per floating point) plus (number of field planes times number of nodes per plane times 21 variables times bytes per word). This gives the approximate memory size required to store the field and surface-current data. The total memory size is approximately 400,000 bytes plus the data memory size. The animation calculations are performed in real time at any user set time step. For Silicon Graphics Workstations that have multiple processors, this program has been optimized to perform these calculations on multiple processors to increase animation rates. The optimized program uses the SGI PFA (Power FORTRAN Accelerator) library. On single processor machines, the parallelization directives are seen as comments to the program and will have no effect on compilation or execution. MOM3D and EM-ANIMATE are written in FORTRAN 77 for interactive or batch execution on SGI series computers running IRIX 3.0 or later. The RAM requirements for these programs vary with the size of the problem being solved. A minimum of 30Mb of RAM is required for execution of EM-ANIMATE; however, the code may be modified to accommodate the available memory of an individual workstation. For EM-ANIMATE, twenty-four bit, double-buffered color capability is suggested, but not required. Sample executables and sample input and
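
    The memory estimate quoted above can be turned into a short worked calculation; the node, surface and field-plane counts below are illustrative, only the formula follows the record.

      BYTES_PER_WORD = 4          # typical single-precision floating point word

      def em_animate_memory_bytes(nodes, surfaces, field_planes, nodes_per_plane):
          surface_bytes = nodes * surfaces * 14 * BYTES_PER_WORD
          field_bytes = field_planes * nodes_per_plane * 21 * BYTES_PER_WORD
          return 400_000 + surface_bytes + field_bytes   # ~400 kB fixed overhead

      # e.g. 3000 nodes on 1 surface plus 10 planes of 100x100 field points
      print(em_animate_memory_bytes(3000, 1, 10, 100 * 100) / 1e6, "MB")   # ~9 MB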

  16. RHALE: A 3-D MMALE code for unstructured grids

    SciTech Connect

    Peery, J.S.; Budge, K.G.; Wong, M.K.W.; Trucano, T.G.

    1993-08-01

    This paper describes RHALE, a multi-material arbitrary Lagrangian-Eulerian (MMALE) shock physics code. RHALE is the successor to CTH, Sandia's 3-D Eulerian shock physics code, and will be capable of solving problems that CTH cannot adequately address. We discuss the Lagrangian solid mechanics capabilities of RHALE, which include arbitrary mesh connectivity, superior artificial viscosity, and improved material models. We discuss the MMALE algorithms that have been extended for arbitrary grids in both two and three dimensions. The MMALE addition to RHALE provides the accuracy of a Lagrangian code while allowing a calculation to proceed under very large material distortions. Coupling an arbitrary quadrilateral or hexahedral grid to the MMALE solution facilitates modeling of complex shapes with a greatly reduced number of computational cells. RHALE allows regions of a problem to be modeled with Lagrangian, Eulerian or ALE meshes. In addition, regions can switch from Lagrangian to ALE to Eulerian based on user input or mesh distortion. For ALE meshes, new node locations are determined with a variety of element based equipotential schemes. Element quantities are advected with donor, van Leer, or Super-B algorithms. Nodal quantities are advected with the second order SHALE or HIS algorithms. Material interfaces are determined with a modified Young's high resolution interface tracker or the SLIC algorithm. RHALE has been used to model many problems of interest to the mechanics, hypervelocity impact, and shock physics communities. Results of a sampling of these problems are presented in this paper.

  17. Fully scalable video coding with packed stream

    NASA Astrophysics Data System (ADS)

    Lopez, Manuel F.; Rodriguez, Sebastian G.; Ortiz, Juan Pablo; Dana, Jose Miguel; Ruiz, Vicente G.; Garcia, Inmaculada

    2005-03-01

    Scalable video coding is a technique which allows a compressed video stream to be decoded in several different ways. This ability allows a user to adaptively recover a specific version of a video depending on their own requirements. Video sequences have temporal, spatial and quality scalabilities. In this work we introduce a novel fully scalable video codec. It is based on motion-compensated temporal filtering (MCTF) of the video sequences and it uses some of the basic elements of JPEG 2000. This paper describes several specific proposals for video on demand and video-conferencing applications over non-reliable packet-switching data networks.

  18. Region-based fractal video coding

    NASA Astrophysics Data System (ADS)

    Zhu, Shiping; Belloulata, Kamel

    2008-10-01

    A novel video sequence compression scheme is proposed in order to realize the efficient and economical transmission of video sequences, and also the region-based functionality of MPEG-4. The CPM and NCIM fractal coding scheme is applied to each region independently, using a prior image segmentation map (alpha plane) which is exactly the same as defined in MPEG-4. The first n frames of the video sequence are encoded as a "set" using Circular Prediction Mapping (CPM), and the remaining frames are encoded using Non-Contractive Interframe Mapping (NCIM). CPM and NCIM accomplish the motion estimation and compensation, which can exploit the high temporal correlations between adjacent frames of the video sequence. Experimental results with monocular video sequences show promising performance for low bit rate coding, such as in video conferencing applications. We believe the proposed fractal video codec will be a powerful and efficient technique for region-based video sequence coding.

  19. ROI-based transmission method for stereoscopic video to maximize rendered 3D video quality

    NASA Astrophysics Data System (ADS)

    Hewage, Chaminda T. E. R.; Martini, Maria G.; Appuhami, Harsha D.

    2012-03-01

    A technique to improve the rendering quality of novel views for colour plus depth based 3D video is proposed. Most depth discontinuities occur around the edges of depth map objects. If information around edges of both colour and depth map images is lost during transmission, this will affect the quality of the rendered views. Therefore, this work proposes a technique to categorize edge and surrounding areas into two different regions (Regions Of Interest (ROIs)) and later protect them separately to provide Unequal Error Protection (UEP) during transmission. In this way the most important edge areas (vital for novel view rendering) will be more protected than other surrounding areas. This method is tested over an H.264/AVC-based simulcast encoding and transmission setup. The results show improved rendered quality with the proposed ROI-based UEP method compared to the Equal Error Protection (EEP) method.
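
    One way to form the two regions described above, sketched under assumptions: edges of the depth map, dilated to cover the surrounding pixels, become the high-priority ROI, and each 16x16 macroblock is labelled by majority vote; the stronger channel protection would then be applied to the ROI macroblocks. The Canny thresholds, dilation size and block size are illustrative choices, not those of the record.

      import numpy as np
      import cv2

      def roi_macroblock_map(depth_map_u8, block=16):
          """depth_map_u8: 8-bit depth image; returns a boolean map per macroblock."""
          edges = cv2.Canny(depth_map_u8, 50, 150)
          roi = cv2.dilate(edges, np.ones((7, 7), np.uint8)) > 0
          h, w = roi.shape
          mb = roi[:h - h % block, :w - w % block].reshape(
              h // block, block, w // block, block)
          return mb.mean(axis=(1, 3)) > 0.5   # True: ROI macroblock (more protection)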

  20. Does training with 3D videos improve decision-making in team invasion sports?

    PubMed

    Hohmann, Tanja; Obelöer, Hilke; Schlapkohl, Nele; Raab, Markus

    2016-01-01

    We examined the effectiveness of video-based decision training in national youth handball teams. Extending previous research, we tested in Study 1 whether a three-dimensional (3D) video training group would outperform a two-dimensional (2D) group. In Study 2, a 3D training group was compared to a control group and a group trained with a traditional tactic board. In both studies, training duration was 6 weeks. Performance was measured in a pre- to post-retention design. The tests consisted of a decision-making task measuring quality of decisions (first and best option) and decision time (time for first and best option). The results of Study 1 showed learning effects and revealed that the 3D video group made faster first-option choices than the 2D group, but differences in the quality of options were not pronounced. The results of Study 2 revealed learning effects for both training groups compared to the control group, and faster choices in the 3D group compared to both other groups. Together, the results show that 3D video training is the most useful tool for improving choices in handball, but only in reference to decision time and not decision quality. We discuss the usefulness of a 3D video tool for training of decision-making skills outside the laboratory or gym.

  1. Performance evaluation of MPEG internet video coding

    NASA Astrophysics Data System (ADS)

    Luo, Jiajia; Wang, Ronggang; Fan, Kui; Wang, Zhenyu; Li, Ge; Wang, Wenmin

    2016-09-01

    Internet Video Coding (IVC) has been developed in MPEG by combining well-known existing technology elements and new coding tools with royalty-free declarations. In June 2015, the IVC project was approved as ISO/IEC 14496-33 (MPEG-4 Internet Video Coding). It is believed that this standard can be highly beneficial for video services in the Internet domain. This paper evaluates the objective and subjective performances of IVC by comparing it against Web Video Coding (WVC), Video Coding for Browsers (VCB) and AVC High Profile. Experimental results show that IVC's compression performance is approximately equal to that of the AVC High Profile for typical operational settings, both for streaming and low-delay applications, and is better than WVC and VCB.

  2. Introducing a Public Stereoscopic 3D High Dynamic Range (SHDR) Video Database

    NASA Astrophysics Data System (ADS)

    Banitalebi-Dehkordi, Amin

    2017-03-01

    High dynamic range (HDR) displays and cameras are paving their way into the consumer market at a rapid growth rate. Thanks to TV and camera manufacturers, HDR systems are now becoming available commercially to end users. This is taking place only a few years after the blooming of 3D video technologies. MPEG/ITU are also actively working towards the standardization of these technologies. However, preliminary research efforts in these video technologies are hampered by the lack of sufficient experimental data. In this paper, we introduce a Stereoscopic 3D HDR database of videos that is made publicly available to the research community. We explain the procedure taken to capture, calibrate, and post-process the videos. In addition, we provide insights on potential use cases, challenges, and research opportunities implied by the combination of the higher dynamic range of the HDR aspect and the depth impression of the 3D aspect.

  3. The Effect of Frame Rate on 3D Video Quality and Bitrate

    NASA Astrophysics Data System (ADS)

    Banitalebi-Dehkordi, Amin; Pourazad, Mahsa T.; Nasiopoulos, Panos

    2015-03-01

    Increasing the frame rate of a 3D video generally results in improved Quality of Experience (QoE). However, higher frame rates involve a higher degree of complexity in capturing, transmission, storage, and display. The question that arises here is what frame rate guarantees high viewing quality of experience given the existing/required 3D devices and technologies (3D cameras, 3D TVs, compression, transmission bandwidth, and storage capacity). This question has already been addressed for the case of 2D video, but not for 3D. The objective of this paper is to study the relationship between 3D quality and bitrate at different frame rates. Our performance evaluations show that increasing the frame rate of 3D videos beyond 60 fps may not be visually distinguishable. In addition, our experiments show that when the available bandwidth is reduced, the highest possible 3D quality of experience can be achieved by adjusting (decreasing) the frame rate instead of increasing the compression ratio. The results of our study are of particular interest to network providers for rate adaptation in variable bitrate channels.

  4. 3D neutronic codes coupled with thermal-hydraulic system codes for PWR, BWR and VVER reactors

    SciTech Connect

    Langenbuch, S.; Velkov, K.; Lizorkin, M.

    1997-07-01

    This paper describes the objectives of code development for coupling 3D neutronics codes with thermal-hydraulic system codes. The present status of coupling ATHLET with three 3D neutronics codes for VVER- and LWR-reactors is presented. After describing the basic features of the 3D neutronic codes BIPR-8 from Kurchatov-Institute, DYN3D from Research Center Rossendorf and QUABOX/CUBBOX from GRS, first applications of coupled codes for different transient and accident scenarios are presented. The need of further investigations is discussed.

  5. The design of red-blue 3D video fusion system based on DM642

    NASA Astrophysics Data System (ADS)

    Fu, Rongguo; Luo, Hao; Lv, Jin; Feng, Shu; Wei, Yifang; Zhang, Hao

    2016-10-01

    To address the uncertainty of traditional 3D video capture, including camera focal lengths and the distance and angle parameters between the two cameras, a red-blue 3D video fusion system based on the DM642 hardware processing platform is designed with parallel optical axes. To counter the brightness reduction of traditional 3D video, a brightness enhancement algorithm based on human visual characteristics is proposed, together with a luminance component processing method based on the YCbCr color space. The BIOS real-time operating system is used to improve real-time performance. The video processing circuit built around the DM642 enhances the brightness of the images, converts the video signals from YCbCr to RGB, extracts the R component from one camera and the G and B components from the other synchronously, and finally outputs the fused 3D images. Real-time adjustments such as translation and scaling of the two color components are realized through serial communication between the VC software and the BIOS. By adding the red and blue components, the system reduces the loss of the chrominance components and keeps the picture color saturation above 95% of the original. An enhancement algorithm optimized to reduce the amount of data fused during video processing is used to reduce the fusion time and improve the viewing effect. Experimental results show that the system can capture images at near distance, output red-blue 3D video, and provide a pleasant experience to an audience wearing red-blue glasses.
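
    The core channel-mixing step described above can be sketched as follows: the R component is taken from the left-camera frame and the G and B components from the right-camera frame to form one red-blue (anaglyph) output frame. The frames are assumed to be rectified, same-size RGB arrays; the YCbCr brightness enhancement stage of the record is not reproduced here.

      import numpy as np

      def red_blue_fuse(left_rgb, right_rgb):
          out = right_rgb.copy()
          out[..., 0] = left_rgb[..., 0]   # red channel from the left view
          return out                       # green and blue stay from the right view

      left = np.zeros((480, 640, 3), dtype=np.uint8)
      right = np.zeros((480, 640, 3), dtype=np.uint8)
      print(red_blue_fuse(left, right).shape)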

  6. Efficient Use of Video for 3d Modelling of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Alsadik, B.; Gerke, M.; Vosselman, G.

    2015-03-01

    Currently, there is rapid development in the techniques of automated image-based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architecture and monuments. In practice, video imaging is much easier to apply than still image shooting in IBM techniques, because the latter needs thorough planning and proficiency. However, one is faced with three main problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects. These problems are: the low resolution of video images, the need to process a large number of short-baseline video images, and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images decreases the processing time and yields a reliable textured 3D model comparable with models produced by still imaging. Two experiments, modelling a building and a monument, are carried out using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find out the final predicted accuracy and the model level of detail. Depending on the object complexity and the video imaging resolution, the tests show an achievable average accuracy of 1-5 cm when using video imaging, which is suitable for visualization, virtual museums and low-detail documentation.
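
    A hedged sketch of one possible frame-thinning step of this kind: sharpness is scored with the variance of the Laplacian, blurred frames are dropped, and the remaining short-baseline frames are subsampled. The threshold and stride are assumptions for illustration, not the selection criterion actually developed in the record.

      import cv2

      def select_frames(video_path, blur_threshold=100.0, stride=10):
          cap = cv2.VideoCapture(video_path)
          kept, idx = [], 0
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
              sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
              if sharpness > blur_threshold and idx % stride == 0:
                  kept.append(idx)          # keep this frame for the SFM pipeline
              idx += 1
          cap.release()
          return kept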

  7. 3-D localization of gamma ray sources with coded apertures for medical applications

    NASA Astrophysics Data System (ADS)

    Kaissas, I.; Papadimitropoulos, C.; Karafasoulis, K.; Potiriadis, C.; Lambropoulos, C. P.

    2015-09-01

    Several small gamma cameras for radioguided surgery using CdTe or CdZnTe have parallel or pinhole collimators. Coded aperture imaging is a well-known method for gamma ray source directional identification, applied mainly in astrophysics. The increase in efficiency due to the substitution of the collimators by the coded masks renders the method attractive for gamma probes used in radioguided surgery. We have constructed and operationally verified a setup consisting of two CdTe gamma cameras with Modified Uniformly Redundant Array (MURA) coded aperture masks of rank 7 and 19 and a video camera. The 3-D position of point-like radioactive sources is estimated via triangulation using decoded images acquired by the gamma cameras. We have also developed code for both fast and detailed simulations and we have verified the agreement between experimental results and simulations. In this paper we present a simulation study for the spatial localization of two point sources using coded aperture masks with rank 7 and 19.
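
    The triangulation step can be sketched as follows: each gamma camera supplies its position and a unit direction toward the decoded point source, and the 3-D estimate is taken as the midpoint of the shortest segment between the two rays. The inputs are illustrative and the MURA decoding itself is not shown.

      import numpy as np

      def triangulate(p1, d1, p2, d2):
          """p1, p2: camera positions; d1, d2: ray directions (assumed non-parallel)."""
          d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
          a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
          w = p1 - p2
          d, e = d1 @ w, d2 @ w
          t1 = (b * e - c * d) / (a * c - b * b)   # parameter along ray 1
          t2 = (a * e - b * d) / (a * c - b * b)   # parameter along ray 2
          return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

      print(triangulate(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                        np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])))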

  8. Error resiliency of distributed video coding in wireless video communication

    NASA Astrophysics Data System (ADS)

    Ye, Shuiming; Ouaret, Mourad; Dufaux, Frederic; Ansorge, Michael; Ebrahimi, Touradj

    2008-08-01

    Distributed Video Coding (DVC) is a new paradigm in video coding, based on the Slepian-Wolf and Wyner-Ziv theorems. DVC offers a number of potential advantages: flexible partitioning of the complexity between the encoder and decoder, robustness to channel errors due to intrinsic joint source-channel coding, codec independent scalability, and multi-view coding without communications between the cameras. In this paper, we evaluate the performance of DVC in an error-prone wireless communication environment. We also present a hybrid spatial and temporal error concealment approach for DVC. Finally, we perform a comparison with a state-of-the-art AVC/H.264 video coding scheme in the presence of transmission errors.

  9. A Magnetic Diagnostic Code for 3D Fusion Equilibria

    SciTech Connect

    Samuel A. Lazerson, S. Sakakibara and Y. Suzuki

    2013-03-12

    A synthetic magnetic diagnostics code for fusion equilibria is presented. This code calculates the response of various magnetic diagnostics to the equilibria produced by the VMEC and PIES codes. This allows for the treatment both of equilibria with good nested flux surfaces and of those with stochastic regions. DIAGNO v2.0 builds upon previous codes through the implementation of a virtual casing principle. The code is validated against a vacuum shot on the Large Helical Device (LHD) where the vertical field was ramped. As an exercise of the code, the diagnostic response for various equilibria is calculated on the LHD.

  10. 3D CT-Video Fusion for Image-Guided Bronchoscopy

    PubMed Central

    Higgins, William E.; Helferty, James P.; Lu, Kongkuo; Merritt, Scott A.; Rai, Lav; Yu, Kun-Chang

    2008-01-01

    Bronchoscopic biopsy of the central-chest lymph nodes is an important step for lung-cancer staging. Before bronchoscopy, the physician first visually assesses a patient’s three-dimensional (3D) computed tomography (CT) chest scan to identify suspect lymph-node sites. Next, during bronchoscopy, the physician guides the bronchoscope to each desired lymph-node site. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. Thus, the physician must essentially perform biopsy blindly, and the skill levels between different physicians differ greatly. We describe an approach that enables synergistic fusion between the 3D CT data and the bronchoscopic video. Both the integrated planning and guidance system and the internal CT-video registration and fusion methods are described. Phantom, animal, and human studies illustrate the efficacy of the methods. PMID:18096365

  11. MOM3D/EM-ANIMATE - MOM3D WITH ANIMATION CODE

    NASA Technical Reports Server (NTRS)

    Shaeffer, J. F.

    1994-01-01

    MOM3D (LAR-15074) is a FORTRAN method-of-moments electromagnetic analysis algorithm for open or closed 3-D perfectly conducting or resistive surfaces. Radar cross section with plane wave illumination is the prime analysis emphasis; however, provision is also included for local port excitation for computing antenna gain patterns and input impedances. The Electric Field Integral Equation form of Maxwell's equations is solved using local triangle couple basis and testing functions with a resultant system impedance matrix. The analysis emphasis is not only for routine RCS pattern predictions, but also for phenomenological diagnostics: bistatic imaging, currents, and near scattered/total electric fields. The images, currents, and near fields are output in form suitable for animation. MOM3D computes the full backscatter and bistatic radar cross section polarization scattering matrix (amplitude and phase), body currents and near scattered and total fields for plane wave illumination. MOM3D also incorporates a new bistatic k space imaging algorithm for computing down range and down/cross range diagnostic images using only one matrix inversion. MOM3D has been made memory and cpu time efficient by using symmetric matrices, symmetric geometry, and partitioned fixed and variable geometries suitable for design iteration studies. MOM3D may be run interactively or in batch mode on 486 IBM PCs and compatibles, UNIX workstations or larger computers. A 486 PC with 16 megabytes of memory has the potential to solve a 30 square wavelength (containing 3000 unknowns) symmetric configuration. Geometries are described using a triangular mesh input in the form of a list of spatial vertex points and a triangle join connection list. The EM-ANIMATE (LAR-15075) program is a specialized visualization program that displays and animates the near-field and surface-current solutions obtained from an electromagnetics program, in particular, that from MOM3D. The EM-ANIMATE program is windows based and

  12. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    PubMed Central

    Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree brings extremely high computational complexity. Innovative work on improving the coding tree to further reduce encoding time is presented in this paper. A novel low-complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting the CU distribution. Finally, a CU coding tree probability update is proposed, aiming to address probabilistic model distortion problems caused by CC. Experimental results show that the proposed low-complexity CU coding tree mechanism significantly reduces encoding time by 27% for lossy coding and 42% for visually lossless coding and lossless coding. The proposed low-complexity CU coding tree mechanism is devoted to improving coding performance under various application conditions. PMID:26999741

  13. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding.

    PubMed

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree brings extremely high computational complexity. Innovative work on improving the coding tree to further reduce encoding time is presented in this paper. A novel low-complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting the CU distribution. Finally, a CU coding tree probability update is proposed, aiming to address probabilistic model distortion problems caused by CC. Experimental results show that the proposed low-complexity CU coding tree mechanism significantly reduces encoding time by 27% for lossy coding and 42% for visually lossless coding and lossless coding. The proposed low-complexity CU coding tree mechanism is devoted to improving coding performance under various application conditions.
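
    As a toy illustration of where such a probability model plugs into the recursive coding-tree decision, the sketch below terminates CU splitting early when a predicted split probability falls under a threshold. The logistic model of QP and a content-change score, and the threshold itself, are invented stand-ins and not the model proposed in the record.

      import math

      def split_probability(qp, content_change):
          # Assumed behaviour: more content change favours splits, higher QP fewer.
          return 1.0 / (1.0 + math.exp(0.15 * (qp - 32) - 4.0 * content_change))

      def encode_cu(depth, qp, content_change, max_depth=3, threshold=0.2):
          decisions = ["encode CU at depth %d" % depth]
          if depth < max_depth and split_probability(qp, content_change) >= threshold:
              for _ in range(4):               # recurse into the four sub-CUs
                  decisions += encode_cu(depth + 1, qp, content_change,
                                         max_depth, threshold)
          return decisions

      print(len(encode_cu(0, qp=27, content_change=0.5)))   # deep coding tree
      print(len(encode_cu(0, qp=45, content_change=0.0)))   # early termination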

  14. A Novel 2D-to-3D Video Conversion Method Using Time-Coherent Depth Maps

    PubMed Central

    Yin, Shouyi; Dong, Hao; Jiang, Guangli; Liu, Leibo; Wei, Shaojun

    2015-01-01

    In this paper, we propose a novel 2D-to-3D video conversion method for 3D entertainment applications. 3D entertainment is getting more and more popular and can be found in many contexts, such as TV and home gaming equipment. 3D image sensors are a new method to produce stereoscopic video content conveniently and at a low cost, and can thus meet the urgent demand for 3D videos in the 3D entertainment market. Generally, a 2D image sensor and a 2D-to-3D conversion chip can compose a 3D image sensor. Our study presents a novel 2D-to-3D video conversion algorithm which can be adopted in a 3D image sensor. In our algorithm, a depth map is generated by combining a global depth gradient and local depth refinement for each frame of the 2D video input. The global depth gradient is computed according to the image type, while the local depth refinement is related to color information. As input 2D video content consists of a number of video shots, the proposed algorithm reuses the global depth gradient of frames within the same video shot to generate time-coherent depth maps. The experimental results show that this novel method can adapt to different image types, reduce computational complexity and improve the temporal smoothness of the generated 3D video. PMID:26131674
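
    A sketch of the depth-map construction idea: a global top-to-bottom depth gradient, computed once and reused for every frame of the same shot, is blended with a local colour-derived refinement term. The 0.7/0.3 blend and the use of image intensity as the local cue are illustrative assumptions, not the record's actual refinement rule.

      import numpy as np

      def global_depth_gradient(h, w):
          # Far at the top of the frame, near at the bottom.
          return np.tile(np.linspace(0.0, 1.0, h)[:, None], (1, w))

      def depth_map(frame_gray, shot_gradient, w_global=0.7):
          local = frame_gray.astype(np.float32) / 255.0   # brighter taken as nearer
          return w_global * shot_gradient + (1.0 - w_global) * local

      shot_gradient = global_depth_gradient(360, 640)     # computed once per shot
      frame = np.random.randint(0, 256, (360, 640), dtype=np.uint8)
      print(depth_map(frame, shot_gradient).shape)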

  15. Nonlinear 3D MHD verification study: SpeCyl and PIXIE3D codes for RFP and Tokamak plasmas

    NASA Astrophysics Data System (ADS)

    Bonfiglio, D.; Cappello, S.; Chacon, L.

    2010-11-01

    A strong emphasis is presently placed in the fusion community on reaching predictive capability of computational models. An essential requirement of such an endeavor is the process of assessing the mathematical correctness of computational tools, termed verification [1]. We present here a successful nonlinear cross-benchmark verification study between the 3D nonlinear MHD codes SpeCyl [2] and PIXIE3D [3]. Excellent quantitative agreement is obtained in both 2D and 3D nonlinear visco-resistive dynamics for reversed-field pinch (RFP) and tokamak configurations [4]. RFP dynamics, in particular, lends itself as an ideal non-trivial test-bed for 3D nonlinear verification. Perspectives for future application of the fully-implicit parallel code PIXIE3D to RFP physics, in particular to address open issues on RFP helical self-organization, will be provided. [1] M. Greenwald, Phys. Plasmas 17, 058101 (2010); [2] S. Cappello and D. Biskamp, Nucl. Fusion 36, 571 (1996); [3] L. Chacón, Phys. Plasmas 15, 056103 (2008); [4] D. Bonfiglio, L. Chacón and S. Cappello, Phys. Plasmas 17 (2010).

  16. Recent Improvements To The RELAP5-3D Code

    SciTech Connect

    Richard A. Riemke; Paul D. Bayless; S. Michael Modro

    2006-06-01

    The RELAP5-3D computer program has been recently improved. Changes were made as follows: (1) heat structures are allowed to be decoupled from hydrodynamic components, (2) built-in material properties for heat structures have been made consistent with those in MATPRO and the Nuclear Systems Materials Handbook (they are now documented in the RELAP5-3D manual), and (3) Schrock's flow quality correlation is now used for a downward-oriented junction from a horizontal volume for the stratification entrainment/pullthrough model.

  17. Moving Human Path Tracking Based on Video Surveillance in 3d Indoor Scenarios

    NASA Astrophysics Data System (ADS)

    Zhou, Yan; Zlatanova, Sisi; Wang, Zhe; Zhang, Yeting; Liu, Liu

    2016-06-01

    Video surveillance systems are increasingly used for a variety of 3D indoor applications. We can analyse human behaviour, discover and avoid crowded areas, monitor human traffic and so forth. In this paper we concentrate on the use of surveillance cameras to track and reconstruct the path a person has followed. For this purpose we integrate video surveillance data with a 3D indoor model of the building and develop a single-human moving-path tracking method. We process the surveillance videos to detect single human moving traces; then we match the depth information of the 3D scenes to the constructed 3D indoor network model and define the human traces in the 3D indoor space. Finally, the single human traces extracted from multiple cameras are connected with the help of the connectivity provided by the 3D network model. Using this approach, we can reconstruct the entire walking path. The experiments, conducted with a single person, have verified the effectiveness and robustness of the method.

  18. 3D surface reconstruction based on image stitching from gastric endoscopic video sequence

    NASA Astrophysics Data System (ADS)

    Duan, Mengyao; Xu, Rong; Ohya, Jun

    2013-09-01

    This paper proposes a method for reconstructing detailed 3D structures of internal organs such as the gastric wall from endoscopic video sequences. The proposed method consists of four major steps: feature-point-based 3D reconstruction, 3D point cloud stitching, dense point cloud creation and Poisson surface reconstruction. Before the first step, we partition one video sequence into groups, where each group consists of two successive frames (image pairs), and each pair in each group contains one overlapping part, which is used as a stitching region. First, the 3D point cloud of each group is reconstructed by utilizing structure from motion (SFM). Secondly, a scheme based on SIFT features registers and stitches the obtained 3D point clouds, by estimating the transformation matrix of the overlapping part between different groups with high accuracy and efficiency. Thirdly, we select the most robust SIFT feature points as the seed points, and then obtain the dense point cloud from the sparse point cloud via a depth testing method presented by Furukawa. Finally, by utilizing Poisson surface reconstruction, polygonal patches for the internal organs are obtained. Experimental results demonstrate that the proposed method achieves high accuracy and efficiency for 3D reconstruction of the gastric surface from an endoscopic video sequence.
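
    The core computation of the stitching step, sketched under assumptions: given 3D points from two groups that correspond through matched SIFT features, the rigid rotation R and translation t aligning them are recovered with the SVD-based Kabsch solution. Outlier rejection and the feature matching itself are assumed to have been done already.

      import numpy as np

      def rigid_transform(src, dst):
          """src, dst: (N, 3) arrays of corresponding 3D points; returns R, t."""
          src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
          H = (src - src_c).T @ (dst - dst_c)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:            # guard against a reflection
              Vt[-1] *= -1
              R = Vt.T @ U.T
          t = dst_c - R @ src_c
          return R, t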

  19. A robust coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung; Sayood, Khalid; Nelson, Don J.

    1992-01-01

    A layered packet video coding algorithm based on a progressive transmission scheme is presented. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.

  20. Efficient entropy coding for scalable video coding

    NASA Astrophysics Data System (ADS)

    Choi, Woong Il; Yang, Jungyoup; Jeon, Byeungwoo

    2005-10-01

    The standardization for the scalable extension of H.264 has called for additional functionality based on the H.264 standard to support combined spatio-temporal and SNR scalability. For the entropy coding of the H.264 scalable extension, the Context-based Adaptive Binary Arithmetic Coding (CABAC) scheme has been considered so far. In this paper, we present a new context modeling scheme that uses the inter-layer correlation between syntax elements. As a result, it improves the coding efficiency of entropy coding in the H.264 scalable extension. Simulation results of applying the proposed scheme to encoding the syntax element mb_type show that the improvement in coding efficiency is up to 16% in terms of bit saving, owing to the estimation of a more adequate probability model.

  1. Rapid 3D video/laser sensing and digital archiving with immediate on-scene feedback for 3D crime scene/mass disaster data collection and reconstruction

    NASA Astrophysics Data System (ADS)

    Altschuler, Bruce R.; Oliver, William R.; Altschuler, Martin D.

    1996-02-01

    We describe a system for rapid and convenient video data acquisition and 3-D numerical coordinate data calculation able to provide precise 3-D topographical maps and 3-D archival data sufficient to reconstruct a 3-D virtual reality display of a crime scene or mass disaster area. Under a joint U.S. Army/U.S. Air Force project with collateral U.S. Navy support, to create a 3-D surgical robotic inspection device -- a mobile, multi-sensor robotic surgical assistant to aid the surgeon in diagnosis, continual surveillance of patient condition, and robotic surgical telemedicine of combat casualties -- the technology is being perfected for remote, non-destructive, quantitative 3-D mapping of objects of varied sizes. This technology is being advanced with hyper-speed parallel video technology and compact, very fast laser electro-optics, such that 3-D surface map data will shortly be acquired within the time frame of conventional 2-D video. With simple field-capable calibration, and mobile or portable platforms, the crime scene investigator could set up and survey the entire crime scene, or portions of it at high resolution, with almost the simplicity and speed of video or still photography. The survey apparatus would record relative position and location, and instantly archive thousands of artifacts at the site with 3-D data points capable of creating unbiased virtual reality reconstructions, or actual physical replicas, for the investigators, prosecutors, and jury.

  2. The 3D Human Motion Control Through Refined Video Gesture Annotation

    NASA Astrophysics Data System (ADS)

    Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.

    In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces such as remote controllers with motion sensing technology on the Nintendo Wii [1]. In particular, video-based human computer interaction (HCI) techniques have been applied to games; a representative game is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI has the great benefit of freeing players from the intractable game controller. Moreover, for communication between humans and computers, video-based HCI is very important since it is intuitive, easily available, and inexpensive. On the other hand, extracting semantic low-level features from video human motion data is still a major challenge; the achievable accuracy depends strongly on each subject's characteristics and on environmental noise. Of late, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports, 'Angelina Jolie' in the Beowulf movie) and for analyzing motions for specific performance (e.g., 'golf swing' and 'walking'). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which a column corresponds to a sub-body part and a row represents a time frame of the capture. Thus, we can extract the motion of a sub-body part simply by selecting the corresponding columns. Unlike the low-level feature values of video human motion, the 3D motion-capture data matrix does not contain pixel values but is much closer to a human level of semantics.
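
    A tiny illustration of the point about the capture matrix: rows are time frames, columns are sub-body-part channels, so the motion of one sub-body part is obtained by selecting the corresponding columns. The column layout below is a made-up example.

      import numpy as np

      frames, channels = 300, 60        # hypothetical clip: 300 frames x 60 channels
      motion = np.random.rand(frames, channels)

      RIGHT_ARM_COLUMNS = slice(12, 21) # assumed layout: 3 joints x 3 coordinates
      right_arm_motion = motion[:, RIGHT_ARM_COLUMNS]
      print(right_arm_motion.shape)     # (300, 9)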

  3. Verification and Validation of the k-kL Turbulence Model in FUN3D and CFL3D Codes

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Carlson, Jan-Renee; Rumsey, Christopher L.

    2015-01-01

    The implementation of the k-kL turbulence model using multiple computational fluid dynamics (CFD) codes is reported herein. The k-kL model is a two-equation turbulence model based on Abdol-Hamid's closure and Menter's modification to Rotta's two-equation model. Rotta shows that a reliable transport equation can be formed from the turbulent length scale L and the turbulent kinetic energy k. Rotta's equation is well suited for term-by-term modeling and displays useful features compared to other two-equation models. An important difference is that this formulation leads to the inclusion of higher-order velocity derivatives in the source terms of the scale equations. This can enhance the ability of Reynolds-averaged Navier-Stokes (RANS) solvers to simulate unsteady flows. The present report documents the formulation of the model as implemented in the CFD codes FUN3D and CFL3D. Methodology, verification and validation examples are shown. Attached and separated flow cases are documented and compared with experimental data. The results show generally very good comparisons with canonical and experimental data, as well as matching results code-to-code. The results from this formulation are similar to or better than results using the SST turbulence model.

  4. 3D unstructured-mesh radiation transport codes

    SciTech Connect

    Morel, J.

    1997-12-31

    Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations. ATTILA is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: $S_n$ (discrete-ordinates), $P_n$ (spherical harmonics), and $SP_n$ (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE, but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard $S_n$ discretization in angle in conjunction with trilinear-discontinuous spatial differencing, and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.

  5. MOM3D method of moments code theory manual

    NASA Technical Reports Server (NTRS)

    Shaeffer, John F.

    1992-01-01

    MOM3D is a FORTRAN algorithm that solves Maxwell's equations as expressed via the electric field integral equation for the electromagnetic response of open or closed three dimensional surfaces modeled with triangle patches. Two joined triangles (couples) form the vector current unknowns for the surface. Boundary conditions are for perfectly conducting or resistive surfaces. The impedance matrix represents the fundamental electromagnetic interaction of the body with itself. A variety of electromagnetic analysis options are possible once the impedance matrix is computed including backscatter radar cross section (RCS), bistatic RCS, antenna pattern prediction for user specified body voltage excitation ports, RCS image projection showing RCS scattering center locations, surface currents excited on the body as induced by specified plane wave excitation, and near field computation for the electric field on or near the body.
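
    As a rough illustration of the workflow this record describes, once an impedance matrix Z and an excitation vector V are available, the surface-current coefficients follow from a dense linear solve, and far-field quantities such as RCS are then post-processed from those currents. The sketch below shows only that linear-algebra step with a synthetic, randomly generated Z and V; it is not the MOM3D formulation itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64                                  # number of triangle-couple current unknowns (synthetic)

# Synthetic, well-conditioned complex impedance matrix and plane-wave excitation vector.
Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) + 10 * np.eye(n)
V = rng.normal(size=n) + 1j * rng.normal(size=n)

# Surface-current coefficients from Z I = V.
I = np.linalg.solve(Z, V)

# Far-field quantities (e.g., backscatter RCS) would be post-processed from the
# currents via radiation integrals; here we only report a norm as a stand-in.
print("||I|| =", np.linalg.norm(I))
```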

  6. 3D filtering technique in presence of additive noise in color videos implemented on DSP

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr I.; Montenegro-Monroy, Hector; Palacios, Alfredo

    2014-05-01

    A filtering method for color videos contaminated by additive noise is presented. The proposed framework employs three filtering stages: spatial similarity filtering, neighboring-frame denoising, and spatial post-processing smoothing. The difference from other state-of-the-art filtering methods is that this approach, based on fuzzy logic, analyzes basic and related gradient values between neighboring pixels within a 7 × 7 sliding window in the vicinity of a central pixel in each of the RGB channels. Next, the similarity measures between the analogous pixels in the color bands are taken into account during the denoising. Then, two neighboring video frames are analyzed together, estimating local motions between the frames using a block-matching procedure. In the final stage, the edges and smooth areas of the current frame are processed differently during the post-processing filtering. Numerous simulation results confirm that this 3D fuzzy filter performs better than other state-of-the-art methods, such as 3D-LLMMSE, WMVCE, RFMDAF, FDARTF G, VBM3D and NLM, in terms of objective criteria (PSNR, MAE, NCD and SSIM) as well as subjective perception via the human visual system on different color videos. An efficiency analysis of the designed filter and the other mentioned filters has been performed on the DSPs TMS320DM642 and TMS320DM648 by Texas Instruments through MATLAB and the Simulink module, showing that the novel 3D fuzzy filter can be used in real-time processing applications.

  7. Integrating Online and Offline 3D Deep Learning for Automated Polyp Detection in Colonoscopy Videos.

    PubMed

    Yu, Lequan; Chen, Hao; Dou, Qi; Qin, Jing; Heng, Pheng Ann

    2016-12-07

    Automated polyp detection in colonoscopy videos has been demonstrated to be a promising way for colorectal cancer (CRC) prevention and diagnosis. Traditional manual screening is time-consuming, operator-dependent and error-prone; hence, automated detection approaches are in high demand in clinical practice. However, automated polyp detection is very challenging due to high intra-class variations in polyp size, color, shape and texture, and low inter-class variations between polyps and hard mimics. In this paper, we propose a novel offline and online 3D deep learning integration framework that leverages a 3D fully convolutional network (3D-FCN) to tackle this challenging problem. Compared with previous methods employing hand-crafted features or 2D CNNs, the 3D-FCN is capable of learning more representative spatio-temporal features from colonoscopy videos, and hence has more powerful discrimination capability. More importantly, we propose a novel online learning scheme to deal with the problem of limited training data by harnessing the specific information of an input video in the learning process. We integrate offline and online learning to effectively reduce the number of false positives generated by the offline network and further improve the detection performance. Extensive experiments on the dataset of the MICCAI 2015 Challenge on Polyp Detection demonstrated the better performance of our method compared with other competitors.

  8. Improved lossless intra coding for next generation video coding

    NASA Astrophysics Data System (ADS)

    Vanam, Rahul; He, Yuwen; Ye, Yan

    2016-09-01

    Recently, there have been efforts by the ITU-T VCEG and ISO/IEC MPEG to further improve the compression performance of the High Efficiency Video Coding (HEVC) standard for developing a potential next generation video coding standard. The exploratory codec software of this potential standard includes new coding tools for inter and intra coding. In this paper, we present a new intra prediction mode for lossless intra coding. Our new intra mode derives a prediction filter for each input pixel using its neighboring reconstructed pixels, and applies this filter to the nearest neighboring reconstructed pixels to generate a prediction pixel. The proposed intra mode is demonstrated to improve the performance of the exploratory software for lossless intra coding, yielding a maximum and average bitrate savings of 4.4% and 2.11%, respectively.

  9. Subjective evaluation of mobile 3D video content: depth range versus compression artifacts

    NASA Astrophysics Data System (ADS)

    Jumisko-Pyykkö, Satu; Haustola, Tomi; Boev, Atanas; Gotchev, Atanas

    2011-02-01

    Mobile 3D television is a new form of media experience, which combines the freedom of mobility with the greater realism of presenting visual scenes in 3D. Achieving this combination is a challenging task, as a greater viewing experience has to be delivered with the limited resources of the mobile delivery channel, such as limited bandwidth and a power-constrained handheld player. This challenge creates the need for tight optimization of the overall mobile 3DTV system. The presence of depth and of compression artifacts in the played 3D video are two major factors that influence the viewer's subjective quality of experience and satisfaction. The primary goal of this study has been to examine the influence of varying depth and compression artifacts on the subjective quality of experience for mobile 3D video content. In addition, the influence of the studied variables on simulator sickness symptoms has been studied, and a vocabulary-based descriptive quality-of-experience evaluation has been conducted for a subset of variables in order to understand the perceptual characteristics in detail. In the experiment, 30 participants evaluated the overall quality of different 3D video contents with varying depth ranges, compressed with varying quantization parameters. The test video content was presented on a portable autostereoscopic LCD display with a horizontal double-density pixel arrangement. The results of the psychometric study indicate that compression artifacts are a dominant factor determining the quality of experience compared to varying depth range. More specifically, contents with strong compression were rejected by the viewers and deemed unacceptable. The results of the descriptive study confirm the dominance of visible spatial artifacts along with the added value of depth for artifact-free content. The level of visual discomfort was determined to be not offensive.

  10. Numerical modelling of gravel unconstrained flow experiments with the DAN3D and RASH3D codes

    NASA Astrophysics Data System (ADS)

    Sauthier, Claire; Pirulli, Marina; Pisani, Gabriele; Scavia, Claudio; Labiouse, Vincent

    2015-12-01

    Landslide continuum dynamic models have improved considerably in recent years, but a consensus on the best method of calibrating the input resistance parameter values for predictive analyses has not yet emerged. In the present paper, numerical simulations of a series of laboratory experiments performed at the Laboratory for Rock Mechanics of EPF Lausanne were undertaken with the RASH3D and DAN3D numerical codes. They aimed at analysing the possibility of using calibrated ranges of parameters (1) in a code different from the one they were obtained with and (2) to simulate potential events made of a material with the same characteristics as back-analysed past events but involving a different volume and propagation path. For this purpose, one of the four benchmark laboratory tests was used as the past event to calibrate the dynamic basal friction angle, assuming a Coulomb-type behaviour of the sliding mass, and this back-analysed value was then used to simulate the three other experiments, taken as potential events. The computational findings show good correspondence with the experimental results in terms of the characteristics of the final deposits (i.e., runout, length and width). Furthermore, the best-fit values of the dynamic basal friction angle obtained for the two codes turn out to be close to each other and within the range of values measured with pseudo-dynamic tilting tests.

  11. ROAR: A 3-D tethered rocket simulation code

    SciTech Connect

    York, A.R. II; Ludwigsen, J.S.

    1992-04-01

    A high-velocity impact testing technique, utilizing a tethered rocket, is being developed at Sandia National Laboratories. The technique involves tethering a rocket assembly to a pivot location and flying it in a semicircular trajectory to deliver the rocket and payload to an impact target location. Integral to developing this testing technique is the parallel development of accurate simulation models. An operational computer code, called ROAR (Rocket-on-a-Rope), has been developed to simulate the three-dimensional transient dynamic behavior of the tether and motor/payload assembly. This report presents a discussion of the parameters modeled, the governing set of equations, the through-time integration scheme, and the input required to set up a model. Also included is a sample problem and a comparison with experimental results.

  12. A Gaussian process guided particle filter for tracking 3D human pose in video.

    PubMed

    Sedai, Suman; Bennamoun, Mohammed; Huynh, Du Q

    2013-11-01

    In this paper, we propose a hybrid method that combines Gaussian process learning, a particle filter, and annealing to track the 3D pose of a human subject in video sequences. Our approach, which we refer to as annealed Gaussian process guided particle filter, comprises two steps. In the training step, we use a supervised learning method to train a Gaussian process regressor that takes the silhouette descriptor as an input and produces multiple output poses modeled by a mixture of Gaussian distributions. In the tracking step, the output pose distributions from the Gaussian process regression are combined with the annealed particle filter to track the 3D pose in each frame of the video sequence. Our experiments show that the proposed method does not require initialization and does not lose tracking of the pose. We compare our approach with a standard annealed particle filter using the HumanEva-I dataset and with other state of the art approaches using the HumanEva-II dataset. The evaluation results show that our approach can successfully track the 3D human pose over long video sequences and give more accurate pose tracking results than the annealed particle filter.

  13. PEGASUS. 3D Direct Simulation Monte Carlo Code Which Solves for Geometrics

    SciTech Connect

    Bartel, T.J.

    1998-12-01

    Pegasus is a 3D Direct Simulation Monte Carlo Code which solves for geometries which can be represented by bodies of revolution. Included are all the surface chemistry enhancements in the 2D code Icarus as well as a real vacuum pump model. The code includes multiple species transport.

  15. EM modeling for GPIR using 3D FDTD modeling codes

    SciTech Connect

    Nelson, S.D.

    1994-10-01

    An analysis of the one-, two-, and three-dimensional electrical characteristics of structural cement and concrete is presented. This work connects experimental efforts in characterizing cement and concrete in the frequency and time domains with Finite Difference Time Domain (FDTD) modeling efforts for these substances. These efforts include electromagnetic (EM) modeling of simple lossless homogeneous materials with aggregate and targets, and modeling of dispersive and lossy materials with aggregate and complex target geometries for Ground Penetrating Imaging Radar (GPIR). Two- and three-dimensional FDTD codes (developed at LLNL) were used for the modeling efforts. The purpose of the experimental and modeling efforts is to gain knowledge about the electrical properties of the concrete typically used in the construction industry for bridges and other load-bearing structures. The goal is to optimize the performance of a high-sample-rate impulse radar and data acquisition system and to design an antenna system matched to the characteristics of this material. Results show agreement to within 2 dB between the amplitudes of the experimental and modeled data, while the frequency peaks correlate to within 10%, the differences being due to the unknown exact nature of the aggregate placement.
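
    For readers unfamiliar with the FDTD machinery this record refers to, the following toy one-dimensional Yee update (vacuum, lossless, simple additive source) shows the leapfrog structure of such codes. It is a generic textbook sketch, not the LLNL codes, and the grid size, time-step count, and source are arbitrary assumptions.

```python
import numpy as np

nz, nt = 200, 180            # grid cells and time steps (arbitrary)
ez = np.zeros(nz)            # electric field samples
hy = np.zeros(nz)            # magnetic field samples
src = nz // 2                # source location

for n in range(nt):
    # Update H from the spatial difference of E (normalized units, Courant number 1).
    hy[:-1] += ez[1:] - ez[:-1]
    # Update E from the spatial difference of H.
    ez[1:] += hy[1:] - hy[:-1]
    # Inject a Gaussian pulse as a simple (soft) source.
    ez[src] += np.exp(-0.5 * ((n - 40) / 12.0) ** 2)

print("peak |Ez| =", np.abs(ez).max())
```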

  16. 3D video analysis of the novel object recognition test in rats.

    PubMed

    Matsumoto, Jumpei; Uehara, Takashi; Urakawa, Susumu; Takamura, Yusaku; Sumiyoshi, Tomiki; Suzuki, Michio; Ono, Taketoshi; Nishijo, Hisao

    2014-10-01

    The novel object recognition (NOR) test has been widely used to test memory function. We developed a 3D computerized video analysis system that estimates nose contact with an object in Long Evans rats to analyze object exploration during NOR tests. The results indicate that the 3D system reproducibly and accurately scores the NOR test. Furthermore, the 3D system captures a 3D trajectory of the nose during object exploration, enabling detailed analyses of spatiotemporal patterns of object exploration. The 3D trajectory analysis revealed a specific pattern of object exploration in the sample phase of the NOR test: normal rats first explored the lower parts of objects and then gradually explored the upper parts. A systematic injection of MK-801 suppressed changes in these exploration patterns. The results, along with those of previous studies, suggest that the changes in the exploration patterns reflect neophobia to a novel object and/or changes from spatial learning to object learning. These results demonstrate that the 3D tracking system is useful not only for detailed scoring of animal behaviors but also for investigation of characteristic spatiotemporal patterns of object exploration. The system has the potential to facilitate future investigation of neural mechanisms underlying object exploration that result from dynamic and complex brain activity.

  17. Streaming video-based 3D reconstruction method compatible with existing monoscopic and stereoscopic endoscopy systems

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; van der Mark, Wannes; Eendebak, Pieter T.; Landsmeer, Sander H.; van Eekeren, Adam W. M.; ter Haar, Frank B.; Wieringa, F. Pieter; van Basten, Jean-Paul

    2012-06-01

    Compared to open surgery, minimal invasive surgery offers reduced trauma and faster recovery. However, lack of direct view limits space perception. Stereo-endoscopy improves depth perception, but is still restricted to the direct endoscopic field-of-view. We describe a novel technology that reconstructs 3D-panoramas from endoscopic video streams providing a much wider cumulative overview. The method is compatible with any endoscope. We demonstrate that it is possible to generate photorealistic 3D-environments from mono- and stereoscopic endoscopy. The resulting 3D-reconstructions can be directly applied in simulators and e-learning. Extended to real-time processing, the method looks promising for telesurgery or other remote vision-guided tasks.

  18. Video lensfree microscopy of 2D and 3D culture of cells

    NASA Astrophysics Data System (ADS)

    Allier, C. P.; Vinjimore Kesavan, S.; Coutard, J.-G.; Cioni, O.; Momey, F.; Navarro, F.; Menneteau, M.; Chalmond, B.; Obeid, P.; Haguet, V.; David-Watine, B.; Dubrulle, N.; Shorte, S.; van der Sanden, B.; Di Natale, C.; Hamard, L.; Wion, D.; Dolega, M. E.; Picollet-D'hahan, N.; Gidrol, X.; Dinten, J.-M.

    2014-03-01

    Innovative imaging methods are continuously being developed to investigate the function of biological systems at the microscopic scale. As an alternative to advanced cell microscopy techniques, we are developing lensfree video microscopy, which opens new ranges of capabilities, in particular at the mesoscopic level. Lensfree video microscopy allows the observation of a cell culture in an incubator over a very large field of view (24 mm2) for extended periods of time. As a result, a large set of comprehensive data can be gathered with strong statistics, both in space and time. Lensfree video microscopy can capture images of cells cultured in various physical environments. We focus on two different case studies: the quantitative analysis of the spontaneous network formation of HUVEC endothelial cells, and the coupling of lensfree microscopy with 3D cell culture in the study of epithelial tissue morphogenesis. In summary, we demonstrate that lensfree video microscopy is a powerful tool for conducting cell assays in 2D and 3D culture experiments. The applications are in the realms of fundamental biology, tissue regeneration, drug development and toxicology studies.

  19. Joint source/channel coding for prioritized wireless transmission of multiple 3-D regions of interest in 3-D medical imaging data.

    PubMed

    Sanchez, V

    2013-02-01

    This paper presents a 3-D medical image coding method featuring two major improvements to previous work on 3-D region of interest (RoI) coding for telemedicine applications, namely: 1) a data prioritization scheme that allows coding of multiple 3-D RoIs; and 2) a joint source/channel coding scheme that allows prioritized transmission of multiple 3-D RoIs over wireless channels. The method, which is based on the 3-D integer wavelet transform and embedded block coding with optimized truncation with 3-D context modeling, generates scalable and error-resilient bit streams with 3-D RoI decoding capabilities. Coding of multiple 3-D RoIs is attained by prioritizing the wavelet-transformed data according to a Gaussian mixture distribution, whereas error resiliency is attained by employing the error correction capabilities of rate-compatible punctured turbo codes. The robustness of the proposed method is evaluated for transmission of real 3-D medical images over Rayleigh-fading channels with a priori knowledge of the channel condition. Evaluation results show that the proposed coding method provides superior performance compared to equal error protection and unequal error protection techniques.

  20. FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression

    PubMed Central

    Jacob, J. Augustin; Kumar, N. Senthil

    2015-01-01

    A novel optimal structure for implementing 3D-integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. The integer set with reduced mean squared error (MSE) and high coding efficiency are considered for implementation in FPGA. The proposed method proves that the least resources are utilized for the integer set that has shorter bit values. Optimal 3D-integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of different integer sets. The experimental results reveal that direct method of computing the 3D-integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better when compared to other integer sets in terms of resource utilization and power dissipation. PMID:26601120
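
    A 3D-integer DCT of the kind evaluated in this record is separable: one integer approximation of the 1D DCT matrix is applied along each of the three axes of a video cube. The sketch below demonstrates that separable structure using a rounded, scaled 8-point DCT-II matrix as the integer approximation; the scaling factor is an arbitrary choice and the matrix is not the paper's specific integer set.

```python
import numpy as np

def integer_dct_matrix(n=8, scale=64):
    """Rounded, scaled n-point DCT-II matrix as a simple integer approximation."""
    k = np.arange(n).reshape(-1, 1)
    x = np.arange(n).reshape(1, -1)
    c = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0] *= 1 / np.sqrt(2)
    return np.round(scale * np.sqrt(2.0 / n) * c).astype(np.int64)

T = integer_dct_matrix()

# An 8x8x8 cube of synthetic video samples: two spatial axes plus time.
cube = np.random.default_rng(2).integers(0, 256, size=(8, 8, 8))

# Separable 3D transform: apply T along each axis in turn, keeping axis order (i, j, k).
coeff = np.tensordot(T, cube, axes=(1, 0))                       # transform axis 0
coeff = np.tensordot(T, coeff, axes=(1, 1)).transpose(1, 0, 2)   # transform axis 1
coeff = np.tensordot(T, coeff, axes=(1, 2)).transpose(1, 2, 0)   # transform axis 2

print(coeff.shape, coeff.dtype)
```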

  2. Fast prediction algorithm for multiview video coding

    NASA Astrophysics Data System (ADS)

    Abdelazim, Abdelrahman; Mein, Stephen James; Varley, Martin Roy; Ait-Boudaoud, Djamel

    2013-03-01

    The H.264/multiview video coding (MVC) standard has been developed to enable efficient coding for three-dimensional and multiple viewpoint video sequences. The inter-view statistical dependencies are utilized and an inter-view prediction is employed to provide more efficient coding; however, this increases the overall encoding complexity. Motion homogeneity is exploited here to selectively enable inter-view prediction, and to reduce complexity in the motion estimation (ME) and the mode selection processes. This has been accomplished by defining situations that relate macro-blocks' motion characteristics to the mode selection and the inter-view prediction processes. When comparing the proposed algorithm to the H.264/MVC reference software and other recent work, the experimental results demonstrate a significant reduction in ME time while maintaining similar rate-distortion performance.
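
    Motion homogeneity of the kind exploited in this record is often estimated from the motion vectors of already-coded neighbouring macroblocks; if they are nearly identical, an expensive inter-view search can be skipped. The sketch below is a generic illustration of such a decision rule with a hypothetical variance threshold, not the authors' exact criterion.

```python
import numpy as np

def is_motion_homogeneous(neighbor_mvs, threshold=1.0):
    """Return True if neighbouring motion vectors are (nearly) identical.

    neighbor_mvs: array-like of shape (n, 2) with the (x, y) motion vectors of
    already-coded neighbouring macroblocks. The variance threshold is a
    hypothetical tuning parameter.
    """
    mvs = np.asarray(neighbor_mvs, dtype=float)
    return float(mvs.var(axis=0).sum()) < threshold

# Homogeneous region: skip the costly inter-view prediction search.
print(is_motion_homogeneous([(2, 1), (2, 1), (2, 2)]))    # True  -> skip inter-view ME
# Heterogeneous region: keep the full search.
print(is_motion_homogeneous([(2, 1), (-7, 4), (0, -6)]))  # False -> run inter-view ME
```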

  3. A View to the Future: A Novel Approach for 3D-3D Superimposition and Quantification of Differences for Identification from Next-Generation Video Surveillance Systems.

    PubMed

    Gibelli, Daniele; De Angelis, Danilo; Poppa, Pasquale; Sforza, Chiarella; Cattaneo, Cristina

    2017-03-01

    Techniques of 2D-3D superimposition are widely used in cases of personal identification from video surveillance systems. However, the progressive improvement of 3D image acquisition technology will enable operators to perform also 3D-3D facial superimposition. This study aims at analyzing the possible applications of 3D-3D superimposition to personal identification, although from a theoretical point of view. Twenty subjects underwent a facial 3D scan by stereophotogrammetry twice at different time periods. Scans were superimposed two by two according to nine landmarks, and root-mean-square (RMS) value of point-to-point distances was calculated. When the two superimposed models belonged to the same individual, RMS value was 2.10 mm, while it was 4.47 mm in mismatches with a statistically significant difference (p < 0.0001). This experiment shows the potential of 3D-3D superimposition: Further studies are needed to ascertain technical limits which may occur in practice and to improve methods useful in the forensic practice.

  4. Unbalanced multiple description wavelet coding for scalable video transmission

    NASA Astrophysics Data System (ADS)

    Choupani, Roya; Wong, Stephan; Tolun, Mehmet

    2012-10-01

    Scalable video coding and multiple description coding are two different adaptation schemes for video transmission over heterogeneous and best-effort networks such as the Internet. We propose a new method to encode video for unreliable networks with rate adaptation capability. Our proposed method groups three-dimensional discrete wavelet transform coefficients into different descriptions and applies modified embedded zerotree coding for rate adaptation. The proposed method optimizes the bit rates of the descriptions with respect to the channel bit rates and the maximum acceptable distortion. The experimental results in the presence of one description loss indicate that, on average, videos at a rate of 1000 kbit/s are reconstructed with a Y-component peak signal-to-noise ratio (Y-PSNR) of 36.2 dB. The dynamic allocation of descriptions to the network channels is optimized for rate-distortion minimization. The improvement in terms of Y-PSNR achieved by rate-distortion optimization ranges between 0.7 and 5.3 dB at different bit rates.

  5. ROI-preserving 3D video compression method utilizing depth information

    NASA Astrophysics Data System (ADS)

    Ti, Chunli; Xu, Guodong; Guan, Yudong; Teng, Yidan

    2015-09-01

    Efficiently transmitting the extra information of three-dimensional (3D) video is becoming a key issue in the development of 3DTV. The 2D-plus-depth format not only occupies less bandwidth and is compatible with transmission over existing channels, but can also provide technical support for advanced 3D video compression to some extent. This paper proposes an ROI-preserving compression scheme to further improve visual quality at a limited bit rate. According to the connection between the focus of the human visual system (HVS) and depth information, regions of interest (ROI) can be automatically selected via depth map processing. The main improvement over common methods is that a mean-shift-based segmentation is applied to the depth map before foreground ROI selection to keep the integrity of the scene. Besides, the sensitive areas along the edges are also protected. Spatio-temporal filtering adapted to H.264 is applied to the non-ROI regions of both the 2D video and the depth map before compression. Experiments indicate that the ROI extracted by this method is better preserved and more consistent with subjective perception, and that the proposed method keeps the key high-frequency information more effectively while the bit rate is reduced.

  6. 3-D field computation: The near-triumph of commercial codes

    SciTech Connect

    Turner, L.R.

    1995-07-01

    In recent years, more and more of those who design and analyze magnets and other devices are using commercial codes rather than developing their own. This paper considers the commercial codes and the features available with them. Other recent trends with 3-D field computation include parallel computation and visualization methods such as virtual reality systems.

  7. Lossy to lossless object-based coding of 3-D MRI data.

    PubMed

    Menegaz, Gloria; Thiran, Jean-Philippe

    2002-01-01

    We propose a fully three-dimensional (3-D) object-based coding system that exploits the diagnostic relevance of the different regions of the volumetric data for rate allocation. The data are first decorrelated via a 3-D discrete wavelet transform. Implementation via the lifting scheme allows integer-to-integer mapping, enabling lossless coding, and facilitates the definition of the object-based inverse transform. The coding process assigns disjoint segments of the bitstream to the different objects, which can be independently accessed and reconstructed at any quality up to lossless. Two fully 3-D coding strategies are considered: embedded zerotree coding (EZW-3D) and multidimensional layered zero coding (MLZC), both generalized for region of interest (ROI)-based processing. In order to avoid artifacts along region boundaries, some extra coefficients must be encoded for each object. This gives rise to an overhead in the bitstream with respect to the case where the volume is encoded as a whole. The amount of such extra information depends on both the filter length and the decomposition depth. The system is characterized on a set of head magnetic resonance images. Results show that MLZC and EZW-3D have competitive performances. In particular, the best MLZC mode outperforms the other state-of-the-art techniques on one of the datasets for which results are available in the literature.
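
    The lifting formulation mentioned above is what makes reversible, integer-to-integer wavelet decomposition possible. As a concrete but generic illustration, the reversible 5/3 lifting steps used in lossless JPEG 2000-style coding are shown below for a 1D integer signal of even length; the abstract does not state which filter the authors used, so the 5/3 choice here is an assumption.

```python
import numpy as np

def lift_53_forward(x):
    """Reversible 5/3 lifting on a 1D integer signal of even length.

    np.roll implements periodic boundary extension for brevity
    (JPEG 2000 itself uses symmetric extension)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step: detail = odd - floor((left even + right even) / 2).
    odd -= (even + np.roll(even, -1)) >> 1
    # Update step: approx = even + floor((left detail + right detail + 2) / 4).
    even += (np.roll(odd, 1) + odd + 2) >> 2
    return even, odd

def lift_53_inverse(approx, detail):
    """Exact integer inverse of lift_53_forward."""
    even = approx - ((np.roll(detail, 1) + detail + 2) >> 2)
    odd = detail + ((even + np.roll(even, -1)) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

signal = np.array([10, 12, 15, 13, 9, 8, 7, 11], dtype=np.int64)
a, d = lift_53_forward(signal)
assert np.array_equal(lift_53_inverse(a, d), signal)   # lossless reconstruction
print(a, d)
```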

  8. Cross modality registration of video and magnetic tracker data for 3D appearance and structure modeling

    NASA Astrophysics Data System (ADS)

    Sargent, Dusty; Chen, Chao-I.; Wang, Yuan-Fang

    2010-02-01

    The paper reports a fully automated, cross-modality sensor data registration scheme between video and magnetic tracker data. This registration scheme is intended for use in computerized imaging systems to model the appearance, structure, and dimensions of human anatomy in three dimensions (3D) from endoscopic videos, particularly colonoscopic videos, for cancer research and clinical practice. The proposed cross-modality calibration procedure operates as follows: before a colonoscopic procedure, the surgeon inserts a magnetic tracker into the working channel of the endoscope or otherwise fixes the tracker's position on the scope. The surgeon then maneuvers the scope-tracker assembly to view a checkerboard calibration pattern from a few different viewpoints for a few seconds. The calibration procedure is then complete, and the relative pose (translation and rotation) between the reference frames of the magnetic tracker and the scope is determined. During the colonoscopic procedure, the readings from the magnetic tracker are used to automatically deduce the pose (both position and orientation) of the scope's reference frame over time, without complicated image analysis. Knowing the scope movement over time then allows us to infer the 3D appearance and structure of the organs and tissues in the scene. While there are other well-established mechanisms for inferring the movement of the camera (scope) from images, they are often sensitive to mistakes in image analysis, error accumulation, and structure deformation. The proposed method of using a magnetic tracker to establish the camera motion parameters thus provides a robust and efficient alternative for 3D model construction. Furthermore, the calibration procedure requires neither special training nor expensive calibration equipment (apart from a camera calibration pattern, a checkerboard that can be printed on any laser or inkjet printer).

  9. Seepage and Piping through Levees and Dikes using 2D and 3D Modeling Codes

    DTIC Science & Technology

    2016-06-01

    Final report by Hwai-Ping Cheng, Stephen M. England, and Clarissa M. Murray, Coastal and Hydraulics Laboratory, Flood & Coastal Storm Damage Reduction Program, ERDC/CHL TR-16-6, June 2016: Seepage and Piping through Levees and Dikes Using 2D and 3D Modeling Codes.

  10. Nonintrusive viewpoint tracking for 3D for perception in smart video conference

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Martinez-Ponte, Isabel; Meessen, Jerome; Delaigle, Jean-François

    2006-02-01

    Globalisation of people's interaction in the industrial world and the ecological cost of transport make video-conferencing an interesting solution for collaborative work. However, the lack of immersive perception makes video-conferencing unappealing. The TIFANIS tele-immersion system was conceived to let users interact as if they were physically together. In this paper, we focus on an important feature of the immersive system: the automatic tracking of the user's point of view in order to correctly render in his display the scene from the other site. Viewpoint information has to be computed in a very short time, and the detection system should be non-intrusive; otherwise it would become cumbersome for the user, i.e., he would lose the feeling of "being there". The viewpoint detection system consists of several modules. First, an analysis module identifies and follows regions of interest (ROI) where faces are detected. We show the cooperative approach between spatial detection and temporal tracking. Secondly, an eye detector finds the position of the eyes within the faces. Then, the 3D positions of the eyes are deduced using stereoscopic images from a binocular camera. Finally, the 3D scene is rendered in real time according to the new point of view.
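
    The 3D eye-position step described above is, at its core, stereo triangulation: with a calibrated, rectified binocular camera, the disparity between the two eye detections gives depth directly. The snippet below is the standard pinhole/rectified-stereo relation shown only as an illustration; the focal length, baseline, and pixel coordinates are invented numbers.

```python
def triangulate_rectified(x_left, x_right, y, focal_px, baseline_m):
    """Back-project a matched point from a rectified stereo pair to 3D.

    x_left, x_right, y: pixel coordinates of the same feature (e.g., an eye)
    in the left/right images, relative to the principal point.
    focal_px: focal length in pixels; baseline_m: camera separation in metres.
    """
    disparity = x_left - x_right
    z = focal_px * baseline_m / disparity        # depth
    x = x_left * z / focal_px                    # lateral position (left-camera frame)
    y3d = y * z / focal_px                       # vertical position
    return x, y3d, z

# Hypothetical numbers: 800 px focal length, 12 cm baseline, 40 px disparity.
print(triangulate_rectified(x_left=65.0, x_right=25.0, y=-10.0,
                            focal_px=800.0, baseline_m=0.12))
```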

  11. Adaptive down-sampling video coding

    NASA Astrophysics Data System (ADS)

    Wang, Ren-Jie; Chien, Ming-Chen; Chang, Pao-Chi

    2010-01-01

    Down-sampling coding, which sub-samples the image and encodes the smaller-sized images, is one of the solutions for raising image quality at insufficiently high rates. In this work, we propose Adaptive Down-Sampling (ADS) coding for H.264/AVC. The overall system distortion can be analyzed as the sum of the down-sampling distortion and the coding distortion. The down-sampling distortion is mainly the loss of the high-frequency components, which is highly dependent on the spatial difference. The coding distortion can be derived from classical rate-distortion theory. For a given rate and video sequence, the optimal down-sampling resolution ratio can be derived by minimizing the system distortion built from the models of these two distortions. This optimal resolution ratio is used in both the down-sampling and up-sampling processes of the ADS coding scheme. As a result, the rate-distortion performance of ADS coding is consistently higher than that of fixed-ratio coding or H.264/AVC by 2 to 4 dB at low to medium rates.
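
    The ratio selection can be pictured as a one-dimensional search over candidate ratios, where each candidate's total distortion is the sum of a modelled detail-loss term and a modelled coding-distortion term at the bits left for the smaller picture. The sketch below uses deliberately simple placeholder models for both terms, so the numbers and the chosen ratio are illustrative only, not the paper's analytical models.

```python
import numpy as np

def downsampling_distortion(ratio, spatial_activity):
    """Placeholder model: high-frequency loss grows as resolution shrinks."""
    return spatial_activity * (1.0 - ratio) ** 2

def coding_distortion(ratio, total_bits):
    """Placeholder R-D model: D ~ exp(-c * bits-per-pixel)."""
    bits_per_pixel = total_bits / (ratio ** 2)   # fewer pixels -> more bits each
    return np.exp(-0.8 * bits_per_pixel)

def best_ratio(total_bits, spatial_activity, candidates=np.linspace(0.3, 1.0, 71)):
    totals = [downsampling_distortion(r, spatial_activity) +
              coding_distortion(r, total_bits) for r in candidates]
    return candidates[int(np.argmin(totals))]

# With few bits the optimum tends toward a smaller picture; with more bits
# the optimum ratio moves back up toward full resolution.
print(best_ratio(total_bits=0.5, spatial_activity=0.3))
print(best_ratio(total_bits=4.0, spatial_activity=0.3))
```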

  12. MOEMS-based time-of-flight camera for 3D video capturing

    NASA Astrophysics Data System (ADS)

    You, Jang-Woo; Park, Yong-Hwa; Cho, Yong-Chul; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Lee, Seung-Wan

    2013-03-01

    We present a Time-of-Flight (TOF) video camera capturing real-time depth images (a.k.a. depth maps), which are generated from fast-modulated IR images using a novel MOEMS modulator with a switching speed of 20 MHz. In general, 3 or 4 independent IR (e.g., 850 nm) images are required to generate a single frame of the depth image. Captured video of a moving object frequently shows motion drag between sequentially captured IR images, which results in the so-called 'motion blur' problem even when the frame rate of the depth image is fast (e.g., 30 to 60 Hz). We propose a novel 'single shot' TOF 3D camera architecture generating a single depth image out of synchronously captured IR images. The imaging system consists of a 2x2 imaging lens array, MOEMS optical shutters (modulators) placed on each lens aperture, and a standard CMOS image sensor. The IR light reflected from the object is modulated by the optical shutters on the apertures of the 2x2 lens array, and the transmitted images are captured on the image sensor, resulting in 2x2 sub-IR images. As a result, the depth image is generated from these four simultaneously captured sub-IR images, hence the motion blur problem is eliminated. The resulting performance is very useful for applying 3D cameras to human-machine interaction devices such as user interfaces for TVs, monitors, or handheld devices, and for motion capture of the human body. In addition, we show that the presented 3D camera can be modified to capture color together with the depth image simultaneously at the 'single shot' frame rate.
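
    Continuous-wave TOF cameras of this kind typically recover depth from four phase-shifted measurements per pixel via an arctangent relation. The snippet below shows that standard four-phase conversion for a single modulation frequency; it is a generic textbook relation (sign conventions vary between references), and the 20 MHz frequency is the only number taken from the record.

```python
import numpy as np

C = 299_792_458.0          # speed of light, m/s
F_MOD = 20e6               # modulation frequency from the record, Hz

def tof_depth(a0, a90, a180, a270, f_mod=F_MOD):
    """Depth from four phase-shifted IR measurements (0, 90, 180, 270 degrees)."""
    phase = np.arctan2(a270 - a90, a0 - a180)      # in (-pi, pi]
    phase = np.mod(phase, 2 * np.pi)               # fold into [0, 2*pi)
    return C * phase / (4 * np.pi * f_mod)         # unambiguous range: C / (2 * f_mod)

# Hypothetical per-pixel samples (arbitrary units); arrays work element-wise too.
print(tof_depth(a0=120.0, a90=80.0, a180=60.0, a270=100.0))
```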

  13. Toward a 3D video format for auto-stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Vetro, Anthony; Yea, Sehoon; Smolic, Aljoscha

    2008-08-01

    There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.

  14. Monitoring an eruption fissure in 3D: video recording, particle image velocimetry and dynamics

    NASA Astrophysics Data System (ADS)

    Witt, Tanja; Walter, Thomas R.

    2015-04-01

    The processes during an eruption are very complex, and several parameters are measured to obtain a better understanding. One of the measured parameters is the velocity of particles and patterns, such as ash and emitted magma, and of the volcano itself. The resulting velocity field provides insights into the dynamics of a vent. Here we test our algorithm for three-dimensional velocity fields on videos of the second fissure eruption of Bárdarbunga in 2014, where we acquired videos of lava fountains at the main fissure with two high-speed cameras at small angles between the cameras. Additionally, we test the algorithm on videos of the geyser Strokkur, where we had three cameras and larger angles between the cameras. The velocity is calculated by a correlation in Fourier space of consecutive images. Considering that we only have the velocity field of the surface, smaller angles result in a better resolution of the velocity field in the near field. For general movements, larger angles can also be useful, e.g., to get the direction, height and velocity of eruption clouds. In summary, 3D velocimetry can be used for several applications and with different setups depending on the application.
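
    The "correlation in Fourier space of consecutive images" mentioned above is the standard PIV building block: the cross-correlation of two interrogation windows is computed with FFTs, and the location of its peak gives the displacement between frames. The sketch below is that generic building block with a synthetic shifted pattern, not the authors' full 3D pipeline.

```python
import numpy as np

def displacement_fft(win_a, win_b):
    """Integer-pixel displacement of win_b relative to win_a via FFT cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak indices to signed shifts.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)

rng = np.random.default_rng(3)
frame1 = rng.normal(size=(64, 64))
frame2 = np.roll(frame1, shift=(3, -5), axis=(0, 1))   # pattern moved by (3, -5)

print(displacement_fft(frame1, frame2))                # expected: (3, -5)
```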

  15. Multitasking the INS3D-LU code on the Cray Y-MP

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod; Yoon, Seokkwan

    1991-01-01

    This paper presents the results of multitasking the INS3D-LU code on eight processors. The code is a full Navier-Stokes solver for incompressible fluid in three dimensional generalized coordinates using a lower-upper symmetric-Gauss-Seidel implicit scheme. This code has been fully vectorized on oblique planes of sweep and parallelized using autotasking with some directives and minor modifications. The timing results for five grid sizes are presented and analyzed. The code has achieved a processing rate of over one Gflops.

  16. Graphics to H.264 video encoding for 3D scene representation and interaction on mobile devices using region of interest

    NASA Astrophysics Data System (ADS)

    Le, Minh Tuan; Nguyen, Congdu; Yoon, Dae-Il; Jung, Eun Ku; Jia, Jie; Kim, Hae-Kwang

    2007-12-01

    In this paper, we propose a method of 3D graphics-to-video encoding and streaming that is embedded into a remote interactive 3D visualization system for rapidly representing a 3D scene on mobile devices without having to download it from the server. In particular, a 3D graphics-to-video framework is presented that increases the visual quality of regions of interest (ROI) of the video by allocating more bits to the ROI during H.264 video encoding. The ROI are identified by projecting 3D objects onto a 2D plane during rasterization. The system allows users to navigate the 3D scene and interact with objects of interest to query their descriptions. We developed an adaptive media streaming server that can provide an adaptive video stream, in terms of object-based quality, to the client according to the user's preferences and the variation of the network bandwidth. Results show that with ROI mode selection, the PSNR of the test samples changes only slightly while the visual quality of the objects of interest increases noticeably.
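
    ROI-weighted bit allocation of this kind is commonly realized by lowering the quantization parameter (QP) for macroblocks covered by the projected objects and raising it elsewhere. The snippet below is a generic illustration of such a per-macroblock QP map; the QP offsets, macroblock size, and mask are assumptions, not the paper's encoder settings.

```python
import numpy as np

def qp_map_from_roi(roi_mask, base_qp=30, roi_offset=-4, bg_offset=3, mb_size=16):
    """Build a per-macroblock QP map: lower QP (more bits) inside the ROI."""
    h, w = roi_mask.shape
    mbs_y, mbs_x = h // mb_size, w // mb_size
    qp = np.full((mbs_y, mbs_x), base_qp + bg_offset, dtype=int)
    for my in range(mbs_y):
        for mx in range(mbs_x):
            block = roi_mask[my*mb_size:(my+1)*mb_size, mx*mb_size:(mx+1)*mb_size]
            if block.any():                       # macroblock touches a projected object
                qp[my, mx] = base_qp + roi_offset
    return np.clip(qp, 0, 51)                     # H.264 QP range

# Toy 64x64 frame with a rasterized object in the middle.
mask = np.zeros((64, 64), dtype=bool)
mask[20:44, 24:40] = True
print(qp_map_from_roi(mask))
```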

  17. Overview of MPEG internet video coding

    NASA Astrophysics Data System (ADS)

    Wang, R. G.; Li, G.; Park, S.; Kim, J.; Huang, T.; Jang, E. S.; Gao, W.

    2015-09-01

    MPEG has produced standards that have provided the industry with the best video compression technologies. In order to address the diversified needs of the Internet, MPEG issued the Call for Proposals (CfP) for Internet video coding in July 2011. It is anticipated that any patent declaration associated with the Baseline Profile of this standard will indicate that the patent owner is prepared to grant a free-of-charge license to an unrestricted number of applicants on a worldwide, non-discriminatory basis and under other reasonable terms and conditions to make, use, and sell implementations of the Baseline Profile of this standard, in accordance with the ITU-T/ITU-R/ISO/IEC Common Patent Policy. Three different codecs responded to the CfP: WVC, VCB and IVC. WVC was proposed jointly by Apple, Cisco, Fraunhofer HHI, Magnum Semiconductor, Polycom, RIM and others; it is essentially AVC Baseline. VCB was proposed by Google and is essentially VP8. IVC was proposed by several universities (Peking University, Tsinghua University, Zhejiang University, Hanyang University, Korea Aerospace University and others), and its coding tools were developed from scratch. In this paper, we give an overview of the coding tools in IVC and evaluate its performance by comparing it with WVC, VCB and AVC High Profile.

  18. Transport analysis in toroidal helical plasmas using the integrated code: TASK3D

    NASA Astrophysics Data System (ADS)

    Wakasa, A.; Fukuyama, A.; Murakami, S.; Beidler, C. D.; Maassberg, H.; Yokoyama, M.; Sato, M.

    2009-11-01

    The integrated simulation code in helical plasmas, TASK3D, is being developed on the basis of an integrated modeling code for tokamak plasma, TASK. In helical systems, the neoclassical transport is one of the important issues in addition to the anomalous transport, because of strong temperature dependence of heat conductivity and an important role in determining the radial electric field. We have already constructed the neoclassical transport database in LHD, DGN/LHD. The mono-energetic diffusion coefficients are evaluated based on the Monte Carlo method by DCOM code and the mono-energetic diffusion coefficients database is constructed using a neural network technique. Also we apply GSRAKE code, which solves the ripple-averaged drift kinetic equation, to obtain transport coefficients in highly collisionless regime. We have newly incorporated the DGN/LHD module into TASK3D. We will present several results of transport simulation in typical LHD plasmas.

  19. Analysis of EEG signals regularity in adults during video game play in 2D and 3D.

    PubMed

    Khairuddin, Hamizah R; Malik, Aamir S; Mumtaz, Wajid; Kamel, Nidal; Xia, Likun

    2013-01-01

    Video games have long been part of the entertainment industry. Nonetheless, it is not well known how video games can affect us with the advancement of 3D technology. The purpose of this study is to investigate the regularity of EEG signals when playing video games in 2D and 3D modes. A total of 29 healthy subjects (24 male, 5 female) with a mean age of 21.79 (1.63) years participated. Subjects were asked to play a car-racing video game in three different modes (2D, 3D passive and 3D active). In the 3D passive mode, subjects wore passive polarized glasses (cinema type), while for 3D active, active shutter glasses were used. Scalp EEG data were recorded during game play using a 19-channel EEG machine, with linked ears used as reference. After the data were pre-processed, the signal irregularity for all conditions was computed. Two parameters were used to measure signal complexity for time-series data: i) Hjorth complexity and ii) the Composite Permutation Entropy Index (CPEI). Based on these two parameters, our results showed that the complexity level increased from the eyes-closed to the eyes-open condition, and increased further for 3D compared to 2D game play.
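
    Of the two regularity measures named above, Hjorth complexity has a compact closed form: mobility is the ratio of the standard deviation of a signal's first derivative to that of the signal, and complexity is the mobility of the derivative divided by the mobility of the signal. The snippet below computes it for a synthetic channel; it illustrates only the Hjorth part, not the CPEI, and the test signals are invented.

```python
import numpy as np

def hjorth_mobility(x):
    return np.sqrt(np.var(np.diff(x)) / np.var(x))

def hjorth_complexity(x):
    """Complexity = mobility of the first derivative / mobility of the signal."""
    return hjorth_mobility(np.diff(x)) / hjorth_mobility(x)

fs = 256
t = np.arange(0, 4, 1 / fs)                        # 4 s of a synthetic EEG-like channel
clean = np.sin(2 * np.pi * 10 * t)                 # pure 10 Hz rhythm
noisy = clean + 0.5 * np.random.default_rng(4).normal(size=t.size)

# A noisier (more irregular) signal yields a higher complexity value.
print(hjorth_complexity(clean), hjorth_complexity(noisy))
```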

  20. Rendering-oriented multiview video coding based on chrominance information reconstruction

    NASA Astrophysics Data System (ADS)

    Shao, Feng; Yu, Mei; Jiang, Gangyi; Zhang, Zhaoyang

    2010-05-01

    Three-dimensional (3-D) video systems are expected to be a next-generation visual application. Since multiview video for 3-D video systems is composed of color and associated depth information, its huge requirement for data storage and transmission is an important problem. We propose a rendering-oriented multiview video coding (MVC) method based on chrominance information reconstruction that incorporates the rendering technique into the MVC process. The proposed method discards certain chrominance information to reduce bitrates, and performs reasonable bitrate allocation between color and depth videos. At the decoder, a chrominance reconstruction algorithm is presented to achieve accurate reconstruction by warping the neighboring views and colorizing the luminance-only pixels. Experimental results show that the proposed method can save nearly 20% on bitrates against the results without discarding the chrominance information. Moreover, under a fixed bitrate budget, the proposed method can greatly improve the rendering quality.

  1. 3D Neutron Transport PWR Full-core Calculation with RMC code

    NASA Astrophysics Data System (ADS)

    Qiu, Yishu; She, Ding; Fan, Xiao; Wang, Kan; Li, Zeguang; Liang, Jingang; Leroyer, Hadrien

    2014-06-01

    Nowadays, there is growing interest in the use of Monte Carlo codes to calculate detailed power density distributions in full-core reactors. With the Inspur TS1000 HPC Server of Tsinghua University, several calculations have been performed on the EDF 3D Neutron Transport PWR Full-core benchmark through large-scale parallelism. To investigate and compare the results of the deterministic method and the Monte Carlo method, EDF R&D and the Department of Engineering Physics of Tsinghua University are collaborating on code-to-code verification. Two codes are therefore used in this paper: COCAGNE, a deterministic core code developed by EDF R&D, and the Monte Carlo code RMC developed by the Department of Engineering Physics at Tsinghua University. First, the full-core model is described and a 26-group calculation is performed by these two codes using the same 26-group cross-section library provided by EDF R&D. Then the parallel and tally performance of RMC is discussed. RMC employs a novel algorithm which eliminates most of the communications, and the speedup ratio clearly increases almost linearly with the number of nodes. Furthermore, the cell-mapping method applied by RMC consumes little time even when tallying millions of cells. The results of the codes COCAGNE and RMC are compared in three ways and agree well with each other. It can be concluded that both COCAGNE and RMC are able to provide 3D transport solutions with detailed power density distribution calculations for PWR full-core reactors. Finally, to investigate how many histories are needed to obtain a given standard deviation for a full 3D solution, the non-symmetrized condensed 2-group fluxes of RMC are discussed.

  2. Turbomachinery Heat Transfer and Loss Modeling for 3D Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    DeWitt, Kenneth; Ameri, Ali

    2005-01-01

    This report focuses on making use of NASA Glenn on-site computational facilities to develop, validate, and apply models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes, in order to enhance the capability to compute heat transfer and losses in turbomachinery.

  3. Users manual for the NASA Lewis three-dimensional ice accretion code (LEWICE 3D)

    NASA Technical Reports Server (NTRS)

    Bidwell, Colin S.; Potapczuk, Mark G.

    1993-01-01

    A description of the methodology, the algorithms, and the input and output data along with an example case for the NASA Lewis 3D ice accretion code (LEWICE3D) has been produced. The manual has been designed to help the user understand the capabilities, the methodologies, and the use of the code. The LEWICE3D code is a conglomeration of several codes for the purpose of calculating ice shapes on three-dimensional external surfaces. A three-dimensional external flow panel code is incorporated which has the capability of calculating flow about arbitrary 3D lifting and nonlifting bodies with external flow. A fourth order Runge-Kutta integration scheme is used to calculate arbitrary streamlines. An Adams type predictor-corrector trajectory integration scheme has been included to calculate arbitrary trajectories. Schemes for calculating tangent trajectories, collection efficiencies, and concentration factors for arbitrary regions of interest for single droplets or droplet distributions have been incorporated. A LEWICE 2D based heat transfer algorithm can be used to calculate ice accretions along surface streamlines. A geometry modification scheme is incorporated which calculates the new geometry based on the ice accretions generated at each section of interest. The three-dimensional ice accretion calculation is based on the LEWICE 2D calculation. Both codes calculate the flow, pressure distribution, and collection efficiency distribution along surface streamlines. For both codes the heat transfer calculation is divided into two regions, one above the stagnation point and one below the stagnation point, and solved for each region assuming a flat plate with pressure distribution. Water is assumed to follow the surface streamlines, hence starting at the stagnation zone any water that is not frozen out at a control volume is assumed to run back into the next control volume. After the amount of frozen water at each control volume has been calculated the geometry is modified by
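
    The fourth-order Runge-Kutta streamline integration mentioned in this record can be illustrated compactly: given a velocity field v(x), a streamline is advanced by the classical RK4 stages. The example below traces a streamline through a simple analytic swirl field; the field, step size, and step count are illustrative assumptions, not LEWICE3D's panel-code flow solution.

```python
import numpy as np

def velocity(p):
    """Analytic stand-in flow field: rigid-body swirl about the z-axis plus axial flow."""
    x, y, z = p
    return np.array([-y, x, 0.5])

def rk4_step(p, h):
    k1 = velocity(p)
    k2 = velocity(p + 0.5 * h * k1)
    k3 = velocity(p + 0.5 * h * k2)
    k4 = velocity(p + h * k3)
    return p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def trace_streamline(seed, h=0.05, n_steps=200):
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        pts.append(rk4_step(pts[-1], h))
    return np.array(pts)

line = trace_streamline(seed=(1.0, 0.0, 0.0))
print(line[-1])          # helical path: stays near radius 1 while rising in z
```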

  4. Implementation of a kappa-epsilon turbulence model to RPLUS3D code

    NASA Technical Reports Server (NTRS)

    Chitsomboon, Tawit

    1992-01-01

    The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite-rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, there is no inversion of a matrix even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations in the code are also discussed.

  5. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking

    SciTech Connect

    Dettmer, Simon L.; Keyser, Ulrich F.; Pagliara, Stefano

    2014-02-15

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.
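
    A mean-squared-displacement-vs-time analysis of the kind described above reduces, for pure Brownian motion, to fitting MSD(tau) = 2 d D tau over short lag times (plus a localization-error offset in real data). The sketch below generates a 1D random walk and recovers its diffusion coefficient from the initial MSD slope; the time step and diffusivity are invented, and the "local" binning, drift and accuracy terms of the real measurement are omitted, so this is only the core calculation under simplified assumptions.

```python
import numpy as np

def msd(track, max_lag):
    """Time-averaged mean squared displacement for lags 1..max_lag (1D track)."""
    return np.array([np.mean((track[lag:] - track[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])

rng = np.random.default_rng(5)
dt, D_true, n = 0.01, 0.25, 20_000          # time step [s], diffusivity, number of steps
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=n)
track = np.cumsum(steps)                    # simulated 1D Brownian trajectory

lags = np.arange(1, 11)
curve = msd(track, max_lag=10)
# MSD = 2 * D * t in 1D: estimate D from a linear fit through the first lags.
slope = np.polyfit(lags * dt, curve, 1)[0]
print("estimated D:", slope / 2, " (true:", D_true, ")")
```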

  6. Proceeding On : Parallelisation Of Critical Code Passages In PHOENIX/3D

    NASA Astrophysics Data System (ADS)

    Arkenberg, Mario; Wichert, Viktoria; Hauschildt, Peter H.

    2016-10-01

    Highly resolved state-of-the-art 3D atmosphere simulations will remain computationally extremely expensive for years to come. In addition to the need for more computing power, rethinking coding practices is necessary. We take a dual approach here, by introducing especially adapted, parallel numerical methods and correspondingly parallelising time-critical code passages. In the following, we present our work on PHOENIX/3D. While parallelisation is generally worthwhile, it requires revision of time-consuming subroutines with respect to separability of localised data and variables in order to determine the optimal approach. Of course, the same applies to the code structure. The importance of this ongoing work can be showcased by recently derived benchmark results, which were generated utilising MPI and OpenMP. Furthermore, the need for a careful and thorough choice of an adequate, machine-dependent setup is discussed.

  7. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    NASA Astrophysics Data System (ADS)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as, smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable for more than one sports type to come to a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between the object-level and the scene-level analysis. The new 3-D camera modeling is based on collecting features points from two planes, which are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e. players). The algorithm can track up to four players simultaneously. The complete system contributes to summarization by various forms of information, of which the most important are the moving trajectory and real-speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it for a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that the feature detection performance is above 92% and events detection about 90%.

  8. RELAP5-3D Code Includes Athena Features and Models

    SciTech Connect

    Richard A. Riemke; Cliff B. Davis; Richard R. Schultz

    2006-07-01

    Version 2.3 of the RELAP5-3D computer program includes all features and models previously available only in the ATHENA version of the code. These include the addition of new working fluids (i.e., ammonia, blood, carbon dioxide, glycerol, helium, hydrogen, lead-bismuth, lithium, lithium-lead, nitrogen, potassium, sodium, and sodium-potassium) and a magnetohydrodynamic model that expands the capability of the code to model many more thermal-hydraulic systems. In addition to the new working fluids along with the standard working fluid water, one or more noncondensable gases (e.g., air, argon, carbon dioxide, carbon monoxide, helium, hydrogen, krypton, nitrogen, oxygen, sf6, xenon) can be specified as part of the vapor/gas phase of the working fluid. These noncondensable gases were in previous versions of RELAP5- 3D. Recently four molten salts have been added as working fluids to RELAP5-3D Version 2.4, which has had limited release. These molten salts will be in RELAP5-3D Version 2.5, which will have a general release like RELAP5-3D Version 2.3. Applications that use these new features and models are discussed in this paper.

  9. Development of Unsteady Aerodynamic and Aeroelastic Reduced-Order Models Using the FUN3D Code

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Vatsa, Veer N.; Biedron, Robert T.

    2009-01-01

    Recent significant improvements to the development of CFD-based unsteady aerodynamic reduced-order models (ROMs) are implemented into the FUN3D unstructured flow solver. These improvements include the simultaneous excitation of the structural modes of the CFD-based unsteady aerodynamic system via a single CFD solution, minimization of the error between the full CFD and the ROM unsteady aerodynamic solution, and computation of a root locus plot of the aeroelastic ROM. Results are presented for a viscous version of the two-dimensional Benchmark Active Controls Technology (BACT) model and an inviscid version of the AGARD 445.6 aeroelastic wing using the FUN3D code.

  10. Status report on the 'Merging' of the Electron-Cloud Code POSINST with the 3-D Accelerator PIC CODE WARP

    SciTech Connect

    Vay, J.-L.; Furman, M.A.; Azevedo, A.W.; Cohen, R.H.; Friedman, A.; Grote, D.P.; Stoltz, P.H.

    2004-04-19

    We have integrated the electron-cloud code POSINST [1] with WARP [2]--a 3-D parallel Particle-In-Cell accelerator code developed for Heavy Ion Inertial Fusion--so that the two can interoperate. Both codes are run in the same process, communicate through a Python interpreter (already used in WARP), and share certain key arrays (so far, particle positions and velocities). Currently, POSINST provides primary and secondary sources of electrons, beam bunch kicks, a particle mover, and diagnostics. WARP provides the field solvers and diagnostics. Secondary emission routines are provided by the Tech-X package CMEE.

  11. Turbine Internal and Film Cooling Modeling For 3D Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    DeWitt, Kenneth; Garg, Vijay; Ameri, Ali

    2005-01-01

    The aim of this research project is to make use of NASA Glenn on-site computational facilities in order to develop, validate and apply aerodynamic, heat transfer, and turbine cooling models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes such as the Glenn-HT code. Specific areas of effort include the application of the Glenn-HT code to specific configurations made available under the Turbine Based Combined Cycle (TBCC) and Ultra Efficient Engine Technology (UEET) projects, and validating the use of a multi-block code for the time-accurate computation of the detailed flow and heat transfer of cooled turbine airfoils. The goal of the current research is to improve the predictive ability of the Glenn-HT code. This will enable one to design more efficient turbine components for both aviation and power generation. The models will be tested against specific configurations provided by NASA Glenn.

  12. An Efficient Hierarchical Video Coding Scheme Combining Visual Perception Characteristics

    PubMed Central

    Liu, Pengyu; Jia, Kebin

    2014-01-01

    Different saliencies of visual perception characteristics are the key to constructing a low-complexity video coding framework. A hierarchical video coding scheme based on the human visual system (HVS) is proposed in this paper. The proposed scheme uses a joint video coding framework consisting of a visual perception analysis layer (VPAL) and a video coding layer (VCL). In VPAL, an effective visual perception characteristic detection algorithm is proposed to obtain the visual region of interest (VROI) based on the correlation between coding information (such as motion vector, prediction mode, etc.) and visual attention. Then, the interest priority setting for the VROI according to visual perception characteristics is completed. In VCL, an optional encoding method is developed utilizing the visual interest priority settings from VPAL. As a result, the proposed scheme achieves information reuse and complementarity between visual perception analysis and video coding. Experimental results show that the proposed hierarchical video coding scheme effectively alleviates the contradiction between complexity and accuracy. Compared with H.264/AVC (JM17.0), the proposed scheme reduces video coding time by approximately 80% while maintaining good video image quality, improving video coding performance significantly. PMID:24959623

  13. 3D modeling of architectural objects from video data obtained with the fixed focal length lens geometry

    NASA Astrophysics Data System (ADS)

    Deliś, Paulina; Kędzierski, Michał; Fryśkowska, Anna; Wilińska, Michalina

    2013-12-01

    The article describes the process of creating 3D models of architectural objects on the basis of video images, which had been acquired with a Sony NEX-VG10E fixed focal length video camera. It was assumed that, based on video and Terrestrial Laser Scanning data, it is possible to develop 3D models of architectural objects. The acquisition of video data was preceded by the calibration of the video camera. The process of creating 3D models from video data involves the following steps: video frame selection for the orientation process, orientation of video frames using points with known coordinates from Terrestrial Laser Scanning (TLS), and generating a TIN model using automatic matching methods. The objects were measured with an impulse laser scanner, a Leica ScanStation 2. The created 3D models of architectural objects were compared with 3D models of the same objects for which the self-calibration bundle adjustment process was performed. For this purpose, PhotoModeler software was used. In order to assess the accuracy of the developed 3D models of architectural objects, points with known coordinates from Terrestrial Laser Scanning were used, applying a shortest-distance method. The accuracy analysis showed that the 3D models generated from video images differ by about 0.06–0.13 m from the TLS data.

  14. Assessing the performance of a parallel MATLAB-based 3D convection code

    NASA Astrophysics Data System (ADS)

    Kirkpatrick, G. J.; Hasenclever, J.; Phipps Morgan, J.; Shi, C.

    2008-12-01

    We are currently building 2D and 3D MATLAB-based parallel finite element codes for mantle convection and melting. The codes use the MATLAB implementation of core MPI commands (e.g. Send, Receive, Broadcast) for message passing between computational subdomains. We have found that code development and algorithm testing are much faster in MATLAB than in our previous work coding in C or FORTRAN; this code was built from scratch with only 12 man-months of effort. The one extra cost with respect to C coding on a Beowulf cluster is the cost of the parallel MATLAB license for a >4-core cluster. Here we present some preliminary results on the efficiency of MPI messaging in MATLAB on a small 4-machine, 16-core, 32 GB RAM Intel Q6600 processor-based cluster. Our code implements fully parallelized preconditioned conjugate gradients with a multigrid preconditioner. Our parallel viscous flow solver is currently 20% slower for a 1,000,000 DOF problem on a single core in 2D than the direct-solve MILAMIN MATLAB viscous flow solver. We have tested both continuous and discontinuous pressure formulations. We test with various configurations of network hardware, CPU speeds, and memory using our own and MATLAB's built-in cluster profiler. So far we have only explored relatively small (up to 1.6 GB RAM) test problems. We find that with our current code and Intel memory controller bandwidth limitations we can only get ~2.3 times the performance out of 4 cores compared to 1 core per machine. Even for these small problems the code runs faster with message passing between 4 machines with one core each than with 1 machine with 4 cores and internal messaging (1.29x slower), or 1 core (2.15x slower). It surprised us that for 2D ~1 GB-sized problems with only 3 multigrid levels, the direct solve on the coarsest mesh consumes time comparable to the iterative solve on the finest mesh - a penalty that is greatly reduced either by using a 4th multigrid level or by using an iterative solve at the coarsest grid level. We plan to
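
    The core MPI messaging pattern described above (Send/Receive/Broadcast between computational subdomains) can be sketched outside MATLAB as well. The following is a minimal Python illustration using mpi4py with a 1-D domain decomposition of an explicit diffusion step; the array sizes, coefficients and halo-exchange layout are illustrative assumptions, not taken from the code discussed in this record.

```python
# Minimal sketch of halo exchange for a 1-D decomposed explicit diffusion step.
# Run with e.g.: mpiexec -n 4 python halo_demo.py
# Assumes mpi4py is installed; sizes and physics are illustrative only.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 100                      # interior points per subdomain
u = np.zeros(n_local + 2)          # +2 ghost cells
u[1:-1] = rank                     # arbitrary initial data
alpha, dt, dx = 1.0, 1e-4, 1e-2

left  = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(100):
    # Exchange ghost cells with neighbouring subdomains.
    comm.Sendrecv(sendbuf=u[1:2],   dest=left,  recvbuf=u[-1:], source=right)
    comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[:1],  source=left)
    # Explicit diffusion update on interior points.
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

# Gather a simple diagnostic on rank 0.
total = comm.reduce(u[1:-1].sum(), op=MPI.SUM, root=0)
if rank == 0:
    print("global sum:", total)
```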

  15. Equation-of-State Test Suite for the DYNA3D Code

    SciTech Connect

    Benjamin, Russell D.

    2015-11-05

    This document describes the creation and implementation of a test suite for the Equation-of-State models in the DYNA3D code. A customized input deck has been created for each model, as well as a script that extracts the relevant data from the high-speed edit file created by DYNA3D. Each equation-of-state model is broken apart and individual elements of the model are tested, as well as testing the entire model. The input deck for each model is described and the results of the tests are discussed. The intent of this work is to add this test suite to the validation suite presently used for DYNA3D.

  16. Peach Bottom 2 Turbine Trip Simulation Using TRAC-BF1/COS3D, a Best-Estimate Coupled 3-D Core and Thermal-Hydraulic Code System

    SciTech Connect

    Ui, Atsushi; Miyaji, Takamasa

    2004-10-15

    The best-estimate coupled three-dimensional (3-D) core and thermal-hydraulic code system TRAC-BF1/COS3D has been developed. COS3D, based on a modified one-group neutronic model, is a 3-D core simulator used for licensing analyses and core management of commercial boiling water reactor (BWR) plants in Japan. TRAC-BF1 is a plant simulator based on a two-fluid model. TRAC-BF1/COS3D is a coupled system of both codes, which are connected using a parallel computing tool. This code system was applied to the OECD/NRC BWR Turbine Trip Benchmark. Since the two-group cross-section tables are provided by the benchmark team, COS3D was modified to apply to this specification. Three best-estimate scenarios and four hypothetical scenarios were calculated using this code system. In the best-estimate scenario, the predicted core power with TRAC-BF1/COS3D is slightly underestimated compared with the measured data. The reason seems to be a slight difference in the core boundary conditions, that is, pressure changes and the core inlet flow distribution, because the peak in this analysis is sensitive to them. However, the results of this benchmark analysis show that TRAC-BF1/COS3D gives good precision for the prediction of the actual BWR transient behavior on the whole. Furthermore, the results with the modified one-group model and the two-group model were compared to verify the application of the modified one-group model to this benchmark. This comparison shows that the results of the modified one-group model are appropriate and sufficiently precise.

  17. IM3D: A parallel Monte Carlo code for efficient simulations of primary radiation displacements and damage in 3D geometry

    PubMed Central

    Li, Yong Gang; Yang, Yang; Short, Michael P.; Ding, Ze Jun; Zeng, Zhi; Li, Ju

    2015-01-01

    SRIM-like codes have limitations in describing general 3D geometries when modeling radiation displacements and damage in nanostructured materials. A universal, computationally efficient and massively parallel 3D Monte Carlo code, IM3D, has been developed with excellent parallel scaling performance. IM3D is based on fast indexing of scattering integrals and the SRIM stopping power database, and allows the user a choice of the Constructive Solid Geometry (CSG) or Finite Element Triangle Mesh (FETM) method for constructing 3D shapes and microstructures. For 2D films and multilayers, IM3D perfectly reproduces SRIM results, and can be ∼10² times faster in serial execution and >10⁴ times faster using parallel computation. For 3D problems, it provides a fast approach for analyzing the spatial distributions of primary displacements and defect generation under ion irradiation. Herein we also provide a detailed discussion of our open-source collision cascade physics engine, revealing the true meaning and limitations of the “Quick Kinchin-Pease” and “Full Cascades” options. The issues of femtosecond to picosecond timescales in defining displacement versus damage, and the limitation of the displacements per atom (DPA) unit in quantifying radiation damage (such as its inadequacy in quantifying the degree of chemical mixing), are discussed. PMID:26658477

  18. A hybrid kinetic hot ion PIC module for the M3D-C1 Code

    NASA Astrophysics Data System (ADS)

    Breslau, J. A.; Ferraro, N.; Jardin, S. C.; Kalyanaraman, K.

    2016-10-01

    Building on the success of the original M3D code with the addition of efficient high-order, high-continuity finite elements and a fully implicit time advance making use of cutting-edge numerical techniques, M3D-C1 has become a flagship code for realistic time-dependent 3D MHD and two-fluid calculations of the nonlinear evolution of macroinstabilities in tokamak plasmas. It is therefore highly desirable to introduce to M3D-C1 one of the most-used features of its predecessor: the option to use a drift-kinetic delta-f PIC model for a minority population of energetic ions (representing, e.g., beam ions or fusion alpha particles) coupled with the usual finite element advance of the bulk ion and electron fluids through its pressure tensor. We describe the implementation of a module for this purpose using high-order-of-accuracy numerical integration and carefully tuned to take advantage of state-of-the-art multicore processing elements. Verification results for a toroidal Alfvén eigenmode test problem will be presented, along with a demonstration of favorable parallel scaling to large numbers of supercomputer nodes.

  19. Progress on accelerated calculation of 3D MHD equilibrium with the PIES code

    NASA Astrophysics Data System (ADS)

    Raburn, Daniel; Reiman, Allan; Monticello, Donald

    2016-10-01

    Continuing progress has been made in accelerating the 3D MHD equilibrium code, PIES, using an external numerical wrapper. The PIES code (Princeton Iterative Equilibrium Solver) is capable of calculating 3D MHD equilibria with islands. The numerical wrapper has been demonstrated to greatly improve the rate of convergence in numerous cases corresponding to equilibria in the TFTR device where magnetic islands are present; the numerical wrapper makes use of a Jacobian-free Newton-Krylov solver along with adaptive preconditioning and a sophisticated subspace-restricted Levenberg backtracking algorithm. The wrapper has recently been improved by automation which combines the preexisting backtracking algorithm with insights gained from the stability of the Picard algorithm traditionally used with PIES. Improved progress logging and stopping criteria have also been incorporated into the numerical wrapper.

  20. Voxel-coding method for quantification of vascular structure from 3D images

    NASA Astrophysics Data System (ADS)

    Soltanian-Zadeh, Hamid; Shahrokni, Ali; Zoroofi, Reza A.

    2001-05-01

    This paper presents an image processing method for information extraction from 3D images of vasculature. It automates the study of vascular structures by extracting quantitative information such as skeleton, length, diameter, and vessel-to-tissue ratio for different vessels as well as their branches. Furthermore, it generates 3D visualization of vessels based on desired anatomical characteristics such as vessel diameter or 3D connectivity. Steps of the proposed approach are as follows. (1) Preprocessing, in which intensity adjustment, optimal thresholding, and median filtering are done. (2) 3D thinning, in which the medial axis and skeleton of the vessels are found. (3) Branch labeling, in which different branches are identified and each voxel is assigned to the corresponding branch. (4) Quantitation, in which the length of each branch is estimated based on the number of voxels assigned to it, and its diameter is calculated using the medial axis direction. (5) Visualization, in which the vascular structure is shown in 3D, using color coding and surface rendering methods. We have tested and evaluated the proposed algorithms using simulated images of multi-branch vessels and real confocal microscopic images of the vessels in rat brains. Experimental results illustrate the performance of the methods and the usefulness of the results for medical image analysis applications.
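
    Steps (1)-(4) above can be sketched with standard scientific Python tools. The snippet below is a simplified stand-in, not the authors' implementation: it assumes scikit-image's skeletonize_3d (plain skeletonize in newer releases) for the 3D thinning, uses connected-component labelling as a crude substitute for the paper's branch labelling, and runs on a synthetic test volume with illustrative thresholds.

```python
# Sketch of steps (1)-(4) of the voxel-coding pipeline on a synthetic volume.
# Assumes scikit-image and scipy are available; thresholds are illustrative.
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize_3d

# (0) Synthetic "vessel": a bright tube through a noisy 3-D volume.
vol = np.random.normal(0.1, 0.02, (64, 64, 64))
vol[30:34, 30:34, :] = 1.0

# (1) Preprocessing: median filtering and global thresholding.
vol = ndimage.median_filter(vol, size=3)
binary = vol > 0.5

# (2) 3-D thinning: extract the medial axis / skeleton.
skel = skeletonize_3d(binary) > 0

# (3) Branch labelling: connected components of the skeleton
#     (a simplification of the per-branch voxel assignment in the paper).
labels, n_branch = ndimage.label(skel)

# (4) Quantification: per-branch length (voxel count) and vessel-to-tissue ratio.
lengths = ndimage.sum(skel, labels, index=list(range(1, n_branch + 1)))
vessel_to_tissue = binary.sum() / binary.size
print("branches:", n_branch, "lengths:", lengths, "ratio:", vessel_to_tissue)
```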

  1. Statistical and spatiotemporal correlation based low-complexity video coding for high-efficiency video coding

    NASA Astrophysics Data System (ADS)

    Shang, Xiwu; Wang, Guozhong; Fan, Tao; Li, Yan

    2015-03-01

    High-efficiency video coding (HEVC) is a new coding standard that adopts the quadtree splitting structure based on coding tree units instead of macroblocks, and can support more coding modes and more partitions. Although it can improve compression efficiency, the flexible quadtree block partition and mode selection result in high computational complexity in real-time applications. We propose a low-complexity video coding algorithm for HEVC by utilizing statistical correlation and spatiotemporal correlation, which consists of an early determination of SKIP mode (EDSM) method and an early termination of reference frame selection (ETRFS) method. Since there is a strong correlation for the rate distortion (RD) cost for the SKIP mode between adjacent frames, EDSM detects the SKIP mode according to the threshold derived from the former training frame. Meanwhile, ETRFS terminates the process of reference frame selection using the motion vector and reference frame information from neighboring blocks to skip unnecessary candidate frames. Experimental results demonstrate that the proposed method can achieve about 45.01% complexity reduction on average with a 1.11% BD-rate increase and 0.04 BD-PSNR decrease for random access. The complexity reduction, BD-rate increase, and BD-PSNR decrease for low delay are 46.16%, 0.99%, and 0.03, respectively.
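
    The early-SKIP idea in EDSM, that is, deciding for SKIP when its rate-distortion cost falls below a threshold learned from the previous (training) frame, can be illustrated with a short sketch. The function names, the 1.2 scaling factor and the toy RD costs below are hypothetical assumptions; the paper's actual threshold derivation is more involved.

```python
# Illustrative sketch of an early-SKIP decision driven by a threshold learned
# from the previous frame's SKIP-mode RD costs (not the paper's exact rule).
from statistics import mean

def update_threshold(prev_frame_skip_costs, scale=1.2):
    """Threshold derived from SKIP RD costs of the former (training) frame."""
    return scale * mean(prev_frame_skip_costs) if prev_frame_skip_costs else float("inf")

def choose_mode(rd_cost, threshold, full_search):
    """Early-terminate to SKIP when its RD cost is below the learned threshold."""
    skip_cost = rd_cost("SKIP")
    if skip_cost < threshold:
        return "SKIP", skip_cost           # early determination: no further modes tested
    return full_search(skip_cost)          # otherwise fall back to exhaustive mode search

# Toy usage with made-up RD costs; a real encoder would supply these per CU.
costs = {"SKIP": 120.0, "INTER_2Nx2N": 110.0, "INTRA": 300.0}
thr = update_threshold([100.0, 130.0, 95.0])
mode, cost = choose_mode(lambda m: costs[m],
                         thr,
                         lambda sc: min(costs.items(), key=lambda kv: kv[1]))
print(mode, cost, "threshold =", thr)
```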

  2. Miniature stereoscopic video system provides real-time 3D registration and image fusion for minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Yaron, Avi; Bar-Zohar, Meir; Horesh, Nadav

    2007-02-01

    Sophisticated surgeries require the integration of several medical imaging modalities, like MRI and CT, which are three-dimensional. Many efforts are invested in providing the surgeon with this information in an intuitive and easy-to-use manner. A notable development, made by Visionsense, enables the surgeon to visualize the scene in 3D using a miniature stereoscopic camera. It also provides real-time 3D measurements that allow registration of navigation systems as well as 3D imaging modalities, overlaying these images on the stereoscopic video image in real time. The real-time MIS 'see through tissue' fusion solutions enable the development of new MIS procedures in various surgical segments, such as spine, abdomen, cardio-thoracic and brain. This paper describes 3D surface reconstruction and registration methods using the Visionsense camera, as a step toward fully automated multi-modality 3D registration.

  3. User Guide for the R5EXEC Coupling Interface in the RELAP5-3D Code

    SciTech Connect

    Forsmann, J. Hope; Weaver, Walter L.

    2015-04-01

    This report describes the R5EXEC coupling interface in the RELAP5-3D computer code from the user's perspective. The information in the report is intended for users who want to couple RELAP5-3D to other thermal-hydraulic, neutron kinetics, or control system simulation codes.

  4. The Transient 3-D Transport Coupled Code TORT-TD/ATTICA3D for High-Fidelity Pebble-Bed HTGR Analyses

    NASA Astrophysics Data System (ADS)

    Seubert, Armin; Sureda, Antonio; Lapins, Janis; Bader, Johannes; Laurien, Eckart

    2012-01-01

    This article describes the 3D discrete ordinates-based coupled code system TORT-TD/ATTICA3D that aims at steady state and transient analyses of pebble-bed high-temperature gas cooled reactors. In view of increasing computing power, the application of time-dependent neutron transport methods becomes feasible for best estimate evaluations of safety margins. The calculation capabilities of TORT-TD/ATTICA3D are presented along with the coupling approach, with focus on the time-dependent neutron transport features of TORT-TD. Results obtained for the OECD/NEA/NSC PBMR-400 benchmark demonstrate the transient capabilities of TORT-TD/ATTICA3D.

  5. Methods used in WARP3d, a three-dimensional PIC/accelerator code

    SciTech Connect

    Grote, D.P.; Friedman, A.; Haber, I.

    1997-02-28

    WARP-3d(1,2), a three-dimensional PIC/accelerator code, has been developed over several years and has played a major role in the design and analysis of space-charge dominated beam experiments being carried out by the heavy-ion fusion programs at LLNL and LBNL. Major features of the code will be reviewed, including: residence corrections which allow large timesteps to be taken, electrostatic field solution with subgrid scale resolution of internal conductor boundaries, and a beat beam algorithm. Emphasis will be placed on new features and capabilities of the code, which include: a port to parallel processing environments, space-charge limited injection, and the linking of runs covering different sections of an accelerator. Representative applications in which the new features and capabilities are used will be presented along with the important results.

  6. Spacecraft charging analysis with the implicit particle-in-cell code iPic3D

    SciTech Connect

    Deca, J.; Lapenta, G.; Marchand, R.; Markidis, S.

    2013-10-15

    We present the first results on the analysis of spacecraft charging with the implicit particle-in-cell code iPic3D, designed for running on massively parallel supercomputers. The numerical algorithm is presented, highlighting the implementation of the electrostatic solver and the immersed boundary algorithm, the latter of which makes it possible to handle complex spacecraft geometries. As a first step in the verification process, a comparison is made between the floating potential obtained with iPic3D and with Orbital Motion Limited theory for a spherical particle in a uniform stationary plasma. Second, the numerical model is verified for a CubeSat benchmark by comparing simulation results with those of PTetra for space environment conditions with increasing levels of complexity. In particular, we consider spacecraft charging from plasma particle collection, photoelectron emission and secondary electron emission. The influence of a background magnetic field on the floating potential profile near the spacecraft is also considered. Although the numerical approaches in iPic3D and PTetra are rather different, good agreement is found between the two models, raising the level of confidence in both codes to predict and evaluate the complex plasma environment around spacecraft.
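
    The Orbital Motion Limited comparison mentioned above can be reproduced approximately for a small sphere in a stationary hydrogen plasma by balancing the retarded electron current against the attracted OML ion current. The expressions below are the standard textbook forms and the parameters are illustrative assumptions, not the iPic3D benchmark configuration.

```python
# Estimate the OML floating potential of a small sphere in a stationary
# hydrogen plasma by balancing electron and ion currents (textbook forms;
# parameters are illustrative, not the iPic3D benchmark configuration).
import numpy as np
from scipy.optimize import brentq

mi_over_me = 1836.15          # proton-to-electron mass ratio
tau = 1.0                     # Te / Ti

def current_balance(x):
    """x = e*phi/(k*Te), expected negative at floating conditions.
    Repelled electrons: I_e ~ exp(x); attracted OML ions: I_i ~ (1 - tau*x),
    with thermal-flux ratio sqrt(mi*Te/(me*Ti)) between the species."""
    return np.exp(x) * np.sqrt(mi_over_me * tau) - (1.0 - tau * x)

x_f = brentq(current_balance, -10.0, 0.0)
print(f"floating potential ~ {x_f:.2f} kTe/e")   # about -2.5 for hydrogen, Te = Ti
```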

  7. Representation and coding of large-scale 3D dynamic maps

    NASA Astrophysics Data System (ADS)

    Cohen, Robert A.; Tian, Dong; Krivokuća, Maja; Sugimoto, Kazuo; Vetro, Anthony; Wakimoto, Koji; Sekiguchi, Shunichi

    2016-09-01

    combined with depth and color measurements of the surrounding environment. Localization could be achieved with GPS, inertial measurement units (IMU), cameras, or combinations of these and other devices, while the depth measurements could be achieved with time-of-flight, radar or laser scanning systems. The resulting 3D maps, which are composed of 3D point clouds with various attributes, could be used for a variety of applications, including finding your way around indoor spaces, navigating vehicles around a city, space planning, topographical surveying or public surveying of infrastructure and roads, augmented reality, immersive online experiences, and much more. This paper discusses application requirements related to the representation and coding of large-scale 3D dynamic maps. In particular, we address requirements related to different types of acquisition environments, scalability in terms of progressive transmission and efficiently rendering different levels of details, as well as key attributes to be included in the representation. Additionally, an overview of recently developed coding techniques is presented, including an assessment of current performance. Finally, technical challenges and needs for future standardization are discussed.
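
    One of the simplest ways to picture the progressive-transmission and level-of-detail requirements discussed above is voxel-grid quantization of a point cloud at successively finer resolutions. The sketch below is a generic illustration with synthetic points and arbitrary cell sizes, not one of the specific coding techniques surveyed in this record.

```python
# Generic illustration of progressive level-of-detail for a 3-D point cloud:
# quantize coordinates on successively finer voxel grids and keep unique cells.
# This is a simplified stand-in, not one of the codecs surveyed in the paper.
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 10.0, size=(100_000, 3))   # synthetic map fragment, metres

def voxel_level(points, cell_size):
    """Return one representative (cell centre) per occupied voxel."""
    keys = np.floor(points / cell_size).astype(np.int64)
    unique_cells = np.unique(keys, axis=0)
    return (unique_cells + 0.5) * cell_size

for cell in (1.0, 0.5, 0.25, 0.125):                  # coarse-to-fine refinement
    lod = voxel_level(points, cell)
    print(f"cell {cell:5.3f} m -> {len(lod):7d} points transmitted at this level")
```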

  8. PRONTO3D users' instructions: A transient dynamic code for nonlinear structural analysis

    SciTech Connect

    Attaway, S.W.; Mello, F.J.; Heinstein, M.W.; Swegle, J.W.; Ratner, J.A.; Zadoks, R.I.

    1998-06-01

    This report provides an updated set of users' instructions for PRONTO3D. PRONTO3D is a three-dimensional, transient, solid dynamics code for analyzing large deformations of highly nonlinear materials subjected to extremely high strain rates. This Lagrangian finite element program uses an explicit time integration operator to integrate the equations of motion. Eight-node, uniform strain, hexahedral elements and four-node, quadrilateral, uniform strain shells are used in the finite element formulation. An adaptive time step control algorithm is used to improve stability and performance in plasticity problems. Hourglass distortions can be eliminated without disturbing the finite element solution using either the Flanagan-Belytschko hourglass control scheme or an assumed strain hourglass control scheme. All constitutive models in PRONTO3D are cast in an unrotated configuration defined using the rotation determined from the polar decomposition of the deformation gradient. A robust contact algorithm allows for the impact and interaction of deforming contact surfaces of quite general geometry. The Smooth Particle Hydrodynamics method has been embedded into PRONTO3D using the contact algorithm to couple it with the finite element method.

  9. FURN3D: A computer code for radiative heat transfer in pulverized coal furnaces

    SciTech Connect

    Ahluwalia, R.K.; Im, K.H.

    1992-08-01

    A computer code FURN3D has been developed for assessing the impact of burning different coals on the heat absorption pattern in pulverized coal furnaces. The code is unique in its ability to conduct detailed spectral calculations of radiation transport in furnaces, fully accounting for the size distributions of char, soot and ash particles, ash content, and ash composition. The code uses a hybrid technique for solving the three-dimensional radiation transport equation for absorbing, emitting and anisotropically scattering media. The technique achieves an optimal mix of computational speed and accuracy by combining the discrete ordinate method (S4), the modified differential approximation (MDA) and the P1 approximation in different ranges of optical thickness. The code uses spectroscopic data for estimating the absorption coefficients of the participating gases CO2, H2O and CO. It invokes Mie theory for determining the extinction and scattering coefficients of combustion particulates. The optical constants of char, soot and ash are obtained from dispersion relations derived from reflectivity, transmissivity and extinction measurements. A control-volume formulation is adopted for determining the temperature field inside the furnace. A simple char burnout model is employed for estimating heat release and the evolution of the particle size distribution. The code is written in Fortran 77, has a modular form, and is machine-independent. The computer memory required by the code depends upon the number of grid points specified and whether the transport calculations are performed on a spectral or gray basis.

  11. Development of a 3D CT-scanner using a cone beam and video-fluoroscopic system.

    PubMed

    Endo, M; Yoshida, K; Kamagata, N; Satoh, K; Okazaki, T; Hattori, Y; Kobayashi, S; Jimbo, M; Kusakabe, M; Tateno, Y

    1998-01-01

    We describe the design and implementation of a system that acquires three-dimensional (3D) data of high-contrast objects such as bone, lung, and blood vessels (enhanced by contrast agent). This 3D computed tomography (CT) system is based on a cone beam and video-fluoroscopic system and yields data that is amenable to 3D image processing. An X-ray tube and a large area two-dimensional detector were mounted on a single frame and rotated around objects in 12 seconds. The large area detector consisted of a fluorescent plate and a charge coupled device (CCD) video camera. While the X-ray tube was rotated around the object, a pulsed X-ray was generated (30 pulses per second) and 360 projected images were collected in a 12-second scan. A 256 x 256 x 256 matrix image was reconstructed using a high-speed parallel processor. Reconstruction required approximately 6 minutes. Two volunteers underwent scans of the head or chest. High-contrast objects such as bronchial, vascular, and mediastinal structures in the thorax, or bones and air cavities in the head were delineated in a "real" 3D format. Our 3D CT-scanner appears to produce data useful for clinical imaging and 3D image processing.
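
    The cone-beam reconstruction used by this system is involved, but the underlying project-then-reconstruct cycle can be illustrated with a 2-D parallel-beam analogue using scikit-image's radon/iradon transforms. The snippet below is only loosely modelled on the scan geometry described above (360 projections per rotation) and assumes a recent scikit-image release.

```python
# 2-D parallel-beam analogue of the project/reconstruct cycle (the actual
# system uses cone-beam geometry; this is only a simplified illustration).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

image = resize(shepp_logan_phantom(), (256, 256))      # stand-in for one slice
angles = np.linspace(0.0, 360.0, 360, endpoint=False)  # 360 projections per rotation

sinogram = radon(image, theta=angles)                  # simulated projections
recon = iradon(sinogram, theta=angles, filter_name="ramp")

err = np.sqrt(np.mean((recon - image) ** 2))
print(f"RMS reconstruction error: {err:.4f}")
```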

  12. Architecture of web services in the enhancement of real-time 3D video virtualization in cloud environment

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos

    2016-04-01

    This paper proposes a new approach to improving the application of 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture firstly establishes a software virtualization layer based on QEMU (Quick Emulator), an open-source virtualization software that has been able to virtualize system components except for 3D rendering, which is still in its infancy. The architecture then explores the cloud environment to boost the speed of the rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, which is one of the most advanced 3D virtual Graphics Processing Unit (GPU) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up the rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.

  13. The H.264/MPEG4 advanced video coding

    NASA Astrophysics Data System (ADS)

    Gromek, Artur

    2009-06-01

    H.264/MPEG4-AVC is the newest video coding standard recommended by the International Telecommunication Union - Telecommunication Standardization Sector (ITU-T) and the ISO/IEC Moving Picture Experts Group (MPEG). H.264/MPEG4-AVC has recently become the leading standard for generic audiovisual services since its deployment for digital television. Nowadays it is commonly used in a wide range of video applications such as mobile services, videoconferencing, IPTV, HDTV, video storage and many more. In this article, the author briefly describes the technology applied in the H.264/MPEG4-AVC video coding standard, approaches to real-time implementation, and directions of future development.

  14. An analysis of brightness as a factor in visual discomfort caused by watching stereoscopic 3D video

    NASA Astrophysics Data System (ADS)

    Kim, Yong-Woo; Kang, Hang-Bong

    2015-05-01

    Even though various studies have examined the factors that cause visual discomfort in watching stereoscopic 3D video, the brightness factor has not been dealt with sufficiently. In this paper, we analyze visual discomfort under various illumination conditions by considering eye-blinking rate and saccadic eye movement. In addition, we measure the perceived depth before and after watching 3D stereoscopic video by using our own 3D depth measurement instruments. Our test sequences consist of six illumination conditions for the background. The illumination is changed from bright to dark or vice versa, while the illumination of the foreground object is constant. Our test procedure is as follows: First, the subjects are rested until a baseline of no visual discomfort is established. Then, the subjects answer six questions to check their subjective pre-stimulus discomfort level. Next, we measure perceived depth for each subject, and the subjects watch 30-minute stereoscopic 3D or 2D video clips in random order. We measured eye-blinking and saccadic movements of the subjects using an eye-tracking device. Then, we measured perceived depth for each subject again to detect any changes in depth perception. We also checked the subject's post-stimulus discomfort level, and measured the perceived depth after a 40-minute post-experiment resting period to measure recovery levels. After 40 minutes, most subjects returned to normal levels of depth perception. From our experiments, we found that eye-blinking rates were higher with a dark-to-light video progression than vice versa. Saccadic eye movements were lower with a dark-to-light video progression than vice versa.

  15. Newly-Developed 3D GRMHD Code and its Application to Jet Formation

    NASA Technical Reports Server (NTRS)

    Mizuno, Y.; Nishikawa, K.-I.; Koide, S.; Hardee, P.; Fishman, G. J.

    2006-01-01

    We have developed a new three-dimensional general relativistic magnetohydrodynamic code by using a conservative, high-resolution shock-capturing scheme. The numerical fluxes are calculated using the HLL approximate Riemann solver scheme. The flux-interpolated constrained transport scheme is used to maintain a divergence-free magnetic field. We have performed various one-dimensional test problems in both special and general relativity by using several reconstruction methods and found that the new 3D GRMHD code shows substantial improvements over our previous model. The preliminary results show jet formation from a geometrically thin accretion disk near both non-rotating and rotating black holes. We will discuss how the jet properties depend on the rotation of the black hole and the magnetic field strength.

  16. Implementation of the 3D edge plasma code EMC3-EIRENE on NSTX

    DOE PAGES

    Lore, J. D.; Canik, J. M.; Feng, Y.; ...

    2012-05-09

    The 3D edge transport code EMC3-EIRENE has been applied for the first time to the NSTX spherical tokamak. A new disconnected double null grid has been developed to allow the simulation of plasma where the radial separation of the inner and outer separatrix is less than characteristic widths (e.g. heat flux width) at the midplane. Modelling results are presented for both an axisymmetric case and a case where 3D magnetic field is applied in an n = 3 configuration. In the vacuum approximation, the perturbed field consists of a wide region of destroyed flux surfaces and helical lobes which are a mixture of long and short connection length field lines formed by the separatrix manifolds. This structure is reflected in coupled 3D plasma fluid (EMC3) and kinetic neutral particle (EIRENE) simulations. The helical lobes extending inside of the unperturbed separatrix are filled in by hot plasma from the core. The intersection of the lobes with the divertor results in a striated flux footprint pattern on the target plates. As a result, profiles of divertor heat and particle fluxes are compared with experimental data, and possible sources of discrepancy are discussed.

  17. Recent Developments in the VISRAD 3-D Target Design and Radiation Simulation Code

    NASA Astrophysics Data System (ADS)

    Macfarlane, Joseph; Woodruff, P.; Golovkin, I.

    2011-10-01

    The 3-D view factor code VISRAD is widely used in designing HEDP experiments at major laser and pulsed-power facilities, including NIF, OMEGA, OMEGA-EP, ORION, Z, and PLX. It simulates target designs by generating a 3-D grid of surface elements, utilizing a variety of 3-D primitives and surface removal algorithms, and can be used to compute the radiation flux throughout the surface element grid by computing element-to-element view factors and solving power balance equations. Target set-up and beam pointing are facilitated by allowing users to specify positions and angular orientations using a variety of coordinate systems (e.g., that of any laser beam, target component, or diagnostic port). Analytic modeling of laser beam spatial profiles for OMEGA DPPs and NIF CPPs is used to compute laser intensity profiles throughout the grid of surface elements. VISRAD includes a variety of user-friendly graphics for setting up targets and displaying results, can readily display views from any point in space, and can be used to generate image sequences for animations. We will discuss recent improvements to the software package and plans for future developments.

  18. Implementation of the 3D edge plasma code EMC3-EIRENE on NSTX

    SciTech Connect

    Lore, J. D.; Canik, J. M.; Feng, Y.; Ahn, J. -W.; Maingi, R.; Soukhanovskii, V.

    2012-05-09

    The 3D edge transport code EMC3-EIRENE has been applied for the first time to the NSTX spherical tokamak. A new disconnected double null grid has been developed to allow the simulation of plasma where the radial separation of the inner and outer separatrix is less than characteristic widths (e.g. heat flux width) at the midplane. Modelling results are presented for both an axisymmetric case and a case where 3D magnetic field is applied in an n = 3 configuration. In the vacuum approximation, the perturbed field consists of a wide region of destroyed flux surfaces and helical lobes which are a mixture of long and short connection length field lines formed by the separatrix manifolds. This structure is reflected in coupled 3D plasma fluid (EMC3) and kinetic neutral particle (EIRENE) simulations. The helical lobes extending inside of the unperturbed separatrix are filled in by hot plasma from the core. The intersection of the lobes with the divertor results in a striated flux footprint pattern on the target plates. As a result, profiles of divertor heat and particle fluxes are compared with experimental data, and possible sources of discrepancy are discussed.

  19. Embedded 3D shape measurement system based on a novel spatio-temporal coding method

    NASA Astrophysics Data System (ADS)

    Xu, Bin; Tian, Jindong; Tian, Yong; Li, Dong

    2016-11-01

    Structured light measurement has been widely used since the 1970s in industrial component detection, reverse engineering, 3D molding, robot navigation, medicine and many other fields. In order to satisfy the demand for high-speed, high-precision and high-resolution 3-D measurement on embedded systems, new patterns combining binary and Gray coding principles in space are designed and projected onto the object surface in sequence. Each pixel then corresponds to a designed sequence of gray values in the time domain, which is treated as a feature vector. The unique gray vector is then dimensionally reduced to a scalar, which can be used as characteristic information for binocular matching. In this method, the number of projected structured light patterns is reduced, and the time-consuming phase unwrapping of traditional phase-shift methods is avoided. The algorithm is implemented on a DM3730 embedded system for 3-D measurement, which consists of an ARM core and a DSP core and has strong digital signal processing capability. Experimental results demonstrate the feasibility of the proposed method.
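
    The temporal coding idea, in which each pixel accumulates a bit sequence over the projected patterns that decodes to a unique scalar, can be sketched as follows. The pattern count, stripe width and the simulated pixel are arbitrary assumptions, and the binocular matching step itself is omitted.

```python
# Sketch of temporal Gray-code structured light: each projector column gets a
# unique bit sequence over time; a camera pixel's thresholded intensity
# sequence decodes back to a scalar column index usable for stereo matching.
# Pattern count/width are arbitrary; binocular matching itself is omitted.
import numpy as np

N_BITS, WIDTH = 8, 256

def gray_encode(n):          # binary -> Gray code (works elementwise on arrays)
    return n ^ (n >> 1)

def gray_decode(g):          # Gray code -> binary (scalar integer)
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Build the projected pattern stack: one stripe image per bit plane (MSB first).
columns = np.arange(WIDTH)
codes = gray_encode(columns)
patterns = np.stack([(codes >> b) & 1 for b in range(N_BITS - 1, -1, -1)])

# Simulate what one camera pixel observes (here it images projector column 173),
# then decode its bit vector back to the scalar feature used for matching.
observed_bits = patterns[:, 173]
g = 0
for bit in observed_bits:
    g = (g << 1) | int(bit)
print("decoded column:", gray_decode(g))   # -> 173
```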

  20. A semi-automatic 2D-to-3D video conversion with adaptive key-frame selection

    NASA Astrophysics Data System (ADS)

    Ju, Kuanyu; Xiong, Hongkai

    2014-11-01

    To compensate for the deficit of 3D content, 2D-to-3D video conversion has recently attracted more attention from both industrial and academic communities. Semi-automatic 2D-to-3D conversion, which estimates the corresponding depth of non-key-frames through key-frames, is more desirable owing to its advantage of balancing labor cost and 3D effects. The location of key-frames plays a role in the quality of depth propagation. This paper proposes a semi-automatic 2D-to-3D scheme with adaptive key-frame selection to keep temporal continuity more reliable and to reduce the depth propagation errors caused by occlusion. The potential key-frames are localized in terms of clustered color variation and motion intensity. The distance of the key-frame interval is also taken into account to keep the accumulated propagation errors under control and to guarantee minimal user interaction. Once their depth maps are aligned with user interaction, the non-key-frame depth maps are automatically propagated by shifted bilateral filtering. Considering that the depth of objects may change due to object motion or camera zoom-in/out effects, a bi-directional depth propagation scheme is adopted, where a non-key frame is interpolated from two adjacent key frames. The experimental results show that the proposed scheme performs better than existing 2D-to-3D schemes with a fixed key-frame interval.
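
    The bi-directional propagation step can be reduced to its simplest form: a non-key frame's depth is blended from the two enclosing key frames with weights given by temporal distance. The sketch below deliberately omits the shifted bilateral filtering and motion alignment of the actual scheme; the array sizes and timestamps are illustrative.

```python
# Minimal sketch of bi-directional depth propagation: depth for a non-key
# frame is blended from the two enclosing key frames, weighted by temporal
# distance.  The shifted bilateral filtering and motion alignment used by the
# actual scheme are omitted for brevity.
import numpy as np

def propagate_depth(depth_prev_key, depth_next_key, t_prev, t_next, t):
    """Interpolate a depth map at time t from key frames at t_prev < t < t_next."""
    w_next = (t - t_prev) / float(t_next - t_prev)   # closer key frame -> larger weight
    return (1.0 - w_next) * depth_prev_key + w_next * depth_next_key

# Toy usage with flat depth maps; real inputs come from user-annotated key frames.
d0 = np.full((4, 4), 2.0)    # key frame at t = 0
d8 = np.full((4, 4), 6.0)    # key frame at t = 8
d3 = propagate_depth(d0, d8, 0, 8, 3)
print(d3[0, 0])              # -> 3.5
```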

  1. 3-D TECATE/BREW: Thermal, stress, and birefringent ray-tracing codes for solid-state laser design

    NASA Astrophysics Data System (ADS)

    Gelinas, R. J.; Doss, S. K.; Nelson, R. G.

    1994-07-01

    This report describes the physics, code formulations, and numerics that are used in the TECATE (totally Eulerian code for anisotropic thermo-elasticity) and BREW (birefringent ray-tracing of electromagnetic waves) codes for laser design. These codes resolve thermal, stress, and birefringent optical effects in 3-D stationary solid-state systems. This suite of three constituent codes is a package referred to as LASRPAK.

  2. Radiation Coupling with the FUN3D Unstructured-Grid CFD Code

    NASA Technical Reports Server (NTRS)

    Wood, William A.

    2012-01-01

    The HARA radiation code is fully coupled to the FUN3D unstructured-grid CFD code for the purpose of simulating high-energy hypersonic flows. The radiation energy source terms and surface heat transfer, under the tangent slab approximation, are included within the fluid dynamic flow solver. The Fire II flight test, at the Mach-31, 1643-second trajectory point, is used as a demonstration case. Comparisons are made with an existing structured-grid capability, the LAURA/HARA coupling. The radiative surface heat transfer rates from the present approach match the benchmark values within 6%. Although radiation coupling is the focus of the present work, convective surface heat transfer rates are also reported, and are seen to vary depending upon the choice of mesh connectivity and FUN3D flux reconstruction algorithm. On a tetrahedral-element mesh the convective heating matches the benchmark at the stagnation point, but under-predicts by 15% on the Fire II shoulder. Conversely, on a mixed-element mesh the convective heating over-predicts at the stagnation point by 20%, but matches the benchmark away from the stagnation region.

  3. Development and preliminary verification of the 3D core neutronic code: COCO

    SciTech Connect

    Lu, H.; Mo, K.; Li, W.; Bai, N.; Li, J.

    2012-07-01

    With recent booming economic growth and the environmental concerns that follow, China is proactively pushing forward nuclear power development and encouraging the tapping of clean energy. Under this situation, CGNPC, as one of the largest energy enterprises in China, is planning to develop its own nuclear-related technology in order to support more and more nuclear plants either under construction or in operation. This paper introduces the recent progress in software development for CGNPC. The focus is placed on the physical models and preliminary verification results from the recent development of the 3D core neutronic code COCO. In the COCO code, the non-linear Green's function method is employed to calculate the neutron flux. In order to use the discontinuity factor, the Neumann (second kind) boundary condition is utilized in the Green's function nodal method. Additionally, the COCO code also includes the necessary physical models, e.g. a single-channel thermal-hydraulic module, a burnup module, a pin power reconstruction module and a cross-section interpolation module. The preliminary verification results show that the COCO code is sufficient for reactor core design and analysis for pressurized water reactors (PWR). (authors)

  4. EMPulse, a new 3-D simulation code for electromagnetic pulse studies

    NASA Astrophysics Data System (ADS)

    Cohen, Bruce; Eng, Chester; Farmer, William; Friedman, Alex; Grote, David; Kruger, Hans; Larson, David

    2016-10-01

    EMPulse is a comprehensive and modern 3-D simulation code for electromagnetic pulse (EMP) formation and propagation studies, being developed at LLNL as part of a suite of codes to study E1 EMP originating from prompt gamma rays. EMPulse builds upon the open-source Warp particle-in-cell code framework developed by members of this team and collaborators at other institutions. The goal of this endeavor is a new tool enabling the detailed and self-consistent study of multi-dimensional effects in geometries that have typically been treated only approximately. Here we present an overview of the project, the models and methods that have been developed and incorporated into EMPulse, tests of these models, comparisons to simulations undertaken in CHAP-lite (derived from the legacy code CHAP due to C. Longmire and co-workers), and some approaches to increased computational efficiency being studied within our project. This work was performed under the auspices of the U.S. DOE by Lawrence Livermore National Security, LLC, Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  5. Video coding with lifted wavelet transforms and complementary motion-compensated signals

    NASA Astrophysics Data System (ADS)

    Flierl, Markus H.; Vandergheynst, Pierre; Girod, Bernd

    2004-01-01

    This paper investigates video coding with wavelet transforms applied in the temporal direction of a video sequence. The wavelets are implemented with the lifting scheme in order to permit motion compensation between successive pictures. We improve motion compensation in the lifting steps and utilize complementary motion-compensated signals. Similar to superimposed predictive coding with complementary signals, this approach improves compression efficiency. We investigate experimentally and theoretically complementary motion-compensated signals for lifted wavelet transforms. Experimental results with the complementary motion-compensated Haar wavelet and frame-adaptive motion compensation show improvements in coding efficiency of up to 3 dB. The theoretical results demonstrate that the lifted Haar wavelet scheme with complementary motion-compensated signals is able to approach the bound for bit-rate savings of 2 bits per sample and motion-accuracy step when compared to optimum intra-frame coding of the input pictures.
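
    The lifting implementation of the temporal Haar transform referred to above takes a simple predict/update form. The sketch below uses identity prediction (no motion compensation), which keeps the forward/inverse pair exactly invertible and self-contained; the motion-compensated and frame-adaptive variants studied in the paper build on this same structure.

```python
# Lifting-scheme temporal Haar transform over pairs of frames, without the
# motion compensation used in the paper (identity prediction), so that the
# forward/inverse pair below is exactly invertible.
import numpy as np

def haar_lift_forward(even, odd):
    h = odd - even          # predict step: high-pass (prediction residual)
    l = even + 0.5 * h      # update step: low-pass (temporal average)
    return l, h

def haar_lift_inverse(l, h):
    even = l - 0.5 * h
    odd = h + even
    return even, odd

# Toy "frames": two 2x2 pictures.
f0 = np.array([[1.0, 2.0], [3.0, 4.0]])
f1 = np.array([[2.0, 2.0], [5.0, 3.0]])

low, high = haar_lift_forward(f0, f1)
r0, r1 = haar_lift_inverse(low, high)
assert np.allclose(r0, f0) and np.allclose(r1, f1)
print("low-pass:\n", low, "\nhigh-pass:\n", high)
```

    In the motion-compensated variant, the predict step would subtract a motion-compensated version of the even frame instead of the frame itself, with a corresponding change in the update step; the invertibility of the lifting structure is preserved either way.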

  6. A new multimodal interactive way of subjective scoring of 3D video quality of experience

    NASA Astrophysics Data System (ADS)

    Kim, Taewan; Lee, Kwanghyun; Lee, Sanghoon; Bovik, Alan C.

    2014-03-01

    People who watch today's 3D visual programs, such as 3D cinema, 3D TV and 3D games, experience wide and dynamically varying ranges of 3D visual immersion and 3D quality of experience (QoE). It is necessary to be able to deploy reliable methodologies that measure each viewer's subjective experience. We propose a new methodology that we call Multimodal Interactive Continuous Scoring of Quality (MICSQ). MICSQ is composed of a device interaction process between the 3D display and a separate device (PC, tablet, etc.) used as an assessment tool, and a human interaction process between the subject(s) and the device. The scoring process is multimodal, using aural and tactile cues to help engage and focus the subject(s) on their tasks. Moreover, the wireless device interaction process makes it possible for multiple subjects to assess 3D QoE simultaneously in a large space such as a movie theater, and at different visual angles and distances.

  7. Perceptually-driven video coding with the Daala video codec

    NASA Astrophysics Data System (ADS)

    Cho, Yushin; Daede, Thomas J.; Egge, Nathan E.; Martres, Guillaume; Matthews, Tristan; Montgomery, Christopher; Terriberry, Timothy B.; Valin, Jean-Marc

    2016-09-01

    The Daala project is a royalty-free video codec that attempts to compete with the best patent-encumbered codecs. Part of our strategy is to replace core tools of traditional video codecs with alternative approaches, many of them designed to take perceptual aspects into account, rather than optimizing for simple metrics like PSNR. This paper documents some of our experiences with these tools, which ones worked and which did not. We evaluate which tools are easy to integrate into a more traditional codec design, and show results in the context of the codec being developed by the Alliance for Open Media.

  8. Recent developments in standardization of high efficiency video coding (HEVC)

    NASA Astrophysics Data System (ADS)

    Sullivan, Gary J.; Ohm, Jens-Rainer

    2010-08-01

    This paper reports on recent developments in video coding standardization, particularly focusing on the Call for Proposals (CfP) on video coding technology made jointly in January 2010 by ITU-T VCEG and ISO/IEC MPEG and the April 2010 responses to that Call. The new standardization initiative is referred to as High Efficiency Video Coding (HEVC) and its development has been undertaken by a new Joint Collaborative Team on Video Coding (JCT-VC) formed by the two organizations. The HEVC standard is intended to provide significantly better compression capability than the existing AVC (ITU-T H.264 | ISO/IEC MPEG-4 Part 10) standard. The results of the CfP are summarized, and the first steps towards the definition of the HEVC standard are described.

  9. Film grain noise modeling in advanced video coding

    NASA Astrophysics Data System (ADS)

    Oh, Byung Tae; Kuo, C.-C. Jay; Sun, Shijun; Lei, Shawmin

    2007-01-01

    A new technique for film grain noise extraction, modeling and synthesis is proposed and applied to the coding of high definition video in this work. The film grain noise is viewed as a part of artistic presentation by people in the movie industry. On one hand, since the film grain noise can boost the natural appearance of pictures in high definition video, it should be preserved in high-fidelity video processing systems. On the other hand, video coding with film grain noise is expensive. It is desirable to extract film grain noise from the input video as a pre-processing step at the encoder and re-synthesize the film grain noise and add it back to the decoded video as a post-processing step at the decoder. Under this framework, the coding gain of the denoised video is higher while the quality of the final reconstructed video can still be well preserved. Following this idea, we present a method to remove film grain noise from image/video without distorting its original content. Besides, we describe a parametric model containing a small set of parameters to represent the extracted film grain noise. The proposed model generates the film grain noise that is close to the real one in terms of power spectral density and cross-channel spectral correlation. Experimental results are shown to demonstrate the efficiency of the proposed scheme.
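
    The extract-at-encoder / re-synthesize-at-decoder loop can be illustrated with a crude stand-in: Gaussian denoising for grain extraction and white noise shaped by a Gaussian filter for synthesis, matching only the measured variance and a single assumed correlation scale. The parametric model proposed in the paper (power spectral density plus cross-channel correlation) is considerably richer than this sketch.

```python
# Crude stand-in for the film-grain pipeline: extract grain as the residual of
# a denoising filter, fit only its variance and a single correlation scale,
# then re-synthesize shaped noise at the "decoder".  The paper's parametric
# model (power spectrum + cross-channel correlation) is richer than this.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

# Synthetic "frame": smooth content plus correlated grain.
content = gaussian_filter(rng.uniform(0, 255, (128, 128)), sigma=8)
grain = gaussian_filter(rng.normal(0, 6, (128, 128)), sigma=1)
frame = content + grain

# Encoder side: denoise, extract grain, estimate its parameters.
denoised = gaussian_filter(frame, sigma=1.5)
extracted = frame - denoised
grain_std = extracted.std()

# Decoder side: synthesize grain with the estimated variance and a fixed
# correlation scale (an assumption of this sketch), then add it back.
synth = gaussian_filter(rng.normal(0, 1, (128, 128)), sigma=1)
synth *= grain_std / synth.std()
reconstructed = denoised + synth

print(f"extracted grain std: {grain_std:.2f}, synthesized grain std: {synth.std():.2f}")
```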

  10. Code verification for unsteady 3-D fluid-solid interaction problems

    NASA Astrophysics Data System (ADS)

    Yu, Kintak Raymond; Étienne, Stéphane; Hay, Alexander; Pelletier, Dominique

    2015-12-01

    This paper describes a procedure for synthesizing Manufactured Solutions for Code Verification of an important class of Fluid-Structure Interaction (FSI) problems whose behavior can be modeled as rigid body vibrations in incompressible fluids. We refer to this class of FSI problems as Fluid-Solid Interaction problems, which can be found in many practical engineering applications. The methodology can be utilized to develop Manufactured Solutions for both 2-D and 3-D cases. We demonstrate the procedure with our numerical code. We present details of the formulation and methodology. We also provide the reasoning behind our proposed approach. Results from grid and time step refinement studies confirm the verification of our solver and demonstrate the versatility of the simple synthesis procedure. In addition, the results also demonstrate that the modified decoupled approach to verify flow problems with high-order time-stepping schemes can be employed equally well to verify code for multi-physics problems (here, those of Fluid-Solid Interaction) when the numerical discretization is based on the Method of Lines.
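
    For the rigid-body-vibration class of problems, the Method of Manufactured Solutions workflow reduces to choosing a smooth motion, deriving the forcing that makes it exact, and checking the observed order of accuracy under time-step refinement. The single-degree-of-freedom damped oscillator below is a toy illustration of that workflow under assumed coefficients, not the paper's coupled fluid-solid solver.

```python
# Toy Method-of-Manufactured-Solutions check on a single-DOF damped oscillator
#   m*x'' + c*x' + k*x = f(t),  manufactured solution x(t) = sin(t).
# This illustrates the verification workflow only; it is not the paper's
# coupled fluid-solid solver.
import numpy as np

m, c, k = 2.0, 0.5, 3.0
x_exact = np.sin
f = lambda t: -m * np.sin(t) + c * np.cos(t) + k * np.sin(t)   # forcing from x = sin(t)

def solve(dt, T=2.0):
    """Explicit RK2 (midpoint) integration of the first-order system."""
    n = int(round(T / dt))
    x, v = 0.0, 1.0                     # matches x(0) = 0, x'(0) = 1
    for i in range(n):
        t = i * dt
        ax = v
        av = (f(t) - c * v - k * x) / m
        xm, vm = x + 0.5 * dt * ax, v + 0.5 * dt * av
        tm = t + 0.5 * dt
        x += dt * vm
        v += dt * (f(tm) - c * vm - k * xm) / m
    return x

errors = []
for dt in (0.02, 0.01, 0.005):
    errors.append(abs(solve(dt) - x_exact(2.0)))
orders = [np.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
print("errors:", errors, "observed orders:", orders)   # should approach 2
```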

  11. Liner Optimization Studies Using the Ducted Fan Noise Prediction Code TBIEM3D

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Farassat, F.

    1998-01-01

    In this paper we demonstrate the usefulness of the ducted fan noise prediction code TBIEM3D as a liner optimization design tool. Boundary conditions on the interior duct wall allow for hard walls or a locally reacting liner with axially segmented, circumferentially uniform impedance. Two liner optimization studies are considered in which the farfield noise attenuation due to the presence of a liner is maximized by adjusting the liner impedance. In the first example, the dependence of the optimal liner impedance on frequency and liner length is examined. Results show that both the optimal impedance and the attenuation levels are significantly influenced by liner length and frequency. In the second example, TBIEM3D is used to compare radiated sound pressure levels between optimal and non-optimal liner cases at conditions designed to simulate take-off. It is shown that significant noise reduction is achieved for most of the sound field by selecting the optimal or near-optimal liner impedance. Our results also indicate that there is a relatively large region of the impedance plane over which optimal or near-optimal liner behavior is attainable. This is an important conclusion for the designer since there are variations in liner characteristics due to manufacturing imprecision.

  12. ICRF Antenna Characteristics and Comparison with 3-D Code Calculation in the LHD

    SciTech Connect

    Mutoh, T.; Kasahara, H.; Seki, T.; Saito, K.; Kumazawa, R.; Shimpo, F.; Nomura, G.

    2009-11-26

    The plasma coupling characteristics and local heat spots of an ion cyclotron range of frequencies (ICRF) antenna in the Large Helical Device (LHD) are compared with the results of 3-D computing simulator code calculation. We studied several dependences of antenna loading resistances with plasma experimentally and observed a clear relation between the maximum injection power and the loading resistance. Realistic three-dimensional configuration of the ICRF antenna was taken into account to simulate the coupling characteristics and the local heat absorption near the ICRF antenna, which has a helically twisted geometry in the LHD. The electromagnetic field distribution and the current distribution on the antenna strap were calculated. We compared the RF absorption distribution on the antenna structure with the temperature rise during steady state operation and found that the temperature rise was well explained by comparing with the model simulation.

  13. Towards real-time change detection in videos based on existing 3D models

    NASA Astrophysics Data System (ADS)

    Ruf, Boitumelo; Schuchert, Tobias

    2016-10-01

    Image-based change detection is of great importance for security applications, such as surveillance and reconnaissance, in order to find new, modified or removed objects. Such change detection can generally be performed by co-registration and comparison of two or more images. However, existing 3D objects, such as buildings, may lead to parallax artifacts in the case of inaccurate or missing 3D information, which may distort the results in the image comparison process, especially when the images are acquired from aerial platforms like small unmanned aerial vehicles (UAVs). Furthermore, considering only intensity information may lead to failures in detection of changes in the 3D structure of objects. To overcome this problem, we present an approach that uses Structure-from-Motion (SfM) to compute depth information, with which a 3D change detection can be performed against an existing 3D model. Our approach is capable of change detection in real time. We use the input frames with the corresponding camera poses to compute dense depth maps by an image-based depth estimation algorithm. Additionally, we synthesize a second set of depth maps by rendering the existing 3D model from the same camera poses as those of the image-based depth maps. The actual change detection is performed by comparing the two sets of depth maps with each other. Our method is evaluated on synthetic test data with corresponding ground truth as well as on real image test data.
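
    As a minimal sketch of the depth-based comparison described above (not the authors' implementation), the following Python fragment flags pixels whose image-based depth deviates from the depth rendered out of the existing 3D model by more than a hypothetical threshold:

```python
import numpy as np

def depth_change_mask(depth_estimated, depth_rendered, threshold=0.5, min_valid=1e-3):
    """Flag pixels where the image-based depth and the model-rendered depth
    differ by more than `threshold` (scene units); pixels with missing depth
    in either map are ignored."""
    valid = (depth_estimated > min_valid) & (depth_rendered > min_valid)
    return valid & (np.abs(depth_estimated - depth_rendered) > threshold)

# Hypothetical 480x640 depth maps in metres:
est = np.full((480, 640), 10.0)
ren = np.full((480, 640), 10.0)
ren[100:200, 300:400] = 8.0   # a structure present only in the 3D model
print(depth_change_mask(est, ren).sum(), "changed pixels")
```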

  14. Achieving H.264-like compression efficiency with distributed video coding

    NASA Astrophysics Data System (ADS)

    Milani, Simone; Wang, Jiajun; Ramchandran, Kannan

    2007-01-01

    Recently, a new class of distributed source coding (DSC) based video coders has been proposed to enable low-complexity encoding. However, to date, these low-complexity DSC-based video encoders have been unable to compress as efficiently as motion-compensated predictive coding based video codecs, such as H.264/AVC, due to insufficiently accurate modeling of video data. In this work, we examine achieving H.264-like high compression efficiency with a DSC-based approach without the encoding complexity constraint. The success of H.264/AVC highlights the importance of accurately modeling the highly non-stationary video data through fine-granularity motion estimation. This motivates us to deviate from the popular approach of approaching the Wyner-Ziv bound with sophisticated capacity-achieving channel codes that require long block lengths and high decoding complexity, and instead focus on accurately modeling video data. Such a DSC-based, compression-centric encoder is an important first step towards building a robust DSC-based video coding framework.

  15. Lossless data compression studies for NOAA hyperspectral environmental suite using 3D integer wavelet transforms with 3D embedded zerotree coding

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Huang, Hung-Lung; Chen, Hao; Ahuja, Alok; Baggett, Kevin; Schmit, Timothy J.; Heymann, Roger W.

    2003-09-01

    Hyperspectral sounder data is a particular class of data that requires high accuracy for useful retrieval of atmospheric temperature and moisture profiles, surface characteristics, cloud properties, and trace gas information. Therefore, compression of these data sets should be lossless or near-lossless. The next-generation NOAA/NESDIS GOES-R hyperspectral sounder, now referred to as the HES (Hyperspectral Environmental Suite), will have hyperspectral resolution (over one thousand channels with spectral widths on the order of 0.5 wavenumber) and high spatial resolution (less than 10 km). Given the large volume of three-dimensional hyperspectral sounder data that will be generated by the HES instrument, the use of robust data compression techniques will be beneficial to data transfer and archive. In this paper, we study lossless data compression for the HES using 3D integer wavelet transforms via the lifting schemes. The wavelet coefficients are then processed with the 3D embedded zerotree wavelet (EZW) algorithm followed by context-based arithmetic coding. We extend the 3D EZW scheme to handle 3D satellite data of any size, whose dimensions need not be divisible by 2^N, where N is the number of levels of the wavelet decomposition being performed. The compression ratios of various kinds of wavelet transforms are presented along with a comparison with the JPEG2000 codec.
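
    For illustration, a reversible integer Haar (S-transform) lifting step along one axis of a data cube is sketched below in Python; the lifting filters, decomposition structure and EZW/arithmetic coding stages used in the paper are more elaborate, and the cube size here is arbitrary.

```python
import numpy as np

def haar_lifting_fwd(x, axis=0):
    """One level of the reversible integer Haar (S) transform via lifting,
    applied along one axis (whose length must be even)."""
    x = np.asarray(x, dtype=np.int64)
    even = np.take(x, np.arange(0, x.shape[axis], 2), axis=axis)
    odd = np.take(x, np.arange(1, x.shape[axis], 2), axis=axis)
    d = odd - even       # predict step: detail (high-pass) coefficients
    s = even + d // 2    # update step: approximation (low-pass) coefficients
    return s, d

def haar_lifting_inv(s, d, axis=0):
    """Exact inverse of haar_lifting_fwd."""
    even = s - d // 2
    odd = d + even
    out = np.stack([even, odd], axis=axis + 1)   # re-interleave the samples
    shape = list(even.shape)
    shape[axis] *= 2
    return out.reshape(shape)

cube = np.random.randint(0, 4096, size=(8, 16, 16))   # toy spectral x spatial cube
s, d = haar_lifting_fwd(cube, axis=0)
assert np.array_equal(haar_lifting_inv(s, d, axis=0), cube)   # lossless round trip
```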

  16. Study and simulation of low rate video coding schemes

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Yun-Chung; Kipp, G.

    1992-01-01

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color mapped images, robust coding scheme for packet video, recursively indexed differential pulse code modulation, image compression technique for use on token ring networks, and joint source/channel coder design.

  17. Alignment of 3D Building Models and TIR Video Sequences with Line Tracking

    NASA Astrophysics Data System (ADS)

    Iwaszczuk, D.; Stilla, U.

    2014-11-01

    Thermal infrared imagery of urban areas has become interesting for urban climate investigations and thermal building inspections. Using a flying platform such as a UAV or a helicopter for the acquisition, and combining the thermal data with 3D building models via texturing, delivers a valuable groundwork for large-area building inspections. However, such thermal textures are only useful for further analysis if they are extracted in a geometrically correct way. This requires a good coregistration between the 3D building models and the thermal images, which cannot be achieved by direct georeferencing. Hence, this paper presents a methodology for the alignment of 3D building models and oblique TIR image sequences taken from a flying platform. In a single image, line correspondences between model edges and image line segments are found using an accumulator approach, and based on these correspondences an optimal camera pose is calculated to ensure the best match between the projected model and the image structures. Across the sequence, the linear features are tracked based on visibility prediction. The results of the proposed methodology are presented using a TIR image sequence taken from a helicopter in a densely built-up urban area. The novelty of this work lies in employing the uncertainty of the 3D building models and in an innovative tracking strategy based on a priori knowledge from the 3D building model and on visibility checking.

  18. The 3D MHD code GOEMHD3 for astrophysical plasmas with large Reynolds numbers. Code description, verification, and computational performance

    NASA Astrophysics Data System (ADS)

    Skála, J.; Baruffa, F.; Büchner, J.; Rampp, M.

    2015-08-01

    Context. The numerical simulation of turbulence and flows in almost ideal astrophysical plasmas with large Reynolds numbers motivates the implementation of magnetohydrodynamical (MHD) computer codes with low resistivity. They need to be computationally efficient and scale well with large numbers of CPU cores, allow obtaining a high grid resolution over large simulation domains, and be easily and modularly extensible, for instance, to new initial and boundary conditions. Aims: Our aims are the implementation, optimization, and verification of a computationally efficient, highly scalable, and easily extensible low-dissipative MHD simulation code for the numerical investigation of the dynamics of astrophysical plasmas with large Reynolds numbers in three dimensions (3D). Methods: The new GOEMHD3 code discretizes the ideal part of the MHD equations using a fast and efficient leap-frog scheme that is second-order accurate in space and time and whose initial and boundary conditions can easily be modified. For the investigation of diffusive and dissipative processes the corresponding terms are discretized by a DuFort-Frankel scheme. To always fulfill the Courant-Friedrichs-Lewy stability criterion, the time step of the code is adapted dynamically. Numerically induced local oscillations are suppressed by explicit, externally controlled diffusion terms. Non-equidistant grids are implemented, which enhance the spatial resolution, where needed. GOEMHD3 is parallelized based on the hybrid MPI-OpenMP programming paradigm, adopting a standard two-dimensional domain-decomposition approach. Results: The ideal part of the equation solver is verified by performing numerical tests of the evolution of the well-understood Kelvin-Helmholtz instability and of Orszag-Tang vortices. The accuracy of solving the (resistive) induction equation is tested by simulating the decay of a cylindrical current column. Furthermore, we show that the computational performance of the code scales very
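
    As a toy illustration of a second-order leap-frog update with a CFL-limited time step (two of the ingredients named above), the Python sketch below advects a 1-D profile with periodic boundaries; it is not GOEMHD3 code and the parameters are arbitrary.

```python
import numpy as np

def leapfrog_advection(u0, c, dx, t_end, cfl=0.5):
    """Leap-frog scheme for u_t + c u_x = 0, second order in space and time,
    with the time step chosen from the Courant-Friedrichs-Lewy criterion."""
    dt = cfl * dx / abs(c)
    u_prev = u0.copy()
    # One forward step with centred differences to start the two-level scheme:
    u_curr = u0 - c * dt / (2 * dx) * (np.roll(u0, -1) - np.roll(u0, 1))
    t = dt
    while t < t_end:
        u_next = u_prev - c * dt / dx * (np.roll(u_curr, -1) - np.roll(u_curr, 1))
        u_prev, u_curr = u_curr, u_next
        t += dt
    return u_curr

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.exp(-100.0 * (x - 0.5) ** 2)
print(leapfrog_advection(u, c=1.0, dx=x[1] - x[0], t_end=0.1).max())
```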

  19. Validation of Heat Transfer and Film Cooling Capabilities of the 3-D RANS Code TURBO

    NASA Technical Reports Server (NTRS)

    Shyam, Vikram; Ameri, Ali; Chen, Jen-Ping

    2010-01-01

    The capabilities of the 3-D unsteady RANS code TURBO have been extended to include heat transfer and film cooling applications. The results of simulations performed with the modified code are compared to experiment and to theory, where applicable. Wilcox's k-ω turbulence model has been implemented to close the RANS equations. Two simulations are conducted: (1) flow over a flat plate and (2) flow over an adiabatic flat plate cooled by one hole inclined at 35° to the free stream. For (1), agreement with theory is found to be excellent for heat transfer, represented by local Nusselt number, and quite good for momentum, as represented by the local skin friction coefficient. This report compares the local skin friction coefficients and Nusselt numbers on a flat plate obtained using Wilcox's k-ω model with the theory of Blasius. The study looks at laminar and turbulent flows over an adiabatic flat plate and over an isothermal flat plate for two different wall temperatures. It is shown that TURBO is able to accurately predict heat transfer on a flat plate. For (2), TURBO shows good qualitative agreement with film cooling experiments performed on a flat plate with one cooling hole. Quantitatively, film effectiveness is underpredicted downstream of the hole.
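
    The laminar flat-plate theory referred to above is summarized by the standard Blasius skin-friction and flat-plate Nusselt-number correlations; the short Python sketch below evaluates them for a Reynolds number and Prandtl number chosen only for illustration.

```python
import numpy as np

def blasius_cf(re_x):
    """Local skin-friction coefficient for a laminar flat-plate boundary
    layer (Blasius similarity solution): Cf = 0.664 / sqrt(Re_x)."""
    return 0.664 / np.sqrt(re_x)

def laminar_nusselt(re_x, pr):
    """Local Nusselt number for laminar flow over an isothermal flat plate:
    Nu_x = 0.332 Re_x^0.5 Pr^(1/3)."""
    return 0.332 * np.sqrt(re_x) * pr ** (1.0 / 3.0)

# Example values (air, Pr ~ 0.71, at Re_x = 1e5):
print(blasius_cf(1.0e5), laminar_nusselt(1.0e5, 0.71))
```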

  20. Quantum self-correction in the 3D cubic code model.

    PubMed

    Bravyi, Sergey; Haah, Jeongwan

    2013-11-15

    A big open question in quantum information theory concerns the feasibility of a self-correcting quantum memory. A quantum state recorded in such a memory can be stored reliably for a macroscopic time without the need for active error correction, if the memory is in contact with a cold enough thermal bath. Here we report analytic and numerical evidence for self-correcting behavior in the quantum spin lattice model known as the 3D cubic code. We prove that its memory time is at least L^(cβ), where L is the lattice size, β is the inverse temperature of the bath, and c>0 is a constant coefficient. However, this bound applies only if the lattice size L does not exceed a critical value which grows exponentially with β. In that sense, the model can be called a partially self-correcting memory. We also report a Monte Carlo simulation indicating that our analytic bounds on the memory time are tight up to constant coefficients. To model the readout step we introduce a new decoding algorithm, which can be implemented efficiently for any topological stabilizer code. A longer version of this work can be found in Bravyi and Haah, arXiv:1112.3252.
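
    In compact form, the bound described above can be written as follows, with A, c and c' positive constants whose values depend on the model details:

```latex
T_{\mathrm{mem}}(L,\beta) \;\ge\; A\, L^{\,c\beta},
\qquad \text{provided } L \le L^{*}(\beta) \sim e^{\,c'\beta}.
```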

  1. Semantic-preload video model based on VOP coding

    NASA Astrophysics Data System (ADS)

    Yang, Jianping; Zhang, Jie; Chen, Xiangjun

    2013-03-01

    In recent years, in order to reduce the semantic gap that exists between the high-level semantics and the low-level features of video when humans understand images or video, most work has attempted video annotation downstream of the signal, i.e., attaching labels to content already held in a video database. Few have pursued the opposite idea: at the front end of video information collection (i.e., the video camera), use limited interaction and comprehensive segmentation (including optical techniques), together with video semantic analysis technology, domain-specific concept sets (i.e., ontologies), shooting scripts, and scene-shooting task descriptions, to apply semantic descriptions at different levels that enrich the attributes of video objects and image regions. This forms a new video model based on Video Object Plane (VOP) coding. The model has potentially intelligent features, carries a large amount of metadata, and embeds intermediate-level semantic concepts into every object. This paper focuses on the latter approach and presents a framework for the new video model, provisionally named the Semantic-Preloaded or Semantic-Preload Video Model (abbreviated VMoSP or SPVM). The model mainly addresses how to label video objects and image regions in real time, typically with intermediate-level semantic labels, and places this work upstream of the signal (i.e., in the video capture and production stage). As part of this research, the paper also analyses the hierarchical structure of video and divides it into nine semantic levels, which apply only to the video production process, and points out that the semantic-level tagging discussed here (i.e., semantic preloading) refers only to the four middle levels. All in

  2. SYDESCO: a laser-video scanner for 3D scoliosis evaluations.

    PubMed

    Treuillet, S; Lucas, Y; Crepin, G; Peuchot, B; Pichaud, J C

    2002-01-01

    SYDESCO is a new 3D vision system developed for trunk surface topography. This structured light surface scanner uses the principle of triangulation-based range sensing to infer 3D shape. The complete trunk acquisition is fast (2 seconds). The accuracy of the metric data is ensured by a subpixel image detection and a calibration process, which rectifies image deformations. A preliminary study presents results on 50 children in a gymnastics school. These children, aged between eight and sixteen years, are particularly at risk of spinal deformities. An asymmetry index is calculated from the 3D data to detect the pathologic cases. These results have been compared to an independent medical diagnosis. The system results have been confirmed for 72.1% of the patients.

  3. Structured Light Based 3d Scanning for Specular Surface by the Combination of Gray Code and Phase Shifting

    NASA Astrophysics Data System (ADS)

    Zhang, Yujia; Yilmaz, Alper

    2016-06-01

    Surface reconstruction using coded structured light is considered one of the most reliable techniques for high-quality 3D scanning. With a calibrated projector-camera stereo system, a light pattern is projected onto the scene and imaged by the camera. Correspondences between the projected and recovered patterns are computed in the decoding process, which is used to generate a 3D point cloud of the surface. However, indirect illumination effects on the surface, such as subsurface scattering and interreflections, raise difficulties in reconstruction. In this paper, we apply the maximum min-SW gray code to reduce the indirect illumination effects of the specular surface. We also analyse the errors of the maximum min-SW gray code in comparison with the conventional gray code, which shows that the maximum min-SW gray code is significantly better at reducing indirect illumination effects. To achieve sub-pixel accuracy, we also project high-frequency sinusoidal patterns onto the scene. For specular surfaces, however, the high-frequency patterns are susceptible to decoding errors, and incorrect decoding results in a loss of depth resolution. Our method resolves this problem by combining the low-frequency maximum min-SW gray code with the high-frequency phase-shifting code, which achieves dense 3D reconstruction of specular surfaces. Our contributions include: (i) a complete setup of the structured-light-based 3D scanning system; and (ii) a novel combination of the maximum min-SW gray code and phase-shifting code, in which phase-shift decoding provides sub-pixel accuracy and the maximum min-SW gray code resolves the phase ambiguity. According to the experimental results and data analysis, our structured-light-based 3D scanning system enables high-quality dense reconstruction of scenes with a small number of images. Qualitative and quantitative comparisons are performed to extract the advantages of our new
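
    A minimal sketch of the generic decoding steps involved (a conventional reflected Gray code combined with four-step phase shifting) is given below in Python; the maximum min-SW code of the paper reorders the code words, and the binarization, calibration and triangulation stages are omitted here.

```python
import numpy as np

def gray_to_order(bits):
    """Decode a stack of thresholded Gray-code images (n_bits x H x W,
    most significant bit first) into integer fringe orders."""
    binary = bits[0].astype(np.int64)
    order = binary.copy()
    for b in bits[1:]:
        binary = binary ^ b.astype(np.int64)   # b_i = b_{i-1} XOR g_i
        order = (order << 1) | binary
    return order

def wrapped_phase(i0, i1, i2, i3):
    """Wrapped phase from four sinusoidal patterns shifted by pi/2."""
    return np.arctan2(i3.astype(float) - i1, i0.astype(float) - i2)

def absolute_phase(order, phi):
    """Combine fringe order and wrapped phase, assuming one Gray code word
    per fringe period aligned with the 2*pi wraps."""
    return 2.0 * np.pi * order + np.mod(phi, 2.0 * np.pi)
```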

  4. SCTP as scalable video coding transport

    NASA Astrophysics Data System (ADS)

    Ortiz, Jordi; Graciá, Eduardo Martínez; Skarmeta, Antonio F.

    2013-12-01

    This study presents an evaluation of the Stream Control Transmission Protocol (SCTP) for the transport of the scalable video codec (SVC), proposed by MPEG as an extension to H.264/AVC. Both technologies fit together well. On the one hand, SVC permits the bitstream to be easily split into substreams carrying different video layers, each of different importance for the reconstruction of the complete video sequence at the receiver end. On the other hand, SCTP includes features, such as the multi-streaming and multi-homing capabilities, that permit the SVC layers to be transported robustly and efficiently. Several transmission strategies supported on baseline SCTP and its concurrent multipath transfer (CMT) extension are compared with the classical solutions based on the Transmission Control Protocol (TCP) and the Real-time Transport Protocol (RTP). Using ns-2 simulations, it is shown that CMT-SCTP outperforms TCP and RTP in error-prone networking environments. The comparison is established according to several performance measurements, including delay, throughput, packet loss, and peak signal-to-noise ratio of the received video.

  5. Efficient block error concealment code for image and video transmission

    NASA Astrophysics Data System (ADS)

    Min, Jungki; Chan, Andrew K.

    1999-05-01

    Image and video compression standards such as JPEG, MPEG, and H.263 are highly sensitive to errors during transmission. Among typical error propagation mechanisms in video compression schemes, loss of block synchronization produces the worst image degradation. Even a single-bit error in block synchronization may result in data being placed in wrong positions, caused by spatial shifts. Our proposed efficient block error concealment code (EBECC) virtually guarantees block synchronization and improves coding efficiency several hundredfold over the error resilient entropy code (EREC), proposed by N. G. Kingsbury and D. W. Redmill, depending on the image format and size. In addition, the EBECC produces slightly better resolution on the reconstructed images or video frames than the EREC. Another important advantage of the EBECC is that it does not require redundancy, in contrast to the EREC, which requires 2-3 percent redundancy. Our preliminary results show the EBECC is 240 times faster than the EREC for encoding and 330 times faster for decoding, based on the CIF format of the H.263 video coding standard. The EBECC can be used on most of the popular image and video compression schemes such as JPEG, MPEG, and H.263. Additionally, it is especially useful for wireless networks in which the percentage of image and video data is high.

  6. ORBXYZ: a 3D single-particle orbit code for following charged-particle trajectories in equilibrium magnetic fields

    SciTech Connect

    Anderson, D.V.; Cohen, R.H.; Ferguson, J.R.; Johnston, B.M.; Sharp, C.B.; Willmann, P.A.

    1981-06-30

    The single particle orbit code, TIBRO, has been modified extensively to improve the interpolation methods used and to allow use of vector potential fields in the simulation of charged particle orbits on a 3D domain. A 3D cubic B-spline algorithm is used to generate spline coefficients used in the interpolation. Smooth and accurate field representations are obtained. When vector potential fields are used, the 3D cubic spline interpolation formula analytically generates the magnetic field used to push the particles. This field satisfies ∇·B = 0 to computer roundoff. When the magnetic induction itself is interpolated, the interpolation allows ∇·B ≠ 0, which can lead to significant nonphysical results. Presently the code assumes quadrupole symmetry, but this is not an essential feature of the code and could be easily removed for other applications. Many details pertaining to this code are given on microfiche accompanying this report.
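
    The divergence-free property mentioned above follows from a vector identity: when the spline represents the vector potential and the field is obtained by analytic differentiation of that spline, the identity below is inherited to round-off, whereas interpolating the components of B independently does not enforce it.

```latex
\mathbf{B} = \nabla \times \mathbf{A}
\quad\Longrightarrow\quad
\nabla \cdot \mathbf{B} = \nabla \cdot \left( \nabla \times \mathbf{A} \right) \equiv 0 .
```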

  7. 3-D Computer Animation vs. Live-Action Video: Differences in Viewers' Response to Instructional Vignettes

    ERIC Educational Resources Information Center

    Smith, Dennie; McLaughlin, Tim; Brown, Irving

    2012-01-01

    This study explored computer animation vignettes as a replacement for live-action video scenarios of classroom behavior situations previously used as an instructional resource in teacher education courses in classroom management strategies. The focus of the research was to determine if the embedded behavioral information perceived in a live-action…

  8. 3D Face Generation Tool Candide for Better Face Matching in Surveillance Video

    DTIC Science & Technology

    2014-07-01

    Keywords: watch-list screening, biometrics, reliability, performance evaluation. Community of Practice: Biometrics and Identity Management, Canada Safety and... Related publications: Dmitry Gorodnichy, Eric Granger, “PROVE-IT(FRiV): framework and results”, also published in Proceedings of NIST International Biometrics ...; ...Granger, “Evaluation of Face Recognition for Video Surveillance”, also published in Proceedings of the NIST International Biometric Performance Conference.

  9. Evaluation of in-network adaptation of scalable high efficiency video coding (SHVC) in mobile environments

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio

    2014-02-01

    High Efficiency Video Coding (HEVC), the latest video compression standard (also known as H.265), can deliver video streams of comparable quality to the current H.264 Advanced Video Coding (H.264/AVC) standard with a 50% reduction in bandwidth. Research into SHVC, the scalable extension to the HEVC standard, is still in its infancy. One important area for investigation is whether, given the greater compression ratio of HEVC (and SHVC), the loss of packets containing video content will have a greater impact on the quality of delivered video than is the case with H.264/AVC or its scalable extension H.264/SVC. In this work we empirically evaluate the layer-based, in-network adaptation of video streams encoded using SHVC in situations where dynamically changing bandwidths and datagram loss ratios require the real-time adaptation of video streams. Through the use of extensive experimentation, we establish a comprehensive set of benchmarks for SHVC-based high-definition video streaming in loss-prone network environments such as those commonly found in mobile networks. Among other results, we highlight that packet losses of only 1% can lead to a substantial reduction in PSNR of over 3 dB and error propagation in over 130 pictures following the one in which the loss occurred. This work is one of the earliest studies in this cutting-edge area that reports benchmark evaluation results for the effects of datagram loss on SHVC picture quality and offers empirical and analytical insights into SHVC adaptation to lossy, mobile networking conditions.
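
    For reference, the PSNR figure quoted above is the usual peak signal-to-noise ratio; a minimal Python definition, assuming 8-bit pictures, is:

```python
import numpy as np

def psnr(reference, decoded, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a reference picture and the
    picture decoded after impaired transmission."""
    mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```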

  10. TOMO3D: 3-D joint refraction and reflection traveltime tomography parallel code for active-source seismic data—synthetic test

    NASA Astrophysics Data System (ADS)

    Meléndez, A.; Korenaga, J.; Sallarès, V.; Miniussi, A.; Ranero, C. R.

    2015-10-01

    We present a new 3-D traveltime tomography code (TOMO3D) for the modelling of active-source seismic data that uses the arrival times of both refracted and reflected seismic phases to derive the velocity distribution and the geometry of reflecting boundaries in the subsurface. This code is based on its popular 2-D version TOMO2D from which it inherited the methods to solve the forward and inverse problems. The traveltime calculations are done using a hybrid ray-tracing technique combining the graph and bending methods. The LSQR algorithm is used to perform the iterative regularized inversion to improve the initial velocity and depth models. In order to cope with an increased computational demand due to the incorporation of the third dimension, the forward problem solver, which takes most of the run time (˜90 per cent in the test presented here), has been parallelized with a combination of multi-processing and message passing interface standards. This parallelization distributes the ray-tracing and traveltime calculations among available computational resources. The code's performance is illustrated with a realistic synthetic example, including a checkerboard anomaly and two reflectors, which simulates the geometry of a subduction zone. The code is designed to invert for a single reflector at a time. A data-driven layer-stripping strategy is proposed for cases involving multiple reflectors, and it is tested for the successive inversion of the two reflectors. Layers are bound by consecutive reflectors, and an initial velocity model for each inversion step incorporates the results from previous steps. This strategy poses simpler inversion problems at each step, allowing the recovery of strong velocity discontinuities that would otherwise be smoothened.
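
    As a rough sketch of the damped LSQR inversion step described above (not the TOMO3D implementation), the following Python/SciPy fragment solves a regularized sparse linear system in which a synthetic, hypothetical matrix and data vector stand in for the ray-path kernel and traveltime residuals:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
# Hypothetical sparse kernel G (rays x cells) and slowness perturbation model:
G = sparse_random(500, 200, density=0.05, random_state=0, format="csr")
m_true = rng.normal(size=200)
d = G @ m_true + 0.01 * rng.normal(size=500)      # noisy traveltime residuals

# Damped (regularized) least squares solved iteratively with LSQR:
m_est = lsqr(G, d, damp=0.1)[0]
print(np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))
```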

  11. Non-intubated subxiphoid uniportal video-assisted thoracoscopic thymectomy using glasses-free 3D vision

    PubMed Central

    Jiang, Long; Liu, Jun; Shao, Wenlong; Li, Jingpei

    2016-01-01

    Trans-sternal thymectomy has long been accepted as the standard surgical procedure for thymic masses. Recently, minimally invasive methods, such as video-assisted thoracoscopic surgery (VATS) and, even more recently, non-intubated anesthesia, have emerged. These methods provide advantages including reductions in surgical trauma and postoperative pain and, with regard to VATS, certain cosmetic benefits. Considering these advantages, we herein present a case of subxiphoid uniportal VATS for a thymic mass using a glasses-free 3D thoracoscopic display system. PMID:28149591

  12. Automatic feature detection for 3D surface reconstruction from HDTV endoscopic videos

    NASA Astrophysics Data System (ADS)

    Groch, Anja; Baumhauer, Matthias; Meinzer, Hans-Peter; Maier-Hein, Lena

    2010-02-01

    A growing number of applications in the field of computer-assisted laparoscopic interventions depend on accurate and fast 3D surface acquisition. The most commonly applied methods for 3D reconstruction of organ surfaces from 2D endoscopic images involve establishment of correspondences in image pairs to allow for computation of 3D point coordinates via triangulation. The popular feature-based approach for correspondence search applies a feature descriptor to compute high-dimensional feature vectors describing the characteristics of selected image points. Correspondences are established between image points with similar feature vectors. In a previous study, the performance of a large set of state-of-the art descriptors for the use in minimally invasive surgery was assessed. However, standard Phase Alternating Line (PAL) endoscopic images were utilized for this purpose. In this paper, we apply some of the best performing feature descriptors to in-vivo PAL endoscopic images as well as to High Definition Television (HDTV) endoscopic images of the same scene and show that the quality of the correspondences can be increased significantly when using high resolution images.

  13. Doing fieldwork on the seafloor: Photogrammetric techniques to yield 3D visual models from ROV video

    NASA Astrophysics Data System (ADS)

    Kwasnitschka, Tom; Hansteen, Thor H.; Devey, Colin W.; Kutterolf, Steffen

    2013-03-01

    Remotely Operated Vehicles (ROVs) have proven to be highly effective in recovering well localized samples and observations from the seafloor. In the course of ROV deployments, however, huge amounts of video and photographic data are gathered which present tremendous potential for data mining. We present a new workflow based on industrial software to derive fundamental field geology information such as quantitative stratigraphy and tectonic structures from ROV-based photo and video material. We demonstrate proof of principle tests for this workflow on video data collected during dives with the ROV Kiel 6000 on a new hot spot volcanic field that was recently identified southwest of the island of Santo Antão in the Cape Verdes. Our workflow allows us to derive three-dimensional models of outcrops facilitating quantitative measurements of joint orientation, bedding structure, grain size comparison and photo mosaicking within a georeferenced framework. The compiled data facilitate volcanological and tectonic interpretations from hand specimen to outcrop scales based on the quantified optical data. The demonstrated procedure is readily replicable and opens up possibilities for post-cruise "virtual fieldwork" on the seafloor.

  14. LINFLUX-AE: A Turbomachinery Aeroelastic Code Based on a 3-D Linearized Euler Solver

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Bakhle, M. A.; Trudell, J. J.; Mehmed, O.; Stefko, G. L.

    2004-01-01

    This report describes the development and validation of LINFLUX-AE, a turbomachinery aeroelastic code based on the linearized unsteady 3-D Euler solver, LINFLUX. A helical fan with flat plate geometry is selected as the test case for numerical validation. The steady solution required by LINFLUX is obtained from the nonlinear Euler/Navier Stokes solver TURBO-AE. The report briefly describes the salient features of LINFLUX and the details of the aeroelastic extension. The aeroelastic formulation is based on a modal approach. An eigenvalue formulation is used for flutter analysis. The unsteady aerodynamic forces required for flutter are obtained by running LINFLUX for each mode, interblade phase angle and frequency of interest. The unsteady aerodynamic forces for forced response analysis are obtained from LINFLUX for the prescribed excitation, interblade phase angle, and frequency. The forced response amplitude is calculated from the modal summation of the generalized displacements. The unsteady pressures, work done per cycle, eigenvalues and forced response amplitudes obtained from LINFLUX are compared with those obtained from LINSUB, TURBO-AE, ASTROP2, and ANSYS.

  15. Hyperspectral image compression: adapting SPIHT and EZW to anisotropic 3-D wavelet coding.

    PubMed

    Christophe, Emmanuel; Mailhes, Corinne; Duhamel, Pierre

    2008-12-01

    Hyperspectral images present some specific characteristics that should be used by an efficient compression system. In compression, wavelets have shown a good adaptability to a wide range of data, while being of reasonable complexity. Some wavelet-based compression algorithms have been successfully used for some hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images. Each step of the compression algorithm is studied and optimized. First, an algorithm to find the optimal 3-D wavelet decomposition in a rate-distortion sense is defined. Then, it is shown that a specific fixed decomposition has almost the same performance, while being more useful in terms of complexity issues. It is shown that this decomposition significantly improves the classical isotropic decomposition. One of the most useful properties of this fixed decomposition is that it allows the use of zero tree algorithms. Various tree structures, creating a relationship between coefficients, are compared. Two efficient compression methods based on zerotree coding (EZW and SPIHT) are adapted on this near-optimal decomposition with the best tree structure found. Performances are compared with the adaptation of JPEG 2000 for hyperspectral images on six different areas presenting different statistical properties.

  16. Selective encryption for H.264/AVC video coding

    NASA Astrophysics Data System (ADS)

    Shi, Tuo; King, Brian; Salama, Paul

    2006-02-01

    Due to the ease with which digital data can be manipulated and due to the ongoing advancements that have brought us closer to pervasive computing, the secure delivery of video and images has become a challenging problem. Despite the advantages and opportunities that digital video provides, illegal copying and distribution as well as plagiarism of digital audio, images, and video are still ongoing. In this paper we describe two techniques for securing H.264 coded video streams. The first technique, SEH264Algorithm1, groups the data into the following blocks of data: (1) a block that contains the sequence parameter set and the picture parameter set, (2) a block containing a compressed intra coded frame, (3) a block containing the slice header of a P slice, all the macroblock headers within the same P slice, and all the luma and chroma DC coefficients belonging to all the macroblocks within the same slice, (4) a block containing all the AC coefficients, and (5) a block containing all the motion vectors. The first three are encrypted whereas the last two are not. The second method, SEH264Algorithm2, relies on the use of multiple slices per coded frame. The algorithm searches the compressed video sequence for start codes (0x000001) and then encrypts the next N bits of data.
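
    A schematic Python sketch of the start-code-driven selection in SEH264Algorithm2 is shown below; the keystream generator is a toy placeholder (not the cipher used in the paper), encryption of N bits is approximated by whole bytes, and emulation-prevention bytes are ignored.

```python
def selectively_encrypt(bitstream: bytes, n_bytes: int, keystream) -> bytes:
    """Locate each 0x000001 start code and XOR the following n_bytes bytes
    with a keystream; the same XOR with the same keystream reverses it
    (ignoring the rare case where ciphertext bytes emulate a start code)."""
    data = bytearray(bitstream)
    i = 0
    while i < len(data) - 2:
        if data[i] == 0 and data[i + 1] == 0 and data[i + 2] == 1:
            start = i + 3
            for j in range(start, min(start + n_bytes, len(data))):
                data[j] ^= next(keystream)
            i = start + n_bytes
        else:
            i += 1
    return bytes(data)

def toy_keystream(seed=0x5A):
    """Illustrative keystream only -- not cryptographically meaningful."""
    while True:
        seed = (seed * 75 + 74) % 257
        yield seed & 0xFF

encrypted = selectively_encrypt(b"\x00\x00\x01\x67payload\x00\x00\x01\x41data", 4, toy_keystream())
```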

  17. Implementation of wall boundary conditions for transpiration in F3D thin-layer Navier-Stokes code

    NASA Technical Reports Server (NTRS)

    Kandula, M.; Martin, F. W., Jr.

    1991-01-01

    Numerical boundary conditions for mass injection/suction at the wall are incorporated in the thin-layer Navier-Stokes code, F3D. The accuracy of the boundary conditions and the code is assessed by a detailed comparison of the predictions of velocity distributions and skin-friction coefficients with exact similarity solutions for laminar flow over a flat plate with variable blowing/suction, and with measurements for turbulent flow past a flat plate with uniform blowing. In laminar flow, F3D predictions for the friction coefficient compare well with the exact similarity solution with and without suction, but produce large errors at moderate-to-large values of blowing. A slight Mach number dependence of the skin-friction coefficient due to blowing in turbulent flow is computed by the F3D code. Predicted surface pressures for turbulent flow past an airfoil with mass injection are in qualitative agreement with measurements for a flat plate.

  18. ORBXYZ: A 3D single-particle orbit code for following charged particle trajectories in equilibrium magnetic fields

    NASA Astrophysics Data System (ADS)

    Anderson, D. V.; Cohen, R. H.; Ferguson, J. R.; Johnston, B. M.; Sharp, C. B.; Willmann, P. A.

    1981-06-01

    The single particle orbit code, TIBRO, was modified extensively to improve the interpolation methods used and to allow use of vector potential fields in the simulation of charged particle orbits on a 3D domain. A 3D cubic B-spline algorithm is used to generate spline coefficients used in the interpolation. Smooth and accurate field representations are obtained. When vector potential fields are used, the 3D cubic spline interpolation formula analytically generates the magnetic field used to push the particles. This field satisfies ∇·B = 0 to computer roundoff. When the magnetic induction itself is interpolated, the interpolation allows ∇·B ≠ 0, which can lead to significant nonphysical results. Presently the code assumes quadrupole symmetry, but this is not an essential feature of the code and could be easily removed for other applications.

  19. An Integrated RELAP5-3D and Multiphase CFD Code System Utilizing a Semi Implicit Coupling Technique

    SciTech Connect

    D.L. Aumiller; E.T. Tomlinson; W.L. Weaver

    2001-06-21

    An integrated code system consisting of RELAP5-3D and a multiphase CFD program has been created through the use of a generic semi-implicit coupling algorithm. Unlike previous CFD coupling work, this coupling scheme is numerically stable provided the material Courant limit is not violated in RELAP5-3D or at the coupling locations. The basis for the coupling scheme and details regarding the unique features associated with the application of this technique to a four-field CFD program are presented. Finally, the results of a verification problem are presented. The coupled code system is shown to yield accurate and numerically stable results.

  20. Unequal-period combination approach of gray code and phase-shifting for 3-D visual measurement

    NASA Astrophysics Data System (ADS)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin

    2016-09-01

    The combination of Gray code and phase-shifting is so far the most practical and advanced approach to structured-light 3-D measurement, able to measure objects with complex and discontinuous surfaces. However, for the traditional combination of Gray code and phase-shifting, the captured Gray code images are not always sharply cut off at the black-white transition boundaries, which may lead to wrongly decoded analog code orders. Moreover, during the actual measurement, local decoding errors also exist for the wrapped analog code obtained with the phase-shifting approach. Therefore, in the traditional approach, the wrong analog code orders and the local decoding errors introduce errors equivalent to one fringe period when the analog code is unwrapped. In order to avoid one-fringe-period errors, we propose an approach which combines Gray code with phase-shifting using unequal periods. Through theoretical analysis, we build the measurement model of the proposed approach, determine the applicable condition, and optimize the Gray code encoding period and the phase-shifting fringe period. The experimental results verify that the proposed approach can offer a reliable unwrapped analog code, which can be used in 3-D shape measurement.

  1. A MCTF video coding scheme based on distributed source coding principles

    NASA Astrophysics Data System (ADS)

    Tagliasacchi, Marco; Tubaro, Stefano

    2005-07-01

    Motion Compensated Temporal Filtering (MCTF) has proved to be an efficient coding tool in the design of open-loop scalable video codecs. In this paper we propose an MCTF video coding scheme based on lifting where the prediction step is implemented using PRISM (Power efficient, Robust, hIgh compression Syndrome-based Multimedia coding), a video coding framework built on distributed source coding principles. We study the effect of integrating the update step at the encoder or at the decoder side. We show that the latter approach improves the quality of the side information exploited during decoding. We present the analytical results obtained by modeling the video signal along the motion trajectories as a first-order auto-regressive process. We show that the update step at the decoder halves the contribution of the quantization noise. We also include experimental results with real video data that demonstrate the potential of this approach when the video sequences are coded at low bitrates.
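
    For reference, the lifting structure underlying MCTF can be written as below, where P and U denote the motion-compensated prediction and update operators; in the scheme studied here, U is applied at the decoder using the reconstructed high-pass frames.

```latex
h_t = x_{2t+1} - \mathcal{P}\!\left(x_{2t},\, x_{2t+2}\right),
\qquad
l_t = x_{2t} + \mathcal{U}\!\left(h_{t-1},\, h_t\right).
```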

  2. A Robust Model-Based Coding Technique for Ultrasound Video

    NASA Technical Reports Server (NTRS)

    Docef, Alen; Smith, Mark J. T.

    1995-01-01

    This paper introduces a new approach to coding ultrasound video, the intended application being very low bit rate coding for transmission over low cost phone lines. The method exploits both the characteristic noise and the quasi-periodic nature of the signal. Data compression ratios between 250:1 and 1000:1 are shown to be possible, which is sufficient for transmission over ISDN and conventional phone lines. Preliminary results show this approach to be promising for remote ultrasound examinations.

  3. Practical distributed video coding in packet lossy channels

    NASA Astrophysics Data System (ADS)

    Qing, Linbo; Masala, Enrico; He, Xiaohai

    2013-07-01

    Improving error resilience of video communications over packet lossy channels is an important and tough task. We present a framework to optimize the quality of video communications based on distributed video coding (DVC) in practical packet lossy network scenarios. The peculiar characteristics of DVC indeed require a number of adaptations to take full advantage of its intrinsic robustness when dealing with data losses of typical real packet networks. This work proposes a new packetization scheme, an investigation of the best error-correcting codes to use in a noisy environment, a practical rate-allocation mechanism, which minimizes decoder feedback, and an improved side-information generation and reconstruction function. Performance comparisons are presented with respect to a conventional packet video communication using H.264/advanced video coding (AVC). Although currently the H.264/AVC rate-distortion performance in case of no loss is better than state-of-the-art DVC schemes, under practical packet lossy conditions, the proposed techniques provide better performance with respect to an H.264/AVC-based system, especially at high packet loss rates. Thus the error resilience of the proposed DVC scheme is superior to the one provided by H.264/AVC, especially in the case of transmission over packet lossy networks.

  4. Template based illumination compensation algorithm for multiview video coding

    NASA Astrophysics Data System (ADS)

    Li, Xiaoming; Jiang, Lianlian; Ma, Siwei; Zhao, Debin; Gao, Wen

    2010-07-01

    Recently, the multiview video coding (MVC) standard was finalized as an extension of H.264/AVC by the Joint Video Team (JVT). In the Joint Multiview Video Model (JMVM) project for the standardization, illumination compensation (IC) is adopted as a useful tool. In this paper, a novel template-based illumination compensation algorithm is proposed. The basic idea of the algorithm is that the illumination of the current block has a strong correlation with its adjacent template. Based on this idea, a template-based illumination compensation method is first presented, and then a template model selection strategy is devised to improve the illumination compensation performance. The experimental results show that the proposed algorithm can improve the coding efficiency significantly.
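
    One plausible reading of the template idea is a DC offset estimated from already reconstructed neighbouring samples, as in the hypothetical Python sketch below; the actual JMVM tool and the proposed algorithm may differ in template shape, weighting and model selection.

```python
import numpy as np

def template_ic_offset(cur_template, ref_template):
    """Illumination offset estimated from the reconstructed template of the
    current block (e.g. the row above and column to the left) and the
    co-located template around the reference block."""
    return float(np.mean(cur_template.astype(np.float64)) -
                 np.mean(ref_template.astype(np.float64)))

def compensate_prediction(ref_block, offset):
    """Add the template-derived offset to the inter-view prediction block."""
    comp = np.rint(ref_block.astype(np.float64) + offset)
    return np.clip(comp, 0, 255).astype(np.uint8)
```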

  5. Development, Verification and Use of Gust Modeling in the NASA Computational Fluid Dynamics Code FUN3D

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2012-01-01

    This paper presents the implementation of gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp-edged gust. This result is compared with the theoretical result. The present simulations are also compared with other CFD gust simulations. This paper also serves as a user's manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced-order gust model using a gust with a Gaussian profile in the FUN3D code is presented. ARMA-simulated results for a sequence of one-minus-cosine gusts are shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time-varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with a reduced-order model, and compared with a direct simulation of the system in the FUN3D code. The two results are found to agree very well.
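
    As a generic illustration of how a discrete ARMA reduced-order model responds to a one-minus-cosine gust, the Python sketch below simulates an ARMA difference equation with hypothetical, stable coefficients; the coefficients identified from FUN3D responses in the paper are not reproduced here.

```python
import numpy as np

def arma_response(u, a, b):
    """Simulate y[n] = sum_i a[i]*y[n-1-i] + sum_j b[j]*u[n-j] for an ARMA
    model driven by the input sequence u."""
    y = np.zeros(len(u))
    for n in range(len(u)):
        ar = sum(a[i] * y[n - 1 - i] for i in range(len(a)) if n - 1 - i >= 0)
        ma = sum(b[j] * u[n - j] for j in range(len(b)) if n - j >= 0)
        y[n] = ar + ma
    return y

# One-minus-cosine gust profile of length Lg samples in a record of N samples:
Lg, N = 50, 400
u = np.zeros(N)
u[:Lg] = 0.5 * (1.0 - np.cos(2.0 * np.pi * np.arange(Lg) / Lg))
y = arma_response(u, a=[1.6, -0.7], b=[0.02, 0.01])   # hypothetical coefficients
```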

  6. Motion-compensated wavelet video coding using adaptive mode selection

    NASA Astrophysics Data System (ADS)

    Zhai, Fan; Pappas, Thrasyvoulos N.

    2004-01-01

    A motion-compensated wavelet video coder is presented that uses adaptive mode selection (AMS) for each macroblock (MB). The block-based motion estimation is performed in the spatial domain, and an embedded zerotree wavelet coder (EZW) is employed to encode the residue frame. In contrast to other motion-compensated wavelet video coders, where all the MBs are forced to be in INTER mode, we construct the residue frame by combining the prediction residual of the INTER MBs with the coding residual of the INTRA and INTER_ENCODE MBs. Unlike the INTER MBs, which are not coded, the INTRA and INTER_ENCODE MBs are encoded separately by a DCT coder. By adaptively selecting the quantizers of the INTRA and INTER_ENCODE coded MBs, our goal is to equalize the characteristics of the residue frame in order to improve the overall coding efficiency of the wavelet coder. The mode selection is based on the variance of the MB, the variance of the prediction error, and the variance of the neighboring MBs' residual. Simulations show that the proposed motion-compensated wavelet video coder achieves a gain of around 0.7-0.8 dB PSNR over MPEG-2 TM5, and a comparable PSNR to other 2D motion-compensated wavelet-based video codecs. It also provides a potential visual quality improvement.

  7. Efficient broadcasting for scalable video coding streaming using random linear network coding

    NASA Astrophysics Data System (ADS)

    Lu, Ji; Xiao, Song; Wu, Chengke

    2010-08-01

    In order to improve the reconstructed quality of the video sequence, a Random Linear Network Coding (RLNC) based video transmission scheme for Scalable Video Coding (SVC) is proposed for the wireless broadcast scenario. A packetization model for SVC streaming is introduced to transmit the scalable bit streams conveniently, on the basis of which the RLNC based Unequal Error Protection (RUEP) method is proposed to improve the efficiency of video transmission. RUEP's advantage lies in the fact that the redundancy protection of UEP can be efficiently determined from the capacity of the broadcast channel. Simulation results show that RUEP can improve the reconstructed quality of the video sequence compared with the traditional Store and Forward (SF) based transmission schemes.

  8. Application of the Finite Orbit Width Version of the CQL3D Code to Transport of Fast Ions

    NASA Astrophysics Data System (ADS)

    Petrov, Yu. V.; Harvey, R. W.

    2016-10-01

    The CQL3D bounce-averaged Fokker-Planck (FP) code now includes the "fully" neoclassical version in which the diffusion and advection processes are averaged over actual drift orbits, rather than using a 1st-order expansion. Incorporation of Finite-Orbit-Width (FOW) effects results in neoclassical radial transport caused by collisions, RF wave heating, and the toroidal electric field (radial pinch). We apply the CQL3D-full-FOW code to study the thermalization and radial transport of high-energy particles, such as alpha-particles produced by fusion in ITER or deuterons from NBI in NSTX, under the effect of their interaction with auxiliary RF waves. Particular attention is given to the visualization of transport in the 3D space of velocity plus major-radius coordinates. Supported by USDOE Grants FC02-01ER54649, FG02-04ER54744, and SC0006614.

  9. Low complexity video coding using SMPTE VC-2

    NASA Astrophysics Data System (ADS)

    Borer, Tim

    2013-09-01

    Low complexity video coding addresses different applications from, and is complementary to, video coding for delivery to the end user. Delivery codecs, such as the MPEG/ITU standards, provide very high compression ratios, but require high complexity and high latency. Some applications, by contrast, need the opposite characteristics of low complexity and low latency at low compression ratios. This paper discusses the applications and requirements of low complexity coding and, after discussing the prior art, describes the standard VC-2 (SMPTE 2042) codec, which is a wavelet codec designed for low complexity and ultra-low latency. VC-2 provides a wide range of coding parameters and compression ratios, allowing it to address applications such as texture coding, lossless and high dynamic range coding. In particular, this paper describes the results for the low complexity coding parameters of 2- and 3-level Haar and LeGall wavelet kernels, for image regions of 4x4 and 8x8 pixels with both luma/color difference signals and RGB. The paper indicates the quality that may be achieved at various compression ratios and also clearly shows the benefit of coding luma and color components rather than RGB.

  10. Acoustic Scattering by Three-Dimensional Stators and Rotors Using the SOURCE3D Code. Volume 2; Scattering Plots

    NASA Technical Reports Server (NTRS)

    Meyer, Harold D.

    1999-01-01

    This second volume of Acoustic Scattering by Three-Dimensional Stators and Rotors Using the SOURCE3D Code provides the scattering plots referenced by Volume 1. There are 648 plots. Half are for the 8750 rpm "high speed" operating condition and the other half are for the 7031 rpm "mid speed" operating condition.

  11. Upgrades and application of FIT3D NBI-plasma interaction code in view of LHD deuterium campaigns

    NASA Astrophysics Data System (ADS)

    Vincenzi, P.; Bolzonella, T.; Murakami, S.; Osakabe, M.; Seki, R.; Yokoyama, M.

    2016-12-01

    This work presents an upgrade of the FIT3D neutral beam-plasma interaction code, part of TASK3D, a transport suite of codes, and its application to LHD experiments in the framework of the preparation for the first deuterium experiments in the LHD. The neutral beam injector (NBI) system will be upgraded to D injection, and efforts have been recently made to extend LHD modelling capabilities to D operations. The implemented upgrades for FIT3D to enable D NBI modelling in D plasmas are presented, with a discussion and benchmark of the models used. In particular, the beam ionization module has been modified and a routine for neutron production estimation has been implemented. The upgraded code is then used to evaluate the NBI power deposition in experiments with different plasma compositions. In the recent LHD campaign, in fact, He experiments have been run to help the prediction of main effects which may be relevant in future LHD D plasmas. Identical H/He experiments showed similar electron density and temperature profiles, while a higher ion temperature with an He majority has been observed. From first applications of the upgraded FIT3D code it turns out that, although more NB power appears to be coupled with the He plasma, the NBI power deposition is unaffected, suggesting that heat deposition does not play a key role in the increased ion temperature with He plasma.

  12. High-accuracy 3D image-based registration of endoscopic video to C-arm cone-beam CT for image-guided skull base surgery

    NASA Astrophysics Data System (ADS)

    Mirota, Daniel J.; Uneri, Ali; Schafer, Sebastian; Nithiananthan, Sajendra; Reh, Douglas D.; Gallia, Gary L.; Taylor, Russell H.; Hager, Gregory D.; Siewerdsen, Jeffrey H.

    2011-03-01

    Registration of endoscopic video to preoperative CT facilitates high-precision surgery of the head, neck, and skull-base. Conventional video-CT registration is limited by the accuracy of the tracker and does not use the underlying video or CT image data. A new image-based video registration method has been developed to overcome the limitations of conventional tracker-based registration. This method adds to a navigation system based on intraoperative C-arm cone-beam CT (CBCT), in turn providing high-accuracy registration of video to the surgical scene. The resulting registration enables visualization of the CBCT and planning data within the endoscopic video. The system incorporates a mobile C-arm, integrated with an optical tracking system, video endoscopy, deformable registration of preoperative CT with intraoperative CBCT, and 3D visualization. As in the tracker-based approach, in the image-based video-CBCT registration the endoscope is localized with the optical tracking system, followed by a direct 3D image-based registration of the video to the CBCT. In this way, the system achieves video-CBCT registration that is both fast and accurate. Application in skull-base surgery demonstrates overlay of critical structures (e.g., carotid arteries) and surgical targets with sub-mm accuracy. Phantom and cadaver experiments show consistent improvement of target registration error (TRE) in video overlay over conventional tracker-based registration, e.g., 0.92 mm versus 1.82 mm for image-based and tracker-based registration, respectively. The proposed method represents a two-fold advance: first, through registration of video to up-to-date intraoperative CBCT, and second, through direct 3D image-based video-CBCT registration, which together provide more confident visualization of target and normal tissues within up-to-date images.

  13. Heat losses and 3D diffusion phenomena for defect sizing procedures in video pulse thermography

    NASA Astrophysics Data System (ADS)

    Ludwig, N.; Teruzzi, P.

    2002-06-01

    Dynamical thermographic techniques like video pulse thermography are very useful for the non-destructive testing of structural components. In the literature, different models have been proposed which describe the time evolution of the thermal contrast for materials with sub-superficial defects. In the case of a circular defect, the time evolution of the full width at half maximum (FWHM) of the thermal contrast was studied both theoretically and experimentally. Nevertheless, a mismatch in defect sizing between experimental results and theoretical simulations was found, and possible explanations of this disagreement were analysed. A widely neglected factor is heat loss (radiation and convection). In this paper a theoretical analysis of the influence of these contributions is reported. Furthermore, in order to explain the experimentally observed FWHM time evolution, we introduced a correction due to lateral heat diffusion around the defect. In this way a possible explanation for the experimental results was obtained. Brick samples with a circular flat-bottom hole as the defect were tested, both because of the interest in defect sizing in building materials through NDT and because of the low thermal diffusivity of this material, which allows the phenomenon to be studied in slow motion.

  14. Interaction and behaviour imaging: a novel method to measure mother-infant interaction using video 3D reconstruction.

    PubMed

    Leclère, C; Avril, M; Viaux-Savelon, S; Bodeau, N; Achard, C; Missonnier, S; Keren, M; Feldman, R; Chetouani, M; Cohen, D

    2016-05-24

    Studying early interaction is essential for understanding development and psychopathology. Automatic computational methods offer the possibility to analyse social signals and behaviours of several partners simultaneously and dynamically. Here, 20 dyads of mothers and their 13-36-month-old infants, including 10 extremely high-risk and 10 low-risk dyads, were videotaped during mother-infant interaction using two-dimensional (2D) and three-dimensional (3D) sensors. From 2D+3D data and 3D space reconstruction, we extracted individual parameters (quantity of movement and motion activity ratio for each partner) and dyadic parameters related to the dynamics of the partners' head distance (contribution to head distance), to the focus of mutual engagement (percentage of time spent face to face or oriented to the task) and to the dynamics of motion activity (synchrony ratio, overlap ratio, pause ratio). Features are compared with a blind global rating of the interaction using the Coding Interactive Behavior (CIB) scale. We found that individual and dyadic parameters of 2D+3D motion features correlate perfectly with the rated CIB maternal and dyadic composite scores. Support Vector Machine classification using all 2D-3D motion features classified 100% of the dyads into their group, meaning that motion behaviours are sufficient to distinguish high-risk from low-risk dyads. The proposed method may present a promising, low-cost methodology that can uniquely use artificial technology to detect meaningful features of human interactions and may have several implications for studying dyadic behaviours in psychiatry. Combining both global rating scales and computerized methods may enable a continuum of time scale from a summary of entire interactions to second-by-second dynamics.
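
    A schematic version of the classification step (a support vector machine with leave-one-out cross-validation on per-dyad motion features) might look like the Python sketch below; the feature values are synthetic placeholders, and the kernel and parameters used in the study are not specified here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Placeholder feature matrix: one row per dyad, columns standing in for the
# individual and dyadic 2D+3D motion parameters (quantity of movement,
# synchrony ratio, overlap ratio, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 8))
y = np.repeat([0, 1], 10)   # 0 = low-risk dyads, 1 = high-risk dyads

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("Leave-one-out accuracy:", cross_val_score(clf, X, y, cv=LeaveOneOut()).mean())
```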

  15. Interaction and behaviour imaging: a novel method to measure mother–infant interaction using video 3D reconstruction

    PubMed Central

    Leclère, C; Avril, M; Viaux-Savelon, S; Bodeau, N; Achard, C; Missonnier, S; Keren, M; Feldman, R; Chetouani, M; Cohen, D

    2016-01-01

    Studying early interaction is essential for understanding development and psychopathology. Automatic computational methods offer the possibility to analyse social signals and behaviours of several partners simultaneously and dynamically. Here, 20 dyads of mothers and their 13–36-month-old infants were videotaped during mother–infant interaction, including 10 extremely high-risk and 10 low-risk dyads, using two-dimensional (2D) and three-dimensional (3D) sensors. From 2D+3D data and 3D space reconstruction, we extracted individual parameters (quantity of movement and motion activity ratio for each partner) and dyadic parameters related to the dynamics of the partners' head distance (contribution to head distance), to the focus of mutual engagement (percentage of time spent face to face or oriented to the task) and to the dynamics of motion activity (synchrony ratio, overlap ratio, pause ratio). Features were compared with a blind global rating of the interaction using the Coding Interactive Behavior (CIB) scale. We found that individual and dyadic parameters of the 2D+3D motion features correlate perfectly with the rated CIB maternal and dyadic composite scores. Support Vector Machine classification using all 2D–3D motion features classified 100% of the dyads into their group, meaning that motion behaviours are sufficient to distinguish high-risk from low-risk dyads. The proposed method may present a promising, low-cost methodology that can uniquely use artificial technology to detect meaningful features of human interactions and may have several implications for studying dyadic behaviours in psychiatry. Combining both global rating scales and computerized methods may enable a continuum of time scales, from a summary of entire interactions to second-by-second dynamics. PMID:27219342

  16. Error resilient video coding using virtual reference picture

    NASA Astrophysics Data System (ADS)

    Zhang, Guanjun; Stevenson, Robert L.

    2005-03-01

    Due to widely used motion-compensated prediction coding, errors propagate along the decoded video sequence and may result in severe quality degradation. Various methods have been reported to address this problem based on the common idea of diversifying prediction references. In this paper, we present an alternative way of concealing errors in the reference pictures. A generated virtual picture is used as a reference instead of an actual sequence picture in the temporal prediction. The virtual reference picture is generated in a way that filters out damaged parts of previously decoded pictures, so that the decoder still obtains a clean reference picture in case of errors. Coding efficiency is affected because the virtual reference is less correlated with the currently encoded picture. Simulations with an H.264 codec show a quality improvement of the proposed method over intra-coded macroblock refresh. The method can be used with any motion-compensated video codec to combat channel errors.

  17. Modeling and Analysis of a Lunar Space Reactor with the Computer Code RELAP5-3D/ATHENA

    SciTech Connect

    Carbajo, Juan J; Qualls, A L

    2008-01-01

    The transient analysis 3-dimensional (3-D) computer code RELAP5-3D/ATHENA has been employed to model and analyze a 180 kW (thermal), 40 kW (net electrical) space reactor with eight Stirling engines (SEs). Each SE will generate over 6 kWe; the excess power will be needed for the pumps and other power management devices. The reactor will be cooled by NaK (a eutectic mixture of sodium and potassium which is liquid at ambient temperature). This space reactor is intended to be deployed on the surface of the Moon or Mars. The reactor operating life will be 8 to 10 years. The RELAP5-3D/ATHENA code is being developed and maintained by Idaho National Laboratory. The code can employ a variety of coolants in addition to water, the original coolant employed with early versions of the code. The code can also use 3-D volumes and 3-D junctions, thus allowing for more realistic representation of complex geometries. A combination of 3-D and 1-D volumes is employed in this study. The space reactor model consists of a primary loop and two secondary loops connected by two heat exchangers (HXs). Each secondary loop provides heat to four SEs. The primary loop includes the nuclear reactor with the lower and upper plena, the core with 85 fuel pins, and two vertical heat exchangers (HXs). The maximum coolant temperature of the primary loop is 900 K. The secondary loops also employ NaK as a coolant at a maximum temperature of 877 K. The SE heads are at a temperature of 800 K and the cold sinks are at a temperature of ~400 K. Two radiators will be employed to remove heat from the SEs. The SE HXs surrounding the SE heads are of annular design and have been modeled using 3-D volumes. These 3-D models have been used to improve the HX design by optimizing the flows of coolant and maximizing the heat transferred to the SE heads. The transients analyzed include failure of one or more Stirling engines, trip of the reactor pump, and trips of the secondary loop pumps feeding the HXs of the

  18. Instantaneous helical axis estimation from 3-D video data in neck kinematics for whiplash diagnostics.

    PubMed

    Woltring, H J; Long, K; Osterbauer, P J; Fuhr, A W

    1994-12-01

    To date, the diagnosis of whiplash injuries has been very difficult and largely based on subjective, clinical assessment. The work by Winters and Peles (Multiple Muscle Systems: Biomechanics and Movement Organization, Springer, New York, 1990) suggests that the use of finite helical axes (FHAs) in the neck may provide an objective assessment tool for neck mobility. Thus, the position of the FHA describing head-trunk motion may allow discrimination between normal and pathological cases such as decreased mobility in particular cervical joints. For noisy, unsmoothed data, the FHAs must be taken over rather large angular intervals if the FHAs are to be reconstructed with sufficient accuracy; in the Winters and Peles study, these intervals were approximately 10 degrees. In order to study the movements' microstructure, the present investigation uses instantaneous helical axes (IHAs) estimated from low-pass smoothed video data. Here, the small-step noise sensitivity of the FHA no longer applies, and proper low-pass filtering allows estimation of the IHA even for small rotation velocity omega of the moving neck. For marker clusters mounted on the head and trunk, technical system validation showed that the IHAs' direction dispersions were on the order of one degree, while their position dispersions were on the order of 1 mm, for low-pass cut-off frequencies of a few Hz (the dispersions were calculated from omega-weighted errors, in order to account for the adverse effects of vanishing omega). Various simple, planar models relating the instantaneous, 2-D centre of rotation with the geometry and kinematics of a multi-joint neck model are derived, in order to gauge the utility of the FHA and IHA approaches. Some preliminary results on asymptomatic and pathological subjects are provided, in terms of the 'ruled surface' formed by sampled IHAs and of their piercing points through the mid-sagittal plane during a prescribed flexion-extension movement of the neck.
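    For readers unfamiliar with the construction, the IHA at one time sample follows directly from the smoothed pose of the head-fixed marker cluster relative to the trunk. The numpy sketch below is a minimal illustration under that assumption; it omits the omega-weighting and filtering details reported above and is not the paper's implementation.

        import numpy as np

        def angular_velocity(R, R_dot):
            """Extract omega from the skew-symmetric matrix W = R_dot @ R.T."""
            W = R_dot @ R.T
            return np.array([W[2, 1], W[0, 2], W[1, 0]])

        def instantaneous_helical_axis(R, R_dot, p, p_dot, eps=1e-6):
            """IHA direction n and position s for one time sample.

            R, R_dot : 3x3 orientation of the head cluster relative to the trunk and its
                       time derivative (from low-pass differentiation of smoothed data).
            p, p_dot : position of the cluster origin and its velocity.
            Returns (n, s), with n the unit axis direction and s the axis point closest
            to p; returns None when |omega| is too small for a reliable estimate.
            """
            w = angular_velocity(R, R_dot)
            wn = np.linalg.norm(w)
            if wn < eps:                      # vanishing omega: axis is ill-defined
                return None
            n = w / wn
            s = p + np.cross(w, p_dot) / wn**2
            return n, s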

  19. H.264 Layered Coded Video over Wireless Networks: Channel Coding and Modulation Constraints

    NASA Astrophysics Data System (ADS)

    Ghandi, M. M.; Barmada, B.; Jones, E. V.; Ghanbari, M.

    2006-12-01

    This paper considers the prioritised transmission of H.264 layered coded video over wireless channels. For appropriate protection of video data, methods such as prioritised forward error correction coding (FEC) or hierarchical quadrature amplitude modulation (HQAM) can be employed, but each imposes system constraints. FEC provides good protection but at the price of a high overhead and complexity. HQAM is less complex and does not introduce any overhead, but permits only fixed data ratios between the priority layers. Such constraints are analysed and practical solutions are proposed for layered transmission of data-partitioned and SNR-scalable coded video where combinations of HQAM and FEC are used to exploit the advantages of both coding methods. Simulation results show that the flexibility of SNR scalability and absence of picture drift imply that SNR scalability as modelled is superior to data partitioning in such applications.

  20. Scene-aware joint global and local homographic video coding

    NASA Astrophysics Data System (ADS)

    Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.

    2016-09-01

    Perspective motion is commonly represented in video content that is captured and compressed for various applications including cloud gaming, vehicle and aerial monitoring, etc. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, either due to low prediction accuracy or excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (with equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed a bit rate savings ranging from 3.7 to 9.1%.
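    The scheme above transmits camera intrinsics and extrinsics at the frame level and three plane parameters per block, which together define a plane-induced homography H = K (R - t n^T / d) K^-1 used to warp the reference frame into the block predictor. The snippet below is a hedged sketch of that warping step using numpy and OpenCV purely for illustration; it is not the HEVC test-model implementation, and the coordinate conventions (plane expressed in the reference camera frame, (R, t) mapping reference to current) are assumptions.

        import numpy as np
        import cv2

        def plane_induced_homography(K, R, t, n, d):
            """Homography mapping reference-frame pixels to current-frame pixels for a
            world plane n^T X = d (in the reference camera frame), with relative pose (R, t)."""
            H = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)
            return H / H[2, 2]                            # normalize the homography

        def predict_block(ref_frame, K, R, t, n, d, block, size):
            """Warp the reference frame with the block's plane homography and crop the
            predictor at the current block location. block = (x, y), size = (w, h)."""
            H = plane_induced_homography(K, R, t, n, d)
            warped = cv2.warpPerspective(ref_frame, H, ref_frame.shape[1::-1])
            x, y = block
            w, h = size
            return warped[y:y + h, x:x + w]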

  1. Displaying 3D radiation dose on endoscopic video for therapeutic assessment and surgical guidance.

    PubMed

    Qiu, Jimmy; Hope, Andrew J; Cho, B C John; Sharpe, Michael B; Dickie, Colleen I; DaCosta, Ralph S; Jaffray, David A; Weersink, Robert A

    2012-10-21

    We have developed a method to register and display 3D parametric data, in particular radiation dose, on two-dimensional endoscopic images. This registration of radiation dose to endoscopic or optical imaging may be valuable in assessment of normal tissue response to radiation, and visualization of radiated tissues in patients receiving post-radiation surgery. Electromagnetic sensors embedded in a flexible endoscope were used to track the position and orientation of the endoscope allowing registration of 2D endoscopic images to CT volumetric images and radiation doses planned with respect to these images. A surface was rendered from the CT image based on the air/tissue threshold, creating a virtual endoscopic view analogous to the real endoscopic view. Radiation dose at the surface or at known depth below the surface was assigned to each segment of the virtual surface. Dose could be displayed as either a colorwash on this surface or surface isodose lines. By assigning transparency levels to each surface segment based on dose or isoline location, the virtual dose display was overlaid onto the real endoscope image. Spatial accuracy of the dose display was tested using a cylindrical phantom with a treatment plan created for the phantom that matched dose levels with grid lines on the phantom surface. The accuracy of the dose display in these phantoms was 0.8-0.99 mm. To demonstrate clinical feasibility of this approach, the dose display was also tested on clinical data of a patient with laryngeal cancer treated with radiation therapy, with estimated display accuracy of ∼2-3 mm. The utility of the dose display for registration of radiation dose information to the surgical field was further demonstrated in a mock sarcoma case using a leg phantom. With direct overlay of radiation dose on endoscopic imaging, tissue toxicities and tumor response in endoluminal organs can be directly correlated with the actual tissue dose, offering a more nuanced assessment of normal tissue
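    The display step described above, dose-dependent transparency composited onto the real endoscopic frame, can be sketched as a simple alpha blend, assuming the dose on the rendered virtual surface has already been projected into the endoscope view as a per-pixel dose map. This is an illustrative sketch with numpy and matplotlib only; the clinical system's rendering pipeline is more involved.

        import numpy as np
        from matplotlib import cm

        def dose_colorwash(frame, dose_map, d_min, d_max, alpha_max=0.6):
            """Overlay a dose colorwash on an endoscopic frame.

            frame    : (H, W, 3) uint8 RGB endoscopic image.
            dose_map : (H, W) float array of dose (Gy) projected into the endoscope view,
                       NaN where no rendered surface is visible.
            Pixels below d_min stay fully transparent; opacity ramps linearly to
            alpha_max at d_max, so high-dose regions dominate the overlay.
            """
            d = np.nan_to_num(dose_map, nan=0.0)
            norm = np.clip((d - d_min) / (d_max - d_min), 0.0, 1.0)
            rgba = cm.jet(norm)                            # (H, W, 4) colorwash in [0, 1]
            alpha = (norm * alpha_max)[..., None]
            blended = (1 - alpha) * frame / 255.0 + alpha * rgba[..., :3]
            return (blended * 255).astype(np.uint8)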

  2. Functionally Layered Video Coding for Water Level Monitoring

    NASA Astrophysics Data System (ADS)

    Udomsiri, Sakol; Iwahashi, Masahiro; Muramatsu, Shogo

    This paper proposes a new type of layered video coding, especially for monitoring the water level of a river. A sensor node of the system decomposes an input video signal into several component signals and produces a bit stream functionally separated into three layers. The first layer contains the minimum components effective for detecting the water level; it is transmitted at a very low bit rate for regular monitoring. The second layer contains signals for thumbnail video browsing. The third layer contains additional data for decoding the original video signal. These are transmitted only when necessary. The video signal is decomposed into several bands with the three-dimensional Haar transform. In this paper, the optimum bands to be placed in the first layer are experimentally investigated, considering both water-level detection performance and the data size to be transmitted. As a result, the bit rate for transmitting the first layer is reduced by 32.5% at the cost of a negligible 3.7% decrease in recognition performance for one of the video examples.
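    The decomposition referred to above is a separable Haar analysis along the temporal and two spatial axes. A minimal numpy sketch of one decomposition level is given below, assuming a group of frames with even dimensions; the subband-to-layer assignment studied in the paper is not reproduced.

        import numpy as np

        def haar_1d(x, axis):
            """One level of the orthonormal Haar transform along one axis."""
            x = np.swapaxes(x, axis, -1)
            even, odd = x[..., 0::2], x[..., 1::2]
            low = (even + odd) / np.sqrt(2.0)      # approximation (low-pass) band
            high = (even - odd) / np.sqrt(2.0)     # detail (high-pass) band
            return np.swapaxes(low, axis, -1), np.swapaxes(high, axis, -1)

        def haar_3d(video):
            """One level of the separable 3D Haar transform of a video block.

            video: (T, H, W) array with even T, H, W (e.g. a group of frames).
            Returns a dict of the 8 subbands keyed by 'L'/'H' along (t, y, x);
            for example, 'LLL' is the temporally and spatially low-pass band.
            """
            bands = {"": video.astype(np.float64)}
            for axis in (0, 1, 2):
                new = {}
                for key, data in bands.items():
                    low, high = haar_1d(data, axis)
                    new[key + "L"] = low
                    new[key + "H"] = high
                bands = new
            return bands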

  3. Validation of the BISON 3D Fuel Performance Code: Temperature Comparisons for Concentrically and Eccentrically Located Fuel Pellets

    SciTech Connect

    J. D. Hales; D. M. Perez; R. L. Williamson; S. R. Novascone; B. W. Spencer

    2013-03-01

    BISON is a modern finite-element based nuclear fuel performance code that has been under development at the Idaho National Laboratory (USA) since 2009. The code is applicable to both steady and transient fuel behaviour and is used to analyse either 2D axisymmetric or 3D geometries. BISON has been applied to a variety of fuel forms including LWR fuel rods, TRISO-coated fuel particles, and metallic fuel in both rod and plate geometries. Code validation is currently in progress, principally by comparison to instrumented LWR fuel rods. Halden IFA experiments constitute a large percentage of the current BISON validation base. The validation emphasis here is on fuel centreline temperatures at the beginning of fuel life, with comparisons made to seven rods from the IFA-431 and 432 assemblies. The principal focus is IFA-431 Rod 4, which included both concentrically and eccentrically located fuel pellets. This experiment provides an opportunity to explore 3D thermomechanical behaviour and assess the 3D simulation capabilities of BISON. Analysis results agree with experimental results, showing lower fuel centreline temperatures for eccentric fuel, with the peak temperature shifted away from the centreline. The comparison confirms with modern 3D analysis tools that the measured temperature difference between concentric and eccentric pellets is not an artefact and provides a quantitative explanation for the difference.

  4. User's manual for three dimensional boundary layer (BL3-D) code

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.; Caplin, B.

    1985-01-01

    An assessment has been made of the applicability of a 3-D boundary layer analysis to the calculation of heat transfer, total pressure losses, and streamline flow patterns on the surface of both stationary and rotating turbine passages. In support of this effort, an analysis has been developed to calculate a general nonorthogonal surface coordinate system for arbitrary 3-D surfaces and also to calculate the boundary layer edge conditions for compressible flow using the surface Euler equations and experimental data. To calibrate the method, calculations are presented for the pressure, endwall, and suction surfaces of a stationary cascade and for the pressure surface of a rotating turbine blade. The results strongly indicate that the 3-D boundary layer analysis can give good predictions of the flow field, loss, and heat transfer on the pressure, suction, and endwall surfaces of a gas turbine passage.

  5. User's manual for three dimensional boundary layer (BL3-D) code

    NASA Astrophysics Data System (ADS)

    Anderson, O. L.; Caplin, B.

    1985-08-01

    An assessment has been made of the applicability of a 3-D boundary layer analysis to the calculation of heat transfer, total pressure losses, and streamline flow patterns on the surface of both stationary and rotating turbine passages. In support of this effort, an analysis has been developed to calculate a general nonorthogonal surface coordinate system for arbitrary 3-D surfaces and also to calculate the boundary layer edge conditions for compressible flow using the surface Euler equations and experimental data. To calibrate the method, calculations are presented for the pressure, endwall, and suction surfaces of a stationary cascade and for the pressure surface of a rotating turbine blade. The results strongly indicate that the 3-D boundary layer analysis can give good predictions of the flow field, loss, and heat transfer on the pressure, suction, and endwall surfaces of a gas turbine passage.

  6. TART97 a coupled neutron-photon 3-D, combinatorial geometry Monte Carlo transport code

    SciTech Connect

    Cullen, D.E.

    1997-11-22

    TART97 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time-dependent Monte Carlo transport code. The code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also very fast compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.

  7. Video coding using Karhunen-Loeve transform and motion compensation

    NASA Astrophysics Data System (ADS)

    Musatenko, Yurij S.; Soloveyko, Olexandr M.; Kurashov, Vitalij N.; Dubikovskiy, Vladislav A.

    1999-07-01

    The paper presents a new method for video compression. The discussed technique considers video frames as a set of correlated images. A common approach to the problem of compressing correlated images is to use an orthogonal transform, for example the cosine or wavelet transform, to remove the correlation among images and then to compress the resulting coefficients using a known compression technique such as JPEG or EZW. However, the optimal representation for removing correlation among images is the Karhunen-Loeve (KL) transform. In this paper we apply the recently proposed Optimal Image Coding using KL transform (OICKL) method, which is based on this approach. In order to take into account the nature of video, we use Triangle Motion Compensation to improve the correlation among frames. The experimental part compares the performance of a plain OICKL codec with OICKL and motion compensation combined. Recommendations concerning the use of motion compensation with the OICKL technique are given.
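    The KL transform referred to above is obtained from the eigen-decomposition of the inter-frame covariance matrix. The sketch below decorrelates a group of frames along the temporal axis with numpy; it illustrates only the transform itself, not the OICKL codec or the motion-compensation stage.

        import numpy as np

        def klt_frames(frames):
            """Karhunen-Loeve transform across a group of frames.

            frames: (N, H, W) array; each frame is treated as one correlated 'channel'.
            Returns (coeffs, basis, mean) such that frames can be reconstructed as
            basis @ coeffs + mean, with coefficient planes ordered by decreasing
            variance so later planes can be coded coarsely.
            """
            n, h, w = frames.shape
            X = frames.reshape(n, -1).astype(np.float64)        # N x (H*W)
            mean = X.mean(axis=1, keepdims=True)
            Xc = X - mean
            C = Xc @ Xc.T / Xc.shape[1]                         # N x N inter-frame covariance
            eigval, eigvec = np.linalg.eigh(C)
            order = np.argsort(eigval)[::-1]                    # largest variance first
            basis = eigvec[:, order]                            # N x N KL basis
            coeffs = basis.T @ Xc                               # decorrelated coefficient planes
            return coeffs.reshape(n, h, w), basis, mean

        def inverse_klt(coeffs, basis, mean):
            n, h, w = coeffs.shape
            X = basis @ coeffs.reshape(n, -1) + mean
            return X.reshape(n, h, w)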

  8. A robust low-rate coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Y. C.; Sayood, Khalid; Nelson, D. J.; Arikan, E. (Editor)

    1991-01-01

    Due to the rapidly evolving fields of image processing and networking, video information promises to be an important part of telecommunication systems. Although video transmission has up to now been carried mainly over circuit-switched networks, it is likely that packet-switched networks will dominate the communication world in the near future. Asynchronous transfer mode (ATM) techniques in broadband ISDN can provide a flexible, independent, and high-performance environment for video communication. In this work, the network simulator was used only as a channel. Mixture block coding with progressive transmission (MBCPT) has been investigated for use over packet networks and has been found to provide a high compression rate with good visual performance, robustness to packet loss, tractable integration with network mechanisms, and simplicity in parallel implementation.

  9. A Watermarking Scheme for High Efficiency Video Coding (HEVC)

    PubMed Central

    Swati, Salahuddin; Hayat, Khizar; Shahid, Zafar

    2014-01-01

    This paper presents a high payload watermarking scheme for High Efficiency Video Coding (HEVC). HEVC is an emerging video compression standard that provides better compression performance than its predecessor, H.264/AVC. Considering that HEVC may well be used in a variety of applications in the future, the proposed algorithm has a high potential of utilization in applications involving broadcast and hiding of metadata. The watermark is embedded into the Quantized Transform Coefficients (QTCs) during the encoding process. Later, during the decoding process, the embedded message can be detected and extracted completely. The experimental results show that the proposed algorithm does not significantly affect the video quality, nor does it escalate the bitrate. PMID:25144455
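    The record does not spell out how bits are placed in the QTCs. As a codec-agnostic illustration only (not the authors' embedding rule), a least-significant-bit style embedding in sufficiently large quantized coefficients could look like the sketch below; the threshold, function names, and bit layout are assumptions.

        import numpy as np

        def embed_bits(qtc_block, bits, min_mag=2):
            """Embed bits into the LSBs of sufficiently large quantized coefficients.

            qtc_block: 2D integer array of quantized transform coefficients.
            Only coefficients with |c| >= min_mag are used, which limits the visual
            impact and avoids creating or removing nonzero coefficients (so the
            coefficient scan structure is preserved). Returns the modified block
            and the number of bits actually embedded.
            """
            out = qtc_block.copy()
            flat = out.reshape(-1)
            k = 0
            for i, c in enumerate(flat):
                if k >= len(bits):
                    break
                if abs(c) >= min_mag:
                    sign = -1 if c < 0 else 1
                    mag = (abs(c) & ~1) | bits[k]      # overwrite LSB of the magnitude
                    flat[i] = sign * mag
                    k += 1
            return out, k

        def extract_bits(qtc_block, n_bits, min_mag=2):
            """Recover the embedded bits from the decoded quantized coefficients."""
            flat = qtc_block.reshape(-1)
            bits = [abs(c) & 1 for c in flat if abs(c) >= min_mag]
            return bits[:n_bits]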

  10. The digital code driven autonomous synthesis of ibuprofen automated in a 3D-printer-based robot.

    PubMed

    Kitson, Philip J; Glatzel, Stefan; Cronin, Leroy

    2016-01-01

    An automated synthesis robot was constructed by modifying an open source 3D printing platform. The resulting automated system was used to 3D print reaction vessels (reactionware) of differing internal volumes using polypropylene feedstock via a fused deposition modeling 3D printing approach and subsequently make use of these fabricated vessels to synthesize the nonsteroidal anti-inflammatory drug ibuprofen via a consecutive one-pot three-step approach. The synthesis of ibuprofen could be achieved on different scales simply by adjusting the parameters in the robot control software. The software for controlling the synthesis robot was written in the python programming language and hard-coded for the synthesis of ibuprofen by the method described, opening possibilities for the sharing of validated synthetic 'programs' which can run on similar low cost, user-constructed robotic platforms towards an 'open-source' regime in the area of chemical synthesis.

  11. The digital code driven autonomous synthesis of ibuprofen automated in a 3D-printer-based robot

    PubMed Central

    Kitson, Philip J; Glatzel, Stefan

    2016-01-01

    An automated synthesis robot was constructed by modifying an open source 3D printing platform. The resulting automated system was used to 3D print reaction vessels (reactionware) of differing internal volumes using polypropylene feedstock via a fused deposition modeling 3D printing approach and subsequently make use of these fabricated vessels to synthesize the nonsteroidal anti-inflammatory drug ibuprofen via a consecutive one-pot three-step approach. The synthesis of ibuprofen could be achieved on different scales simply by adjusting the parameters in the robot control software. The software for controlling the synthesis robot was written in the python programming language and hard-coded for the synthesis of ibuprofen by the method described, opening possibilities for the sharing of validated synthetic ‘programs’ which can run on similar low cost, user-constructed robotic platforms towards an ‘open-source’ regime in the area of chemical synthesis. PMID:28144350

  12. Recent Hydrodynamics Improvements to the RELAP5-3D Code

    SciTech Connect

    Richard A. Riemke; Cliff B. Davis; Richard R. Schultz

    2009-07-01

    The hydrodynamics section of the RELAP5-3D computer program has been recently improved. Changes were made as follows: (1) improved turbine model, (2) spray model for the pressurizer model, (3) feedwater heater model, (4) radiological transport model, (5) improved pump model, and (6) compressor model.

  13. Simulations of 3D LPI's relevant to IFE using the PIC code OSIRIS

    NASA Astrophysics Data System (ADS)

    Tsung, F. S.; Mori, W. B.; Winjum, B. J.

    2014-10-01

    We will study three-dimensional effects of laser-plasma instabilities, including backward Raman scattering, the high-frequency hybrid instability, and the two-plasmon instability, using OSIRIS in 3D Cartesian geometry and cylindrical 2D OSIRIS with azimuthal mode decomposition. With our new capabilities we hope to demonstrate that single-speckle physics relevant to IFE can be studied in an efficient manner.

  14. Preliminary Results from the Application of Automated Adjoint Code Generation to CFL3D

    NASA Technical Reports Server (NTRS)

    Carle, Alan; Fagan, Mike; Green, Lawrence L.

    1998-01-01

    This report describes preliminary results obtained using an automated adjoint code generator for Fortran to augment a widely-used computational fluid dynamics flow solver to compute derivatives. These preliminary results with this augmented code suggest that, even in its infancy, the automated adjoint code generator can accurately and efficiently deliver derivatives for use in transonic Euler-based aerodynamic shape optimization problems with hundreds to thousands of independent design variables.

  15. Assessment of 3D Codes for Predicting Liner Attenuation in Flow Ducts

    NASA Technical Reports Server (NTRS)

    Watson, W. R.; Nark, D. M.; Jones, M. G.

    2008-01-01

    This paper presents comparisons of seven propagation codes for predicting liner attenuation in ducts with flow. The selected codes span the spectrum of methods available (finite element, parabolic approximation, and pseudo-time domain) and are collectively representative of the state-of-art in the liner industry. These codes are included because they have two-dimensional and three-dimensional versions and can be exported to NASA's Columbia Supercomputer. The basic assumptions, governing differential equations, boundary conditions, and numerical methods underlying each code are briefly reviewed and an assessment is performed based on two predefined metrics. The two metrics used in the assessment are the accuracy of the predicted attenuation and the amount of wall clock time to predict the attenuation. The assessment is performed over a range of frequencies, mean flow rates, and grazing flow liner impedances commonly used in the liner industry. The primary conclusions of the study are (1) predicted attenuations are in good agreement for rigid wall ducts, (2) the majority of codes compare well to each other and to approximate results from mode theory for soft wall ducts, (3) most codes compare well to measured data on a statistical basis, (4) only the finite element codes with cubic Hermite polynomials capture extremely large attenuations, and (5) wall clock time increases by an order of magnitude or more are observed for a three-dimensional code relative to the corresponding two-dimensional version of the same code.

  16. Motion Information Inferring Scheme for Multi-View Video Coding

    NASA Astrophysics Data System (ADS)

    Koo, Han-Suh; Jeon, Yong-Joon; Jeon, Byeong-Moon

    This letter proposes a motion information inferring scheme for multi-view video coding, motivated by the observation that motion vectors at corresponding positions in a neighboring view pair are quite similar. The proposed method infers the motion information from the corresponding macroblock in the neighboring view after RD optimization with the existing prediction modes. The letter presents an evaluation showing that the method significantly enhances coding efficiency, especially at high bit rates.

  17. iRegNet3D: three-dimensional integrated regulatory network for the genomic analysis of coding and non-coding disease mutations.

    PubMed

    Liang, Siqi; Tippens, Nathaniel D; Zhou, Yaoda; Mort, Matthew; Stenson, Peter D; Cooper, David N; Yu, Haiyuan

    2017-01-18

    The mechanistic details of most disease-causing mutations remain poorly explored within the context of regulatory networks. We present a high-resolution three-dimensional integrated regulatory network (iRegNet3D) in the form of a web tool, where we resolve the interfaces of all known transcription factor (TF)-TF, TF-DNA and chromatin-chromatin interactions for the analysis of both coding and non-coding disease-associated mutations to obtain mechanistic insights into their functional impact. Using iRegNet3D, we find that disease-associated mutations may perturb the regulatory network through diverse mechanisms including chromatin looping. iRegNet3D promises to be an indispensable tool in large-scale sequencing and disease association studies.

  18. Picturewise inter-view prediction selection for multiview video coding

    NASA Astrophysics Data System (ADS)

    Huo, Junyan; Chang, Yilin; Li, Ming; Yang, Haitao

    2010-11-01

    Inter-view prediction is introduced in multiview video coding (MVC) to exploit the inter-view correlation. Statistical analyses show that the coding gain obtained from inter-view prediction is unequal among pictures. On the basis of this observation, a picturewise inter-view prediction selection scheme is proposed. This scheme employs a novel inter-view prediction selection criterion to determine whether it is necessary to apply inter-view prediction to the current coding picture. This criterion is derived from the available coding information of the temporal reference pictures. Experimental results show that the proposed scheme can improve the performance of MVC with a comprehensive consideration of compression efficiency, computational complexity, and random access ability.

  19. Distributed Coding/Decoding Complexity in Video Sensor Networks

    PubMed Central

    Cordeiro, Paulo J.; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972

  20. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    SciTech Connect

    Medley, S. S.; Liu, D.; Gorelenkova, M. V.; Heidbrink, W. W.; Stagner, L.

    2016-01-12

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U plasmas). The legacy TRANSP code did not handle halo neutrals properly since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a 'beam-in-a-box' model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.

  1. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    NASA Astrophysics Data System (ADS)

    Medley, S. S.; Liu, D.; Gorelenkova, M. V.; Heidbrink, W. W.; Stagner, L.

    2016-02-01

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U plasmas). The legacy TRANSP code did not handle halo neutrals properly since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a ‘beam-in-a-box’ model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.

  2. Analysis and Compensation for Lateral Chromatic Aberration in a Color Coding Structured Light 3D Measurement System

    PubMed Central

    Huang, Junhui; Xue, Qi; Wang, Zhao; Gao, Jianmin

    2016-01-01

    While color-coding methods have improved the measuring efficiency of a structured light three-dimensional (3D) measurement system, they decrease the measuring accuracy significantly due to lateral chromatic aberration (LCA). In this study, the LCA in a structured light measurement system is analyzed, and a method is proposed to compensate for the error caused by LCA. Firstly, based on the projective transformation, a 3D error map of LCA is constructed in the projector images by using a flat board and comparing the image coordinates of red, green and blue circles with the coordinates of white circles at preselected sample points within the measurement volume. The 3D map consists of the errors, which are the equivalent errors caused by LCA of the camera and projector. Then, in measurements, error values of LCA are calculated and compensated to correct the projector image coordinates through the 3D error map and a tri-linear interpolation method. Finally, 3D coordinates with higher accuracy are re-calculated according to the compensated image coordinates. The effectiveness of the proposed method is verified experimentally. PMID:27598174
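    The compensation step above amounts to looking up a correction vector at an arbitrary 3D point by tri-linear interpolation of the sampled error map. The sketch below assumes the map is stored on a regular grid of (du, dv) projector-image corrections; the grid layout and names are assumptions for illustration, not the paper's data structure.

        import numpy as np

        def trilinear(error_map, grid_min, grid_step, p):
            """Tri-linearly interpolate a 3D error map at point p.

            error_map : (Nx, Ny, Nz, 2) array of (du, dv) projector-image corrections
                        sampled on a regular grid within the measurement volume.
            grid_min  : (3,) coordinates of the first grid node.
            grid_step : (3,) grid spacing along x, y, z.
            p         : (3,) query point inside the measurement volume.
            """
            f = (np.asarray(p) - grid_min) / grid_step          # fractional grid index
            i0 = np.clip(np.floor(f).astype(int), 0,
                         np.array(error_map.shape[:3]) - 2)     # lower corner of the cell
            t = f - i0                                           # interpolation weights
            out = np.zeros(error_map.shape[3])
            for dx in (0, 1):
                for dy in (0, 1):
                    for dz in (0, 1):
                        w = ((t[0] if dx else 1 - t[0]) *
                             (t[1] if dy else 1 - t[1]) *
                             (t[2] if dz else 1 - t[2]))
                        out += w * error_map[i0[0] + dx, i0[1] + dy, i0[2] + dz]
            return out   # (du, dv) correction for the projector image coordinates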

  3. Analysis and Compensation for Lateral Chromatic Aberration in a Color Coding Structured Light 3D Measurement System.

    PubMed

    Huang, Junhui; Xue, Qi; Wang, Zhao; Gao, Jianmin

    2016-09-03

    While color-coding methods have improved the measuring efficiency of a structured light three-dimensional (3D) measurement system, they decrease the measuring accuracy significantly due to lateral chromatic aberration (LCA). In this study, the LCA in a structured light measurement system is analyzed, and a method is proposed to compensate for the error caused by LCA. Firstly, based on the projective transformation, a 3D error map of LCA is constructed in the projector images by using a flat board and comparing the image coordinates of red, green and blue circles with the coordinates of white circles at preselected sample points within the measurement volume. The 3D map consists of the errors, which are the equivalent errors caused by LCA of the camera and projector. Then, in measurements, error values of LCA are calculated and compensated to correct the projector image coordinates through the 3D error map and a tri-linear interpolation method. Finally, 3D coordinates with higher accuracy are re-calculated according to the compensated image coordinates. The effectiveness of the proposed method is verified experimentally.

  4. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes

    SciTech Connect

    Frambati, S.; Frignani, M.

    2012-07-01

    We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in the computer-aided design for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)

  5. Numerical Simulation of Two-grid Ion Optics Using a 3D Code

    NASA Technical Reports Server (NTRS)

    Anderson, John R.; Katz, Ira; Goebel, Dan

    2004-01-01

    A three-dimensional ion optics code has been developed under NASA's Project Prometheus to model two grid ion optics systems. The code computes the flow of positive ions from the discharge chamber through the ion optics and into the beam downstream of the thruster. The rate at which beam ions interact with background neutral gas to form charge exchange ions is also computed. Charge exchange ion trajectories are computed to determine where they strike the ion optics grid surfaces and to determine the extent of sputter erosion they cause. The code has been used to compute predictions of the erosion pattern and wear rate on the NSTAR ion optics system; the code predicts the shape of the eroded pattern but overestimates the initial wear rate by about 50%. An example of use of the code to estimate the NEXIS thruster accelerator grid life is also presented.

  6. Development of a 3-D upwind PNS code for chemically reacting hypersonic flowfields

    NASA Technical Reports Server (NTRS)

    Tannehill, J. C.; Wadawadigi, G.

    1992-01-01

    Two new parabolized Navier-Stokes (PNS) codes were developed to compute the three-dimensional, viscous, chemically reacting flow of air around hypersonic vehicles such as the National Aero-Space Plane (NASP). The first code (TONIC) solves the gas dynamic and species conservation equations in a fully coupled manner using an implicit, approximately-factored, central-difference algorithm. This code was upgraded to include shock fitting and the capability of computing the flow around complex body shapes. The revised TONIC code was validated by computing the chemically-reacting (M∞ = 25.3) flow around a 10 deg half-angle cone at various angles of attack and the Ames All-Body model at 0 deg angle of attack. The results of these calculations were in good agreement with the results from the UPS code. One of the major drawbacks of the TONIC code is that the central-differencing of fluxes across interior flowfield discontinuities tends to introduce errors into the solution in the form of local flow property oscillations. The second code (UPS), originally developed for a perfect gas, has been extended to permit either perfect gas, equilibrium air, or nonequilibrium air computations. The code solves the PNS equations using a finite-volume, upwind TVD method based on Roe's approximate Riemann solver that was modified to account for real gas effects. The dissipation term associated with this algorithm is sufficiently adaptive to flow conditions that, even when attempting to capture very strong shock waves, no additional smoothing is required. For nonequilibrium calculations, the code solves the fluid dynamic and species continuity equations in a loosely-coupled manner. This code was used to calculate the hypersonic, laminar flow of chemically reacting air over cones at various angles of attack. In addition, the flow around the McDonnell Douglas generic option blended-wing-body was computed and comparisons were made between the perfect gas, equilibrium air, and the

  7. A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals

    NASA Technical Reports Server (NTRS)

    Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.

    1994-01-01

    Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids and to control the rotodynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven year effort was established in 1990 by NASA's Office of Aeronautics Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. Aerospace Industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinates (BFC) systems, high order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.

  8. Perceptual video quality assessment in H.264 video coding standard using objective modeling.

    PubMed

    Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu

    2014-01-01

    Since the use of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for an H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute the perceptual video quality metric based on a no-reference method. Because of the subtle differences between the original video and the encoded video, the quality of the encoded picture is degraded; this quality difference is introduced by encoding processes such as intra and inter prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective modeling of these artifacts into a subjective quality estimate is proposed. The proposed model calculates the objective quality metric using the subjective impairments blockiness, blur, and jerkiness, compared to the existing bitrate-only calculation defined in the ITU-T G.1070 model. The accuracy of the proposed perceptual video quality metrics is compared against popular full-reference objective methods as defined by VQEG.
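    Of the three impairments combined by the model, blockiness is the most codec-specific. A common no-reference estimate, shown below as an illustration rather than the paper's exact formulation, compares luminance differences across 8x8 block boundaries with differences inside blocks.

        import numpy as np

        def blockiness(gray, block=8):
            """Simple no-reference blockiness estimate for a grayscale frame.

            Ratio of the mean absolute luminance difference across vertical and
            horizontal block boundaries to the mean difference elsewhere; values
            well above 1 indicate visible blocking artifacts.
            """
            g = gray.astype(np.float64)
            dh = np.abs(np.diff(g, axis=1))                          # horizontal neighbour differences
            dv = np.abs(np.diff(g, axis=0))                          # vertical neighbour differences
            h_idx = np.arange(dh.shape[1]) % block == block - 1      # column block boundaries
            v_idx = np.arange(dv.shape[0]) % block == block - 1      # row block boundaries
            boundary = dh[:, h_idx].mean() + dv[v_idx, :].mean()
            interior = dh[:, ~h_idx].mean() + dv[~v_idx, :].mean()
            return boundary / (interior + 1e-12)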

  9. Development of a Top-View Numeric Coding Teaching-Learning Trajectory within an Elementary Grades 3-D Visualization Design Research Project

    ERIC Educational Resources Information Center

    Sack, Jacqueline J.

    2013-01-01

    This article explicates the development of top-view numeric coding of 3-D cube structures within a design research project focused on 3-D visualization skills for elementary grades children. It describes children's conceptual development of 3-D cube structures using concrete models, conventional 2-D pictures and abstract top-view numeric…
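    For readers unfamiliar with the representation, top-view numeric coding writes on each cell of the structure's footprint the number of cubes stacked in that column. The sketch below is an illustrative encoding assuming the structure is given as a list of unit-cube coordinates; it is not drawn from the article.

        from collections import defaultdict

        def top_view_code(cubes):
            """Top-view numeric coding of a 3-D cube structure.

            cubes: iterable of (x, y, z) integer positions of unit cubes, z = 0 being
            the bottom layer. Returns a 2-D grid (list of rows) where each entry is
            the number of cubes stacked on that footprint cell, i.e. the numbers a
            child would write on the top view.
            """
            heights = defaultdict(int)
            for x, y, _z in cubes:
                heights[(x, y)] += 1
            if not heights:
                return []
            xs = [x for x, _ in heights]
            ys = [y for _, y in heights]
            return [[heights.get((x, y), 0) for x in range(min(xs), max(xs) + 1)]
                    for y in range(min(ys), max(ys) + 1)]

        # Example: an L-shaped structure, two cubes tall in one corner.
        print(top_view_code([(0, 0, 0), (0, 0, 1), (1, 0, 0), (0, 1, 0)]))
        # [[2, 1], [1, 0]]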

  10. Users manual for CAFE-3D : a computational fluid dynamics fire code.

    SciTech Connect

    Khalil, Imane; Lopez, Carlos; Suo-Anttila, Ahti Jorma

    2005-03-01

    The Container Analysis Fire Environment (CAFE) computer code has been developed to model all relevant fire physics for predicting the thermal response of massive objects engulfed in large fires. It provides realistic fire thermal boundary conditions for use in design of radioactive material packages and in risk-based transportation studies. The CAFE code can be coupled to commercial finite-element codes such as MSC PATRAN/THERMAL and ANSYS. This coupled system of codes can be used to determine the internal thermal response of finite element models of packages to a range of fire environments. This document is a user manual describing how to use the three-dimensional version of CAFE, as well as a description of CAFE input and output parameters. Since this is a user manual, only a brief theoretical description of the equations and physical models is included.

  11. SPACE CHARGE DYNAMICS SIMULATED IN 3 - D IN THE CODE ORBIT.

    SciTech Connect

    Luccio, A.U.; Dimperio, N.L.; Beebe-Wang, J.

    2002-06-02

    Several improvements have been made to the space charge calculations in the PIC code ORBIT, which is specialized for high intensity circular hadron accelerators. We present results of different Poisson solvers in the presence of conductive walls.

  12. Version 3.0 of code Java for 3D simulation of the CCA model

    NASA Astrophysics Data System (ADS)

    Zhang, Kebo; Zuo, Junsen; Dou, Yifeng; Li, Chao; Xiong, Hailing

    2016-10-01

    In this paper we provide a new version of the program, replacing the previous version. The frequency of traversing the cluster list was reduced, and some code blocks were optimized; in addition, we added and revised the source code comments for some methods and attributes. The experimental comparison shows that the new version has better time efficiency than the previous version.

  13. Video coding for next-generation surveillance systems

    NASA Astrophysics Data System (ADS)

    Klasen, Lena M.; Fahlander, Olov

    1997-02-01

    Video is used as the recording medium in surveillance systems and, increasingly, by the Swedish Police Force. Methods for analyzing video using an image processing system have recently been introduced at the Swedish National Laboratory of Forensic Science, and new methods are the focus of a research project at Linkoping University, Image Coding Group. The accuracy of the results of these forensic investigations often depends on the quality of the video recordings, and one of the major problems when analyzing videos from crime scenes is the poor quality of the recordings. Enhancing poor image quality might add manipulative or subjective effects and does not seem to be the right way to obtain reliable analysis results. The surveillance systems in use today are mainly based on video techniques, VHS or S-VHS, and the weakest link is the video cassette recorder (VCR). Multiplexers that select one of many camera outputs for recording are another problem, as they often filter the video signal, and recording is limited to only one of the available cameras connected to the VCR. A way to get around the problem of poor recording is to record all camera outputs digitally and simultaneously. It is also very important to build such a system bearing in mind that image processing analysis methods are becoming more important as a complement to the human eye. Using one or more cameras produces a large amount of data, and the need for data compression is more than obvious. Crime scenes often involve persons or moving objects, and the available coding techniques are more or less suited to this. Our goal is to propose a possible system that is the best compromise with respect to what needs to be recorded, movements in the recorded scene, loss of information, resolution, etc., so as to secure efficient recording of the crime and enable forensic analysis. The preventive effect of having a well-functioning surveillance system and well-established image analysis methods is not to be neglected. Aspects of

  14. PORTA: A Massively Parallel Code for 3D Non-LTE Polarized Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Štěpán, J.

    2014-10-01

    The interpretation of the Stokes profiles of the solar (stellar) spectral line radiation requires solving a non-LTE radiative transfer problem that can be very complex, especially when the main interest lies in modeling the linear polarization signals produced by scattering processes and their modification by the Hanle effect. One of the main difficulties is due to the fact that the plasma of a stellar atmosphere can be highly inhomogeneous and dynamic, which implies the need to solve the non-equilibrium problem of generation and transfer of polarized radiation in realistic three-dimensional stellar atmospheric models. Here we present PORTA, a computer program we have developed for solving, in three-dimensional (3D) models of stellar atmospheres, the problem of the generation and transfer of spectral line polarization taking into account anisotropic radiation pumping and the Hanle and Zeeman effects in multilevel atoms. The numerical method of solution is based on a highly convergent iterative algorithm, whose convergence rate is insensitive to the grid size, and on an accurate short-characteristics formal solver of the Stokes-vector transfer equation which uses monotonic Bezier interpolation. In addition to the iterative method and the 3D formal solver, another important feature of PORTA is a novel parallelization strategy suitable for taking advantage of massively parallel computers. Linear scaling of the solution with the number of processors allows to reduce the solution time by several orders of magnitude. We present useful benchmarks and a few illustrations of applications using a 3D model of the solar chromosphere resulting from MHD simulations. Finally, we present our conclusions with a view to future research. For more details see Štěpán & Trujillo Bueno (2013).

  15. Comparison of a 3-D GPU-Assisted Maxwell Code and Ray Tracing for Reflectometry on ITER

    NASA Astrophysics Data System (ADS)

    Gady, Sarah; Kubota, Shigeyuki; Johnson, Irena

    2015-11-01

    Electromagnetic wave propagation and scattering in magnetized plasmas are important diagnostics for high temperature plasmas. 1-D and 2-D full-wave codes are standard tools for measurements of the electron density profile and fluctuations; however, ray tracing results have shown that beam propagation in tokamak plasmas is inherently a 3-D problem. The GPU-Assisted Maxwell Code utilizes the FDTD (Finite-Difference Time-Domain) method for solving the Maxwell equations with the cold plasma approximation in a 3-D geometry. Parallel processing with GPGPU (General-Purpose computing on Graphics Processing Units) is used to accelerate the computation. Previously, we reported on initial comparisons of the code results to 1-D numerical and analytical solutions, where the size of the computational grid was limited by the on-board memory of the GPU. In the current study, this limitation is overcome by using domain decomposition and an additional GPU. As a practical application, this code is used to study the current design of the ITER Low Field Side Reflectometer (LSFR) for the Equatorial Port Plug 11 (EPP11). A detailed examination of Gaussian beam propagation in the ITER edge plasma will be presented, as well as comparisons with ray tracing. This work was made possible by funding from the Department of Energy for the Summer Undergraduate Laboratory Internship (SULI) program. This work is supported by the US DOE Contract No.DE-AC02-09CH11466 and DE-FG02-99-ER54527.
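    For context, the core of any FDTD Maxwell solver is the leapfrog update of the electric and magnetic fields on a staggered (Yee) grid. The sketch below shows a vacuum 1-D update in normalized units as a minimal illustration; the cold-plasma current response, 3-D geometry, domain decomposition, and GPU kernels of the actual code are all omitted.

        import numpy as np

        def fdtd_1d(nx=400, n_steps=800, source_pos=50):
            """Vacuum 1-D FDTD (Yee) update for Ez and Hy in normalized units
            (c = 1, dx = 1, dt = 0.5 dx to satisfy the Courant condition)."""
            dt, dx = 0.5, 1.0
            ez = np.zeros(nx)            # E at grid nodes
            hy = np.zeros(nx - 1)        # H at staggered half-nodes
            for n in range(n_steps):
                hy += dt / dx * (ez[1:] - ez[:-1])                  # Faraday's law
                ez[1:-1] += dt / dx * (hy[1:] - hy[:-1])            # Ampere's law (vacuum)
                ez[source_pos] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source
            return ez

        if __name__ == "__main__":
            ez = fdtd_1d()
            print("peak |Ez| after propagation:", np.abs(ez).max())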

  16. Three-dimensional inelastic analysis for hot section components, BEST 3D code

    NASA Technical Reports Server (NTRS)

    Wilson, Raymond B.; Banerjee, Prasanta K.

    1987-01-01

    The goal is the development of an alternative stress analysis tool, distinct from the finite element method, applicable to the engineering analysis of gas turbine engine structures. The boundary element method was selected for this development effort on the basis of its already demonstrated applicability to a variety of geometries and problem types characteristic of gas turbine engine components. Major features of the BEST3D computer program are described, and some of the significant developments carried out as part of the Inelastic Methods Contract are outlined.

  17. Block-based embedded color image and video coding

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Pearlman, William A.; Islam, Asad

    2004-01-01

    The Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparably to the best known grayscale still image coders, such as EZW, SPIHT, and JPEG2000. In this paper, we first propose Color-SPECK (CSPECK), a natural extension of SPECK to handle color still images in the YUV 4:2:0 format. Extensions to other YUV formats are also possible. PSNR results indicate that CSPECK is among the best known color coders, while the perceptual quality of reconstruction is superior to that of SPIHT and JPEG2000. We then propose a moving-picture based coding system called Motion-SPECK, with CSPECK as the core algorithm in an intra-based setting. Specifically, we demonstrate two modes of operation of Motion-SPECK, namely the constant-rate mode, where every frame is coded at the same bit-rate, and the constant-distortion mode, where we ensure the same quality for each frame. Results on well-known CIF sequences indicate that Motion-SPECK performs comparably to Motion-JPEG2000, while the visual quality of the sequence is in general superior. Both CSPECK and Motion-SPECK automatically inherit all the desirable features of SPECK, such as embeddedness, low computational complexity, highly efficient performance, fast decoding and low dynamic memory requirements. The intended applications of Motion-SPECK would be high-end and emerging video applications such as high-quality digital video recording systems, Internet video and medical imaging.

  18. Validation and Comparison of 2D and 3D Codes for Nearshore Motion of Long Waves Using Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey

    2016-04-01

    Tsunamis are huge waves with long wave periods and wave lengths that can cause great devastation and loss of life when they strike a coast. The interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes, FLOW 3D and NAMI DANCE, that analyze tsunami propagation and inundation patterns are considered. Flow 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses finite difference computational method to solve 2D depth-averaged linear and nonlinear forms of shallow water equations (NSWE) in long wave problems, specifically tsunamis. In order to validate these two codes and analyze the differences between 3D-NS and 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach. The experimental setup is a 1:400 scale model of Monai Valley located on the west coast of Okushiri Island, Japan. Other benchmark problem is discussed in 2015 National Tsunami Hazard Mitigation Program (NTHMP) Annual meeting in Portland, USA. It is a field dataset, recording the Japan 2011 tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and benchmark data. The differences between 3D-NS and 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons. Acknowledgements: Partial support by Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT

  19. 3-D kinetics simulations of the NRU reactor using the DONJON code

    SciTech Connect

    Leung, T. C.; Atfield, M. D.; Koclas, J.

    2006-07-01

    The NRU reactor is highly heterogeneous, heavy-water cooled and moderated, with online refuelling capability. It is licensed to operate at a maximum power of 135 MW, with a peak thermal flux of approximately 4.0 × 10^18 n·m^-2·s^-1. In support of the safe operation of NRU, three-dimensional kinetics calculations for reactor transients have been performed using the DONJON code. The code was initially designed to perform space-time kinetics calculations for the CANDU® power reactors. This paper describes how the DONJON code can be applied to perform neutronic simulations for the analysis of reactor transients in NRU, and presents calculation results for some transients. (authors)

  20. Fast coding unit selection method for high efficiency video coding intra prediction

    NASA Astrophysics Data System (ADS)

    Xiong, Jian

    2013-07-01

    The high efficiency video coding (HEVC) video coding standard under development can achieve higher compression performance than previous standards, such as MPEG-4, H.263, and H.264/AVC. To improve coding performance, a quad-tree coding structure and a robust rate-distortion (RD) optimization technique are used to select an optimum coding mode. Since the RD costs of all possible coding modes are computed to decide an optimum mode, high computational complexity is induced in the encoder. A fast learning-based coding unit (CU) size selection method is presented for HEVC intra prediction. The proposed algorithm is based on theoretical analysis showing that the non-normalized histogram of oriented gradients (n-HOG) can be used to help select the CU size. A codebook is constructed offline by clustering the n-HOGs of training sequences for each CU size. The optimum size is determined by comparing the n-HOG of the current CU with the learned codebooks. Experimental results show that the CU size selection scheme speeds up intra coding significantly with negligible loss of peak signal-to-noise ratio.
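
    A toy version of the CU-size decision described above might look as follows: compute the non-normalized histogram of oriented gradients (n-HOG) of a block and pick the CU size whose offline-learned codebook contains the closest codeword. The 8-bin orientation quantization, the Euclidean distance, and the random stand-in codebooks are assumptions made for illustration only.

    ```python
    import numpy as np

    def n_hog(block, nbins=8):
        gy, gx = np.gradient(block.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), np.pi)           # unsigned orientation
        bins = np.minimum((ang / np.pi * nbins).astype(int), nbins - 1)
        # "Non-normalized": accumulate raw gradient magnitudes per orientation bin
        return np.bincount(bins.ravel(), weights=mag.ravel(), minlength=nbins)

    def select_cu_size(block, codebooks):
        """codebooks: {cu_size: array of codeword n-HOGs learned offline by clustering}."""
        h = n_hog(block)
        dists = {size: np.min(np.linalg.norm(cb - h, axis=1))
                 for size, cb in codebooks.items()}
        return min(dists, key=dists.get)                  # closest codebook wins

    # Usage with random stand-in codebooks (real ones would come from offline clustering)
    rng = np.random.default_rng(0)
    codebooks = {s: rng.random((16, 8)) * s for s in (64, 32, 16, 8)}
    print(select_cu_size(rng.integers(0, 255, (64, 64)), codebooks))
    ```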

  1. Numerical simulation of jet aerodynamics using the three-dimensional Navier-Stokes code PAB3D

    NASA Technical Reports Server (NTRS)

    Pao, S. Paul; Abdol-Hamid, Khaled S.

    1996-01-01

    This report presents a unified method for subsonic and supersonic jet analysis using the three-dimensional Navier-Stokes code PAB3D. The Navier-Stokes code was used to obtain solutions for axisymmetric jets with on-design operating conditions at Mach numbers ranging from 0.6 to 3.0, supersonic jets containing weak shocks and Mach disks, and supersonic jets with nonaxisymmetric nozzle exit geometries. This report discusses computational methods, code implementation, computed results, and comparisons with available experimental data. Very good agreement is shown between the numerical solutions and available experimental data over a wide range of operating conditions. The Navier-Stokes method using the standard Jones-Launder two-equation kappa-epsilon turbulence model can accurately predict jet flow, and such predictions are made without any modification to the published constants for the turbulence model.

  2. Application of ATHLET/DYN3D coupled codes system for fast liquid metal cooled reactor steady state simulation

    NASA Astrophysics Data System (ADS)

    Ivanov, V.; Samokhin, A.; Danicheva, I.; Khrennikov, N.; Bouscuet, J.; Velkov, K.; Pasichnyk, I.

    2017-01-01

    In this paper, the approaches used to develop the BN-800 reactor test model and to validate coupled neutron-physics and thermal-hydraulic calculations are described. The coupled codes ATHLET 3.0 (a code for thermal-hydraulic calculations of reactor transients) and DYN3D (a three-dimensional neutron kinetics code) are used for the calculations. The main calculation results for the reactor steady-state condition are provided. The 3-D model used for the neutron calculations was developed for the initial BN-800 core load. A homogeneous approach is used to describe the reactor assemblies. Along with the main simplifications, the main BN-800 core zones are described (LEZ, MEZ, HEZ, MOX, blankets). The 3D neutron-physics calculations were performed with a 28-group library based on the evaluated nuclear data ENDF/B-7.0. The SCALE code was used for the preparation of group constants. The nodalization hydraulic model has boundary conditions given by the coolant mass-flow rate at the core inlet and by pressure and enthalpy at the core outlet, which can be chosen depending on the reactor state. Core inlet and outlet temperatures were chosen according to the nominal reactor state. The coolant mass-flow-rate profiling through the core is based on the reactor power distribution. Test thermal-hydraulic calculations made with the developed model showed acceptable results for the coolant mass-flow-rate distribution through the reactor core and for the axial temperature and pressure distributions. The developed model will be upgraded in the future for the analysis of different transients in BN-type metal-cooled fast reactors, including reactivity transients (control rod withdrawal, stop of the main circulation pump, etc.).

  3. Acoustic Scattering by Three-Dimensional Stators and Rotors Using the SOURCE3D Code. Volume 1; Analysis and Results

    NASA Technical Reports Server (NTRS)

    Meyer, Harold D.

    1999-01-01

    This report provides a study of rotor and stator scattering using the SOURCE3D Rotor Wake/Stator Interaction Code. SOURCE3D is a quasi-three-dimensional computer program that uses three-dimensional acoustics and two-dimensional cascade load response theory to calculate rotor and stator modal reflection and transmission (scattering) coefficients. SOURCE3D is at the core of the TFaNS (Theoretical Fan Noise Design/Prediction System), developed for NASA, which provides complete fully coupled (inlet, rotor, stator, exit) noise solutions for turbofan engines. The reason for studying scattering is that we must first understand the behavior of the individual scattering coefficients provided by SOURCE3D, before eventually understanding the more complicated predictions from TFaNS. To study scattering, we have derived a large number of scattering curves for vane and blade rows. The curves are plots of output wave power divided by input wave power (in dB units) versus vane/blade ratio. Some of these plots are shown in this report. All of the plots are provided in a separate volume. To assist in understanding the plots, formulas have been derived for special vane/blade ratios for which wavefronts are either parallel or normal to rotor or stator chords. From the plots, we have found that, for the most part, there was strong transmission and weak reflection over most of the vane/blade ratio range for the stator. For the rotor, there was little transmission loss.

  4. Dependent video coding using a tree representation of pixel dependencies

    NASA Astrophysics Data System (ADS)

    Amati, Luca; Valenzise, Giuseppe; Ortega, Antonio; Tubaro, Stefano

    2011-09-01

    Motion-compensated prediction induces a chain of coding dependencies between pixels in video. In principle, an optimal selection of encoding parameters (motion vectors, quantization parameters, coding modes) should take into account the whole temporal horizon of a GOP. However, in practical coding schemes, these choices are made on a frame-by-frame basis, thus with a possible loss of performance. In this paper we describe a tree-based model for pixelwise coding dependencies: each pixel in a frame is the child of a pixel in a previous reference frame. We show that some tree structures are more favorable than others from a rate-distortion perspective, e.g., because they entail a large descendance of pixels which are well predicted from a common ancestor. In those cases, a higher quality has to be assigned to pixels at the top of such trees. We promote the creation of these structures by adding a special discount term to the conventional Lagrangian cost adopted at the encoder. The proposed model can be implemented through a double-pass encoding procedure. Specifically, we devise heuristic cost functions to drive the selection of quantization parameters and of motion vectors, which can be readily implemented into a state-of-the-art H.264/AVC encoder. Our experiments demonstrate that coding efficiency is improved for video sequences with low motion, while there are no apparent gains for more complex motion. We argue that this is due to both the presence of complex encoder features not captured by the model, and to the complexity of the source to be encoded.
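
    One hedged way to realize the "discount term" idea sketched above is to weight a block's distortion by the number of pixels that descend from it in the dependency tree, so that heavily referenced blocks are encoded at higher quality. The linear weighting, the constant mu, and the toy numbers below are illustrative assumptions, not the cost function actually used in the paper.

    ```python
    def weighted_cost(distortion, rate, lam, n_descendants, mu=0.05):
        # Conventional Lagrangian cost is J = D + lambda * R; here the distortion of
        # a block is weighted up by how many future pixels descend from it, which
        # plays the role of the discount favouring high quality at the tree roots.
        return (1.0 + mu * n_descendants) * distortion + lam * rate

    def pick_qp(candidates, lam, n_descendants):
        """candidates: list of (qp, distortion, rate) triples from trial encodes."""
        return min(candidates,
                   key=lambda c: weighted_cost(c[1], c[2], lam, n_descendants))[0]

    # Toy example: with many descendants the lower-QP (higher-quality) option wins.
    cands = [(30, 120.0, 800.0), (24, 60.0, 1400.0)]
    print(pick_qp(cands, lam=0.2, n_descendants=0))    # -> 30 (few dependents)
    print(pick_qp(cands, lam=0.2, n_descendants=50))   # -> 24 (heavily referenced)
    ```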

  5. Applications of the 3-D Deterministic Transport Code Attila for Core Safety Analysis

    SciTech Connect

    D. S. Lucas

    2004-10-01

    An LDRD (Laboratory Directed Research and Development) project is ongoing at the Idaho National Engineering and Environmental Laboratory (INEEL) for applying the three-dimensional multi-group deterministic neutron transport code (Attila®) to criticality, flux, and depletion calculations of the Advanced Test Reactor (ATR). This paper discusses the model development, the capabilities of Attila, the generation of the cross-section libraries, comparisons to an ATR MCNP model, and future work.

  6. Dual Cauchy rate-distortion model for video coding

    NASA Astrophysics Data System (ADS)

    Zeng, Huanqiang; Chen, Jing; Cai, Canhui

    2014-07-01

    A dual Cauchy rate-distortion model is proposed for video coding. In our approach, the coefficient distribution of the integer transform is first studied. Then, based on the observation that the rate-distortion model of the luminance and that of the chrominance can be well expressed by separate Cauchy functions, a dual Cauchy rate-distortion model is presented. Furthermore, the simplified rate-distortion formulas are deduced to reduce the computational complexity of the proposed model without losing the accuracy. Experimental results have shown that the proposed model is better able to approximate the actual rate-distortion curve for various sequences with different motion activities.
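
    The following sketch shows one way such a dual model could be fitted in practice: separate Cauchy-style rate curves R(Q) = a * Q**(-alpha) are estimated for luminance and chrominance by log-linear least squares and summed to give the total rate. The functional form, the synthetic data, and the fitting procedure are assumptions; the paper's exact formulas may differ.

    ```python
    import numpy as np

    def fit_cauchy_rate(q, rates):
        # log R = log a - alpha * log Q  ->  ordinary least squares in log-log space
        A = np.vstack([np.ones_like(q, dtype=float), -np.log(q)]).T
        coef, *_ = np.linalg.lstsq(A, np.log(rates), rcond=None)
        return np.exp(coef[0]), coef[1]              # (a, alpha)

    def dual_cauchy_rate(q, luma_params, chroma_params):
        aY, alY = luma_params
        aC, alC = chroma_params
        return aY * q ** (-alY) + aC * q ** (-alC)   # total bits = luma + chroma

    # Synthetic example: "measured" rates at a few quantization steps
    q = np.array([8.0, 12.0, 16.0, 24.0, 32.0])
    rY = 5.0e5 * q ** -1.2                           # hypothetical luma rates
    rC = 8.0e4 * q ** -0.9                           # hypothetical chroma rates
    pY, pC = fit_cauchy_rate(q, rY), fit_cauchy_rate(q, rC)
    print(dual_cauchy_rate(20.0, pY, pC))
    ```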

  7. Drug-laden 3D biodegradable label using QR code for anti-counterfeiting of drugs.

    PubMed

    Fei, Jie; Liu, Ran

    2016-06-01

    Wiping out counterfeit drugs is a great task for public health care around the world. The proliferation of these drugs can make treatment potentially harmful or even lethal. In this paper, a biodegradable drug-laden QR code label for the anti-counterfeiting of drugs is proposed that can provide non-fluorescent recognition and high capacity. It is fabricated by laser cutting, which produces varying roughness over the surface and thus differences in the gray levels of the QR code pattern on the translucent material, and by a micro-molding process to obtain the drug-laden biodegradable label. We screened biomaterials that satisfy the relevant conditions and the further requirements of the package. The drug-laden microlabel is placed on the surface of the troche or the bottom of the capsule and can be read by a simple smartphone QR code reader application. Labeling the pill directly and decoding the information successfully mean a more convenient and simpler operation, with non-fluorescent recognition and high capacity, in contrast to traditional methods.

  8. Perceptual coding of stereo endoscopy video for minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Bartoli, Guido; Menegaz, Gloria; Yang, Guang Zhong

    2007-03-01

    In this paper, we propose a compression scheme that is tailored for stereo-laparoscope sequences. The inter-frame correlation is modeled by the deformation field obtained by elastic registration between two subsequent frames and exploited for prediction of the left sequence. The right sequence is lossy encoded by prediction from the corresponding left images. Wavelet-based coding is applied to both the deformation vector fields and the residual images. The resulting system supports spatio-temporal scalability while providing lossless performance. The implementation of the wavelet transform by integer lifting ensures a low computational complexity, thus reducing the required run-time memory allocation and enabling online implementation. Extensive psychovisual tests were performed for system validation and characterization with respect to the MPEG-4 standard for video coding. Results are very encouraging: the PSVC system features the functionalities that make it suitable for PACS, while providing a good trade-off between usability and performance in lossy mode.

  9. Next generation video coding for mobile applications: industry requirements and technologies

    NASA Astrophysics Data System (ADS)

    Budagavi, Madhukar; Zhou, Minhua

    2007-01-01

    Handheld battery-operated consumer electronics devices such as camera phones, digital still cameras, digital camcorders, and personal media players have become very popular in recent years. Video codecs are extensively used in these devices for video capture and/or playback. The annual shipment of such devices already exceeds a hundred million units and is growing, which makes mobile battery-operated video device requirements very important to focus in video coding research and development. This paper highlights the following unique set of requirements for video coding for these applications: low power consumption, high video quality at low complexity, and low cost, and motivates the need for a new video coding standard that enables better trade-offs of power consumption, complexity, and coding efficiency to meet the challenging requirements of portable video devices. This paper also provides a brief overview of some of the video coding technologies being presented in the ITU-T Video Coding Experts Group (VCEG) standardization body for computational complexity reduction and for coding efficiency improvement in a future video coding standard.

  10. Fast motion prediction algorithm for multiview video coding

    NASA Astrophysics Data System (ADS)

    Abdelazim, Abdelrahman; Zhang, Guang Y.; Mein, Stephen J.; Varley, Martin R.; Ait-Boudaoud, Djamel

    2011-06-01

    Multiview Video Coding (MVC) is an extension to the H.264/MPEG-4 AVC video compression standard developed with joint efforts by MPEG/VCEG to enable efficient encoding of sequences captured simultaneously from multiple cameras using a single video stream. The design is therefore aimed at exploiting inter-view dependencies in addition to reducing temporal redundancies. However, this further increases the overall encoding complexity. In this paper, the high correlation between a macroblock and its enclosed partitions is utilised to estimate motion homogeneity, and based on the result inter-view prediction is selectively enabled or disabled. Moreover, the MVC can be divided into three layers in terms of motion prediction: the first being the full- and sub-pixel motion search, the second being the mode selection process, and the third being the repetition of the first and second for inter-view prediction; the proposed algorithm significantly reduces the complexity in all three layers. To assess the proposed algorithm, a comprehensive set of experiments was conducted. The results show that the proposed algorithm significantly reduces the motion estimation time whilst maintaining similar rate-distortion performance, when compared to both the H.264/MVC reference software and recently reported work.
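
    The sketch below illustrates the selective inter-view prediction idea in a very reduced form: if the motion vectors of a macroblock's enclosed partitions are nearly identical (homogeneous motion), the costly inter-view search is skipped. The spread measure, the threshold, and the stand-in search functions are assumptions for the example, not the published algorithm.

    ```python
    import numpy as np

    def motion_is_homogeneous(partition_mvs, thresh=1.0):
        """partition_mvs: array of shape (N, 2) with the MVs of the enclosed
        partitions in quarter-pel units."""
        mvs = np.asarray(partition_mvs, dtype=float)
        spread = np.max(np.abs(mvs - mvs.mean(axis=0)), axis=0).max()
        return spread <= thresh

    def predict_macroblock(partition_mvs, temporal_search, interview_search):
        # Layers 1-2: always run the temporal motion search / mode decision.
        best = temporal_search()
        # Layer 3: only repeat the search for the inter-view reference when the
        # macroblock's motion field is not homogeneous.
        if not motion_is_homogeneous(partition_mvs):
            best = min(best, interview_search())   # keep the lower RD cost
        return best

    # Usage with stand-in search functions returning RD costs
    print(predict_macroblock([[4, 0], [4, 1], [5, 0], [4, 0]],
                             temporal_search=lambda: 120.0,
                             interview_search=lambda: 95.0))
    ```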

  11. A treatment planning code for inverse planning and 3D optimization in hadrontherapy.

    PubMed

    Bourhaleb, F; Marchetto, F; Attili, A; Pittà, G; Cirio, R; Donetti, M; Giordanengo, S; Givehchi, N; Iliescu, S; Krengli, M; La Rosa, A; Massai, D; Pecka, A; Pardo, J; Peroni, C

    2008-09-01

    The therapeutic use of protons and ions, especially carbon ions, is a new technique, and conforming the dose to the target is a challenge due to the energy deposition characteristics of hadron beams. An appropriate treatment planning system (TPS) is strictly necessary to take full advantage of these characteristics. We developed TPS software, ANCOD++, for the evaluation of the optimal conformal dose. ANCOD++ is an analytical code using the voxel-scan technique as an active method to deliver the dose to the patient, and provides treatment plans with both proton and carbon-ion beams. The iterative algorithm, coded in C++ and running on Unix/Linux platforms, allows the determination of the best fluences of the individual beams to obtain an optimal physical dose distribution, delivering a maximum dose to the target volume and a minimum dose to critical structures. The TPS is supported by Monte Carlo simulations with the GEANT3 package to provide the necessary physical lookup tables and to verify the optimized treatment plans. Dose verifications done by means of full Monte Carlo simulations show overall good agreement with the treatment planning calculations. We stress that the purpose of this work is the verification of the physical dose; future work will be dedicated to the radiobiological evaluation of the equivalent biological dose.

  12. FOI-PERFECT code: 3D relaxation MHD modeling and Applications

    NASA Astrophysics Data System (ADS)

    Wang, Gang-Hua; Duan, Shu-Chao; Computational Physics Team

    2016-10-01

    One of the challenges in numerical simulations of electromagnetically driven high energy density (HED) systems is the existence of vacuum region. FOI-PERFECT code adopts a full relaxation magnetohydrodynamic (MHD) model. The electromagnetic part of the conventional model adopts the magnetic diffusion approximation. The vacuum region is approximated by artificially increasing the resistivity. On one hand the phase/group velocity is superluminal and hence non-physical in the vacuum region, on the other hand a diffusion equation with large diffusion coefficient can only be solved by implicit scheme which is difficult to be parallelized and converge. A better alternative is to solve the full electromagnetic equations. Maxwell's equations coupled with the constitutive equation, generalized Ohm's law, constitute a relaxation model. The dispersion relation is given to show its transition from electromagnetic propagation in vacuum to resistive MHD in plasma in a natural way. The phase and group velocities are finite for this system. A better time stepping is adopted to give a 3rd full order convergence in time domain without the stiff relaxation term restriction. Therefore it is convenient for explicit & parallel computations. Some numerical results of FOI-PERFECT code are also given. Project supported by the National Natural Science Foundation of China (Grant No. 11571293) And Foundation of China Academy of Engineering Physics (Grant No. 2015B0201023).

  13. Embedded morphological dilation coding for 2D and 3D images

    NASA Astrophysics Data System (ADS)

    Lazzaroni, Fabio; Signoroni, Alberto; Leonardi, Riccardo

    2002-01-01

    Current wavelet-based image coders obtain high performance thanks to the identification and exploitation of the statistical properties of natural images in the transformed domain. Zerotree-based algorithms, such as Embedded Zerotree Wavelets (EZW) and Set Partitioning In Hierarchical Trees (SPIHT), offer high rate-distortion (RD) coding performance and low computational complexity by exploiting statistical dependencies among insignificant coefficients on hierarchical subband structures. Another possible approach tries to predict the clusters of significant coefficients by means of some form of morphological dilation. An example of a morphology-based coder is the Significance-Linked Connected Component Analysis (SLCCA), which has shown performance comparable to the zerotree-based coders but is not embedded. A new embedded bit-plane coder is proposed here based on morphological dilation of significant coefficients and context-based arithmetic coding. The algorithm is able to exploit both intra-band and inter-band statistical dependencies among significant wavelet coefficients. Moreover, the same approach is used for both two- and three-dimensional wavelet-based image compression. Finally, the algorithms are tested on some 2D images and on a medical volume, comparing the RD results to those obtained with state-of-the-art wavelet-based coders.
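
    A toy significance pass in the spirit of the dilation idea above is sketched below: candidate positions are visited in the dilated neighbourhood of coefficients already known to be significant, which is where new significant coefficients are most likely to appear. Context-based arithmetic coding, sign and refinement passes, and the exact scanning order are omitted; the thresholds and the synthetic subband are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import binary_dilation

    def dilation_pass(coeffs, significant, threshold):
        """Emit significance flags for positions adjacent to the current significant
        set (grown by one dilation step) and update the significance map."""
        flags = []
        frontier = binary_dilation(significant) & ~significant
        for idx in zip(*np.nonzero(frontier)):
            sig = abs(coeffs[idx]) >= threshold
            flags.append(int(sig))       # in a real coder: context-based arithmetic coding
            if sig:
                significant[idx] = True
        return flags, significant

    # Usage on a small synthetic "subband"
    rng = np.random.default_rng(1)
    coeffs = rng.normal(scale=8.0, size=(8, 8))
    T = 2.0 ** np.floor(np.log2(np.abs(coeffs).max()))   # top bit-plane threshold
    significant = np.abs(coeffs) >= T                    # map after the first pass
    flags, significant = dilation_pass(coeffs, significant.copy(), threshold=T / 2)
    print(len(flags), "flags emitted,", sum(flags), "new significant coefficients")
    ```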

  14. An Adaptive Motion Estimation Scheme for Video Coding

    PubMed Central

    Gao, Yuan; Jia, Kebin

    2014-01-01

    The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block-matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to the statistics of the motion vector (MV) distribution. Then, an MV distribution prediction method is designed, including prediction of the size and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is performed with the new search patterns. Experimental results show that more than 50% of the total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can reduce the ME time by up to 20.86% while the rate-distortion performance is not compromised. PMID:24672313
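
    As a loose illustration of the size- and direction-adaptive subregional searching described above (not the actual UMHexagonS modification), the sketch below predicts an MV from neighbouring blocks and then restricts the search to a compact cross for small predicted motion, or to an angular sector of a coarser grid aligned with the predicted direction for larger motion. All pattern shapes and thresholds are invented for the example.

    ```python
    import numpy as np

    def predict_mv(neighbor_mvs):
        # Median predictor from the MVs of already-coded neighbouring blocks
        return np.median(np.asarray(neighbor_mvs, dtype=float), axis=0)

    def adaptive_pattern(pred_mv, small_thresh=2.0, radius=8, sector_deg=60):
        if np.hypot(*pred_mv) <= small_thresh:
            # Small predicted motion: a compact cross around the predictor suffices
            return [np.round(pred_mv + d).astype(int)
                    for d in ([0, 0], [1, 0], [-1, 0], [0, 1], [0, -1])]
        # Larger motion: search only the angular sector around the predicted direction
        ang0 = np.arctan2(pred_mv[1], pred_mv[0])
        pts = []
        for r in (2, 4, radius):
            for da in np.deg2rad([-sector_deg / 2, 0, sector_deg / 2]):
                pts.append(pred_mv + r * np.array([np.cos(ang0 + da), np.sin(ang0 + da)]))
        return [np.round(p).astype(int) for p in pts]

    print(adaptive_pattern(predict_mv([[6, 2], [7, 3], [5, 2]])))
    ```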

  15. Pattern-based video coding with dynamic background modeling

    NASA Astrophysics Data System (ADS)

    Paul, Manoranjan; Lin, Weisi; Lau, Chiew Tong; Lee, Bu-Sung

    2013-12-01

    The existing video coding standard H.264 cannot provide the expected rate-distortion (RD) performance for macroblocks (MBs) containing both moving objects and static background, or for MBs with uncovered (previously occluded) background. The pattern-based video coding (PVC) technique partially addresses the first problem by separating and encoding the moving area and skipping the background area at the block level using binary pattern templates. However, the existing PVC schemes cannot outperform H.264 by a significant margin at high bit rates, due to the small number of MBs classified using the pattern mode. Moreover, neither H.264 nor the PVC scheme can provide the expected RD performance for uncovered background areas, due to the unavailability of the reference areas in the existing approaches. In this paper, we propose a new PVC technique that uses the most common frame in a scene (McFIS) as a reference frame to overcome these problems. Apart from the use of the McFIS as a reference frame, we also introduce a content-dependent pattern generation strategy for better RD performance. The experimental results confirm the superiority of the proposed schemes in comparison with the existing PVC and McFIS-based methods, achieving significant image quality gains over a wide range of bit rates.
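
    The sketch below illustrates the McFIS idea in a very reduced form: a per-pixel mode across the frames of a scene stands in for the paper's dynamic background modeling, and each macroblock is then classified as skippable background, pattern-coded, or fully coded. The thresholds, the 50% moving-area criterion, and the mode-based McFIS construction are simplifying assumptions.

    ```python
    import numpy as np

    def build_mcfis(frames):
        """frames: array (T, H, W) of 8-bit luma; crude McFIS as the per-pixel mode."""
        stack = np.asarray(frames)
        return np.apply_along_axis(lambda v: np.bincount(v, minlength=256).argmax(),
                                   0, stack).astype(np.uint8)

    def classify_mb(block, mcfis_block, diff_thresh=12, moving_frac=0.5):
        moving = np.abs(block.astype(int) - mcfis_block.astype(int)) > diff_thresh
        frac = moving.mean()
        if frac == 0:
            return "skip"            # pure (possibly uncovered) background
        if frac < moving_frac:
            return "pattern"         # code only the moving region via a template
        return "full"                # conventional inter/intra coding

    # Usage on a trivial static scene
    rng = np.random.default_rng(2)
    frames = np.repeat(rng.integers(0, 255, (1, 16, 16), dtype=np.uint8), 8, axis=0)
    mcfis = build_mcfis(frames)
    print(classify_mb(frames[0], mcfis))    # -> "skip"
    ```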

  16. Code and Solution Verification of 3D Numerical Modeling of Flow in the Gust Erosion Chamber

    NASA Astrophysics Data System (ADS)

    Yuen, A.; Bombardelli, F. A.

    2014-12-01

    Erosion microcosms are devices commonly used to investigate the erosion and transport characteristics of sediments at the bed of rivers, lakes, or estuaries. In order to understand the results these devices provide, the bed shear stress and the flow field need to be accurately described. In this research, the UMCES Gust Erosion Microcosm System (U-GEMS) is numerically modeled using the Finite Volume Method. The primary aims are to simulate the bed shear stress distribution at the surface of the sediment core/bottom of the microcosm, and to validate that the U-GEMS produces uniform bed shear stress at the bottom of the microcosm. The mathematical model equations are solved on a Cartesian non-uniform grid. Multiple numerical runs were developed with different input conditions and configurations. Prior to developing the U-GEMS model, the General Moving Objects (GMO) model and different momentum algorithms in the code were verified. Code verification of these solvers was done by simulating the flow inside a top-wall-driven square cavity on different mesh sizes to obtain the order of convergence. The GMO model was used to simulate the top wall in the top-wall-driven square cavity as well as the rotating disk in the U-GEMS. Components simulated with the GMO model were rigid bodies that could have any type of motion. In addition, cross-verification was conducted as results were compared with the numerical results of Ghia et al. (1982), and good agreement was found. Next, CFD results were validated by simulating the flow within the conventional microcosm system without suction and injection. Good agreement was found when the results were compared with the experimental results of Khalili et al. (2008). After the ability of the CFD solver was proven through the above code verification steps, the model was utilized to simulate the U-GEMS. The solution was verified via a classic mesh convergence study on four consecutive mesh sizes; in addition, the Grid Convergence Index (GCI) was calculated and based on

  17. Validation of a Node-Centered Wall Function Model for the Unstructured Flow Code FUN3D

    NASA Technical Reports Server (NTRS)

    Carlson, Jan-Renee; Vasta, Veer N.; White, Jeffery

    2015-01-01

    In this paper, the implementation of two wall function models in the Reynolds-averaged Navier-Stokes (RANS) computational fluid dynamics (CFD) code FUN3D is described. FUN3D is a node-centered method for solving the three-dimensional Navier-Stokes equations on unstructured computational grids. The first wall function model, based on the work of Knopp et al., is used in conjunction with the one-equation turbulence model of Spalart-Allmaras. The second wall function model, also based on the work of Knopp, is used in conjunction with the two-equation k-omega turbulence model of Menter. The wall function models compute the wall momentum and energy flux, which are used to weakly enforce the wall velocity and pressure flux boundary conditions in the mean flow momentum and energy equations. These wall conditions are implemented in an implicit form where the contribution of the wall function model to the Jacobian is also included. The boundary conditions of the turbulence transport equations are enforced explicitly (strongly) on all solid boundaries. The use of the wall function models is demonstrated on four test cases: a flat-plate boundary layer, a subsonic diffuser, a 2D airfoil, and a 3D semi-span wing. Where possible, different near-wall viscous spacing tactics are examined. Iterative residual convergence was obtained in most cases. Solution results are compared with theoretical and experimental data for several variations of grid spacing. In general, very good agreement with the data was achieved.

  18. Predictions of bubbly flows in vertical pipes using two-fluid models in CFDS-FLOW3D code

    SciTech Connect

    Banas, A.O.; Carver, M.B.; Unrau, D.

    1995-09-01

    This paper reports the results of a preliminary study exploring the performance of two sets of two-fluid closure relationships applied to the simulation of turbulent air-water bubbly upflows through vertical pipes. Predictions obtained with the default CFDS-FLOW3D model for dispersed flows were compared with the predictions of a new model (based on the work of Lee), and with the experimental data of Liu. The new model, implemented in the CFDS-FLOW3D code, included additional source terms in the "standard" kappa-epsilon transport equations for the liquid phase, as well as modified model coefficients and wall functions. All simulations were carried out in a 2-D axisymmetric format, collapsing the general multifluid framework of CFDS-FLOW3D to the two-fluid (air-water) case. The newly implemented model consistently improved predictions of radial-velocity profiles of both phases, but failed to accurately reproduce the experimental phase-distribution data. This shortcoming was traced to the neglect of anisotropic effects in the modelling of liquid-phase turbulence. In this sense, the present investigation should be considered as the first step toward the ultimate goal of developing a theoretically sound and universal CFD-type two-fluid model for bubbly flows in channels.

  19. CFD Code Calibration and Inlet-Fairing Effects On a 3D Hypersonic Powered-Simulation Model

    NASA Technical Reports Server (NTRS)

    Huebner, Lawrence D.; Tatum, Kenneth E.

    1993-01-01

    A three-dimensional (3D) computational study has been performed addressing issues related to the wind tunnel testing of a hypersonic powered-simulation model. The study consisted of three objectives. The first objective was to calibrate a state-of-the-art computational fluid dynamics (CFD) code in its ability to predict hypersonic powered-simulation flows by comparing CFD solutions with experimental surface pressure data. Aftbody lower surface pressures were well predicted, but lower surface wing pressures were less accurately predicted. The second objective was to determine the 3D effects on the aftbody created by fairing over the inlet; this was accomplished by comparing the CFD solutions of two closed-inlet powered configurations with a flowing-inlet powered configuration. Although results at four freestream Mach numbers indicate that the exhaust plume tends to isolate the aftbody surface from most forebody flow-field differences, a smooth inlet fairing provides the least aftbody force and moment variation compared to a flowing inlet. The final objective was to predict and understand the 3D characteristics of exhaust plume development at selected points on a representative flight path. Results showed a dramatic effect of plume expansion onto the wings as the freestream Mach number and corresponding nozzle pressure ratio are increased.

  20. DISCO: A 3D Moving-mesh Magnetohydrodynamics Code Designed for the Study of Astrophysical Disks

    NASA Astrophysics Data System (ADS)

    Duffell, Paul C.

    2016-09-01

    This work presents the publicly available moving-mesh magnetohydrodynamics (MHD) code DISCO. DISCO is efficient and accurate at evolving orbital fluid motion in two and three dimensions, especially at high Mach numbers. DISCO employs a moving-mesh approach utilizing a dynamic cylindrical mesh that can shear azimuthally to follow the orbital motion of the gas. The moving mesh removes diffusive advection errors and allows for longer time-steps than a static grid. MHD is implemented in DISCO using an HLLD Riemann solver and a novel constrained transport (CT) scheme that is compatible with the mesh motion. DISCO is tested against a wide variety of problems, which are designed to test its stability, accuracy, and scalability. In addition, several MHD tests are performed which demonstrate the accuracy and stability of the new CT approach, including two tests of the magneto-rotational instability, one testing the linear growth rate and the other following the instability into the fully turbulent regime.

  1. Extension of a three-dimensional viscous wing flow analysis user's manual: VISTA 3-D code

    NASA Technical Reports Server (NTRS)

    Weinberg, Bernard C.; Chen, Shyi-Yaung; Thoren, Stephen J.; Shamroth, Stephen J.

    1990-01-01

    Three-dimensional unsteady viscous effects can significantly influence the performance of fixed and rotary wing aircraft. These effects are important in both flows about helicopter rotors in forward flight and flows about three-dimensional (swept and tapered) supercritical wings. A computational procedure for calculating such flow field was developed. The procedure is based upon an alternating direction technique employing the Linearized Block Implicit method for solving three-dimensional viscous flow problems. In order to demonstrate the viability of this method, two- and three-dimensional problems are computed. These include the flow over a two-dimensional NACA 0012 airfoil under steady and oscillating conditions, and the steady, skewed, three-dimensional flow on a flat plate. Although actual three-dimensional flows over wings were not obtained, the ground work was laid for considering such flows. In this report a description of the computer code is given.

  2. Development of Scientific Simulation 3D Full Wave ICRF Code for Stellarators and Heating/CD Scenarios Development

    SciTech Connect

    Vdovin V.L.

    2005-08-15

    In this report we describe the theory and a 3D full-wave code for wave excitation, propagation, and absorption in three-dimensional (3D) stellarator equilibrium high-beta plasmas in the ion cyclotron range of frequencies (ICRF). This theory forms the basis for the creation of a 3D code, urgently needed for the development of ICRF heating scenarios for the operating LHD, the W7-X and NCSX stellarators under construction, and the projected CSX3 stellarator, as well as for the re-evaluation of ICRF scenarios in operating tokamaks and in ITER. The theory solves the 3D Maxwell-Vlasov antenna-plasma-conducting-shell boundary value problem in the non-orthogonal flux coordinates (Psi, theta, phi), Psi being the magnetic flux function and theta and phi being the poloidal and toroidal angles, respectively. All basic physics, such as wave refraction, reflection, and diffraction, is self-consistently included, along with the fundamental ion and ion-minority cyclotron resonances, the two-ion hybrid resonance, and electron Landau and TTMP absorption. The antenna reactive impedance and loading resistance are also calculated, as urgently needed for antenna-generator matching. This is accomplished in a realistic confining magnetic field that varies in the plasma major-radius direction as well as in the toroidal and poloidal directions, by making use of the wave-induced currents of the hot dense plasma with account of finite Larmor radius effects. We expand the solution in Fourier series over the toroidal (phi) and poloidal (theta) angles and solve the resulting ordinary differential equations in the radial-like Psi coordinate by a finite difference method. The constructed discretization scheme is a divergence-free one, thus retaining the basic properties of the original equations. The Fourier expansion over the angle coordinates makes it possible to correctly construct the ''parallel'' wave number k_//, and thereby to correctly describe ICRF wave absorption by a hot plasma. The toroidal harmonics are tightly coupled with each

  3. New laser driver for physics modeling codes using unstructured 3d grids

    SciTech Connect

    Kaiser, T; Milovich, J L; Prasad, M K; Shestakov, A I

    1999-02-01

    We present a status report on the current state of development, testing and application of a new scheme for laser beam evolution and power deposition on three-dimensional unstructured grids. The scheme is being encapsulated in a C++ library for convenient porting to existing modeling codes. We have added a new ray propagator that is second order in time, allowing rays to refract within computational zones as well as at zone interfaces. In a globally constant free-electron density gradient on a randomized hexahedral mesh, the new integrator produces ray trajectories that agree with analytic results to within machine roundoff. A new method for computing the inverse-bremsstrahlung energy deposition rate that captures its highly non-uniform spatial dependence within a zone has also been added. This allows accurate trajectories without the necessity of sub-stepping in time. Other enhancements (not discussed) include multiple user-configurable beams, computation of the electron oscillation velocity in the laser electric field, and energy-deposition accounting. Results of laser-driven simulations are presented in a companion paper.

  4. Coding tools investigation for next generation video coding based on HEVC

    NASA Astrophysics Data System (ADS)

    Chen, Jianle; Chen, Ying; Karczewicz, Marta; Li, Xiang; Liu, Hongbin; Zhang, Li; Zhao, Xin

    2015-09-01

    The new state-of-the-art video coding standard, H.265/HEVC, has been finalized in 2013 and it achieves roughly 50% bit rate saving compared to its predecessor, H.264/MPEG-4 AVC. This paper provides the evidence that there is still potential for further coding efficiency improvements. A brief overview of HEVC is firstly given in the paper. Then, our improvements on each main module of HEVC are presented. For instance, the recursive quadtree block structure is extended to support larger coding unit and transform unit. The motion information prediction scheme is improved by advanced temporal motion vector prediction, which inherits the motion information of each small block within a large block from a temporal reference picture. Cross component prediction with linear prediction model improves intra prediction and overlapped block motion compensation improves the efficiency of inter prediction. Furthermore, coding of both intra and inter prediction residual is improved by adaptive multiple transform technique. Finally, in addition to deblocking filter and SAO, adaptive loop filter is applied to further enhance the reconstructed picture quality. This paper describes above-mentioned techniques in detail and evaluates their coding performance benefits based on the common test condition during HEVC development. The simulation results show that significant performance improvement over HEVC standard can be achieved, especially for the high resolution video materials.

  5. A Fast Parallel Simulation Code for Interaction between Proto-Planetary Disk and Embedded Proto-Planets: Implementation for 3D Code

    SciTech Connect

    Li, Shengtai; Li, Hui

    2012-06-14

    We develop a 3D simulation code for interaction between the proto-planetary disk and embedded proto-planets. The protoplanetary disk is treated as a three-dimensional (3D), self-gravitating gas whose motion is described by the locally isothermal Navier-Stokes equations in a spherical coordinate centered on the star. The differential equations for the disk are similar to those given in Kley et al. (2009) with a different gravitational potential that is defined in Nelson et al. (2000). The equations are solved by directional split Godunov method for the inviscid Euler equations plus operator-split method for the viscous source terms. We use a sub-cycling technique for the azimuthal sweep to alleviate the time step restriction. We also extend the FARGO scheme of Masset (2000) and modified in Li et al. (2001) to our 3D code to accelerate the transport in the azimuthal direction. Furthermore, we have implemented a reduced 2D (r, {theta}) and a fully 3D self-gravity solver on our uniform disk grid, which extends our 2D method (Li, Buoni, & Li 2008) to 3D. This solver uses a mode cut-off strategy and combines FFT in the azimuthal direction and direct summation in the radial and meridional direction. An initial axis-symmetric equilibrium disk is generated via iteration between the disk density profile and the 2D disk-self-gravity. We do not need any softening in the disk self-gravity calculation as we have used a shifted grid method (Li et al. 2008) to calculate the potential. The motion of the planet is limited on the mid-plane and the equations are the same as given in D'Angelo et al. (2005), which we adapted to the polar coordinates with a fourth-order Runge-Kutta solver. The disk gravitational force on the planet is assumed to evolve linearly with time between two hydrodynamics time steps. The Planetary potential acting on the disk is calculated accurately with a small softening given by a cubic-spline form (Kley et al. 2009). Since the torque is extremely sensitive to

  6. Coded strobing photography: compressive sensing of high speed periodic videos.

    PubMed

    Veeraraghavan, Ashok; Reddy, Dikpal; Raskar, Ramesh

    2011-04-01

    We show that, via temporal modulation, one can observe and capture a high-speed periodic video well beyond the abilities of a low-frame-rate camera. By strobing the exposure with unique sequences within the integration time of each frame, we take coded projections of dynamic events. From a sequence of such frames, we reconstruct a high-speed video of the high-frequency periodic process. Strobing is used in entertainment, medical imaging, and industrial inspection to generate lower beat frequencies. But this is limited to scenes with a detectable single dominant frequency and requires high-intensity lighting. In this paper, we address the problem of sub-Nyquist sampling of periodic signals and show designs to capture and reconstruct such signals. The key result is that for such signals, the Nyquist rate constraint can be imposed on the strobe rate rather than the sensor rate. The technique is based on intentional aliasing of the frequency components of the periodic signal while the reconstruction algorithm exploits recent advances in sparse representations and compressive sensing. We exploit the sparsity of periodic signals in the Fourier domain to develop reconstruction algorithms that are inspired by compressive sensing.
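
    A toy version of the measurement and recovery pipeline described above is sketched below: a fast periodic 1-D signal is integrated through random binary strobe codes to form a small number of frames, and its sparse Fourier representation is recovered by orthogonal matching pursuit. Signal sizes, the strobe statistics, and the choice of OMP (rather than the reconstruction used by the authors) are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    N, M, K = 256, 64, 6                         # fast samples, frames, sparsity budget
    t = np.arange(N) / N
    x = np.cos(2 * np.pi * 17 * t) + 0.5 * np.cos(2 * np.pi * 35 * t)  # periodic scene

    # Measurement: each low-rate frame sums a random strobed subset of the fast samples
    Phi = (rng.random((M, N)) < 0.5).astype(float)
    y = Phi @ x

    # Sparsifying dictionary: inverse DFT (the signal is sparse in frequency)
    F = np.fft.ifft(np.eye(N), axis=0)
    A = Phi @ F

    # Orthogonal matching pursuit for the dominant Fourier coefficients
    residual, support = y.astype(complex), []
    for _ in range(2 * K):                       # conjugate pairs -> 2K atoms
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.real(F[:, support] @ coef)
    print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
    ```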

  7. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation

    PubMed Central

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2013-01-01

    Distributed video coding (DVC) is rapidly increasing in popularity by shifting the complexity from the encoder to the decoder, while, at least in theory, compression performance does not degrade. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with the decoding of the factor-graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly lower complexity compared with the sampling method. PMID:23750314

  9. 2D and 3D stereoscopic videos used as pre-anatomy lab tools improve students' examination performance in a veterinary gross anatomy course.

    PubMed

    Al-Khalili, Sereen M; Coppoc, Gordon L

    2014-01-01

    The hypothesis for the research described in this article was that viewing an interactive two-dimensional (2D) or three-dimensional (3D) stereoscopic pre-laboratory video would improve efficiency and learning in the laboratory. A first-year DVM class was divided into 21 dissection teams of four students each. Primary variables were method of preparation (2D, 3D, or laboratory manual) and dissection region (thorax, abdomen, or pelvis). Teams were randomly assigned to a group (A, B, or C) in a crossover design experiment so that all students experienced each of the modes of preparation, but with different regions of the canine anatomy. All students were instructed to study normal course materials and the laboratory manual, the Guide, before coming to the laboratory session and to use them during the actual dissection as usual. Video groups were given a DVD with an interactive 10-12 minute video to view for the first 30 minutes of the laboratory session, while non-video groups were instructed to review the Guide. All groups were allowed 45 minutes to dissect the assigned section and find a list of assigned structures, after which all groups took a post-dissection quiz and attitudinal survey. The 2D groups performed better than the Guide groups (p=.028) on the post-dissection quiz, despite the fact that only a minority of the 2D-group students studied the Guide as instructed. There was no significant difference (p>.05) between 2D and 3D groups on the post-dissection quiz. Students preferred videos over the Guide.

  10. Modeling Warm Dense Matter Experiments using the 3D ALE-AMR Code and the Move Toward Exascale Computing

    SciTech Connect

    Koniges, A; Eder, E; Liu, W; Barnard, J; Friedman, A; Logan, G; Fisher, A; Masers, N; Bertozzi, A

    2011-11-04

    The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial commissioning in 2012. The final design calls for a 3 MeV, Li+ ion beam, delivered in a bunch with characteristic pulse duration of 1 ns, and transverse dimension of order 1 mm. The NDCX II will be used in studies of material in the warm dense matter (WDM) regime, and ion beam/hydrodynamic coupling experiments relevant to heavy ion based inertial fusion energy. We discuss recent efforts to adapt the 3D ALE-AMR code to model WDM experiments on NDCX II. The code, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR), has physics models that include ion deposition, radiation hydrodynamics, thermal diffusion, anisotropic material strength with material time history, and advanced models for fragmentation. Experiments at NDCX-II will explore the process of bubble and droplet formation (two-phase expansion) of superheated metal solids using ion beams. Experiments at higher temperatures will explore equation of state and heavy ion fusion beam-to-target energy coupling efficiency. Ion beams allow precise control of local beam energy deposition providing uniform volumetric heating on a timescale shorter than that of hydrodynamic expansion. The ALE-AMR code does not have any export control restrictions and is currently running at the National Energy Research Scientific Computing Center (NERSC) at LBNL and has been shown to scale well to thousands of CPUs. New surface tension models that are being implemented and applied to WDM experiments. Some of the approaches use a diffuse interface surface tension model that is based on the advective Cahn-Hilliard equations, which allows for droplet breakup in divergent velocity fields without the need for imposed perturbations. Other methods require seeding or other methods for droplet breakup. We also briefly discuss the effects of the move to exascale computing and related

  11. Sparse Representation with Spatio-Temporal Online Dictionary Learning for Efficient Video Coding.

    PubMed

    Dai, Wenrui; Shen, Yangmei; Tang, Xin; Zou, Junni; Xiong, Hongkai; Chen, Chang Wen

    2016-07-27

    Classical dictionary learning methods for video coding suffer from high computational complexity and impaired coding efficiency because they disregard the underlying distribution of the data. This paper proposes a spatio-temporal online dictionary learning (STOL) algorithm to speed up the convergence rate of dictionary learning with a guarantee on the approximation error. The proposed algorithm incorporates stochastic gradient descent to form a dictionary of pairs of 3-D low-frequency and high-frequency spatio-temporal volumes. In each iteration of the learning process, it randomly selects one sample volume and updates the atoms of the dictionary by minimizing the expected cost, rather than optimizing the empirical cost over the complete training data as batch learning methods, e.g., K-SVD, do. Since the selected volumes are supposed to be i.i.d. samples from the underlying distribution, the decomposition coefficients attained from the trained dictionary are desirable for sparse representation. Theoretically, it is proved that the proposed STOL achieves a better approximation for sparse representation than K-SVD and maintains both structured sparsity and hierarchical sparsity. It is shown to outperform batch gradient descent methods (K-SVD) in terms of convergence speed and computational complexity, and its upper bound on the prediction error is asymptotically equal to the training error. With lower computational complexity, extensive experiments validate that the STOL-based coding scheme achieves performance improvements over H.264/AVC and HEVC, as well as over existing super-resolution-based methods, in rate-distortion performance and visual quality.
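
    A minimal online dictionary-learning loop in the spirit of STOL (but much simplified) is sketched below: each iteration draws one random training volume, sparse-codes it against the current dictionary, and applies a stochastic gradient update to the atoms, in contrast to a K-SVD-style batch pass. The patch size, sparsity level, learning rate, and the greedy OMP coder are assumptions made for the example.

    ```python
    import numpy as np

    def omp(D, x, k):
        """Greedy sparse coding of x over dictionary D with at most k atoms."""
        r, support = x.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(D.T @ r))))
            coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
            r = x - D[:, support] @ coef
        a = np.zeros(D.shape[1])
        a[support] = coef
        return a

    def online_dictionary_learning(samples, n_atoms=64, k=4, lr=0.1, iters=2000, seed=0):
        rng = np.random.default_rng(seed)
        dim = samples.shape[1]
        D = rng.standard_normal((dim, n_atoms))
        D /= np.linalg.norm(D, axis=0)
        for _ in range(iters):
            x = samples[rng.integers(len(samples))]           # one i.i.d. sample volume
            a = omp(D, x, k)
            D += lr * np.outer(x - D @ a, a)                  # SGD step on ||x - D a||^2
            D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12) # keep atoms at unit norm
        return D

    # Usage: vectorized 4x4x4 spatio-temporal luma volumes (random stand-ins here)
    volumes = np.random.default_rng(1).standard_normal((500, 64))
    D = online_dictionary_learning(volumes)
    print(D.shape)
    ```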

  12. A study of the earth radiation budget using a 3D Monte-Carlo radiative transfer code

    NASA Astrophysics Data System (ADS)

    Okata, M.; Nakajima, T.; Sato, Y.; Inoue, T.; Donovan, D. P.

    2013-12-01

    The purpose of this study is to evaluate the earth's radiation budget when data are available from satellite-borne active sensors, i.e. cloud profiling radar (CPR) and lidar, and a multi-spectral imager (MSI) in the project of the Earth Explorer/EarthCARE mission. For this purpose, we first developed forward and backward 3D Monte Carlo radiative transfer codes that can treat a broadband solar flux calculation including thermal infrared emission calculation by k-distribution parameters of Sekiguchi and Nakajima (2008). In order to construct the 3D cloud field, we tried the following three methods: 1) stochastic cloud generated by randomized optical thickness each layer distribution and regularly-distributed tilted clouds, 2) numerical simulations by a non-hydrostatic model with bin cloud microphysics model and 3) Minimum cloud Information Deviation Profiling Method (MIDPM) as explained later. As for the method-2 (numerical modeling method), we employed numerical simulation results of Californian summer stratus clouds simulated by a non-hydrostatic atmospheric model with a bin-type cloud microphysics model based on the JMA NHM model (Iguchi et al., 2008; Sato et al., 2009, 2012) with horizontal (vertical) grid spacing of 100m (20m) and 300m (20m) in a domain of 30km (x), 30km (y), 1.5km (z) and with a horizontally periodic lateral boundary condition. Two different cell systems were simulated depending on the cloud condensation nuclei (CCN) concentration. In the case of horizontal resolution of 100m, regionally averaged cloud optical thickness, , and standard deviation of COT, were 3.0 and 4.3 for pristine case and 8.5 and 7.4 for polluted case, respectively. In the MIDPM method, we first construct a library of pair of observed vertical profiles from active sensors and collocated imager products at the nadir footprint, i.e. spectral imager radiances, cloud optical thickness (COT), effective particle radius (RE) and cloud top temperature (Tc). We then select a

  13. Parameter analysis for a high-gain harmonic generation FEL using a recently developed 3D polychromatic code.

    SciTech Connect

    Biedron, S. G.; Freund, H. P.; Yu, L.-H.

    1999-09-10

    One possible design for a fourth-generation light source is the high-gain harmonic generation (HGHG) free-electron laser (FEL). Here, a coherent seed with a wavelength at a subharmonic of the desired output radiation interacts with the electron beam in an energy-modulating section. This energy modulation is then converted into spatial bunching while traversing a dispersive section (a three-dipole chicane). The final step is passage through a radiative section, an undulator tuned to the desired higher harmonic output wavelength. The coherent seed serves to remove noise and can be at a much lower subharmonic of the output radiation, thus eliminating the concerns found in self-amplified spontaneous emission (SASE) and seeded FELs, respectively. Recently, a 3D code that includes multiple frequencies, multiple undulators (both in number and/or type), quadrupole magnets, and dipole magnets was developed to easily simulate HGHG. Here, a brief review of the HGHG theory, the code development, the Accelerator Test Facility's (ATF) HGHG FEL experimental parameters, and the parameter analysis from simulations of this specific experiment will be discussed.

  14. Runaway electron distributions obtained with the CQL3D Fokker-Planck code under tokamak disruption conditions

    SciTech Connect

    Harvey, R.W.; Chan, V.S.

    1996-12-31

    Runaway of electrons to high energy during plasma disruptions occurs due to large induced toroidal electric fields which tend to maintain the toroidal plasma current, in accord with Lenz's law. This has been observed in many tokamaks. Within the closed flux surfaces, the bounce-averaged CQL3D Fokker-Planck code is well suited to obtain the resulting electron distributions, nonthermal contributions to electrical conductivity, and runaway rates. The time-dependent 2D momentum-space (p∥ and p⊥) distributions are calculated on a radial array of noncircular flux surfaces, including bounce-averaging of the Fokker-Planck equation to account for toroidal trapping effects. In the steady state, the resulting distributions represent a balance between the applied toroidal electric field, relativistic Coulomb collisions, and synchrotron radiation. The code can be run in a mode where the electrons are sourced at low velocity and run off the high-velocity edge of the computational mesh, giving runaway rates at steady state. At small minor radius, the results closely match previous results reported by Kulsrud et al. It is found that the runaway rate has a strong dependence on inverse aspect ratio e, decreasing by a factor of approximately 5 as e increases from 0.0 to 0.3. The code can also be run with a radial diffusion and pinching term, simulating radial transport with plasma pinching to maintain a given density profile. Results show a transport reduction of runaways in the plasma center, and an enhancement towards the edge due to the electrons from the plasma center. Avalanching of runaways due to a knock-on electron source is being included.

  15. Analyzing Structure and Function of Vascularization in Engineered Bone Tissue by Video-Rate Intravital Microscopy and 3D Image Processing

    PubMed Central

    Pang, Yonggang; Tsigkou, Olga; Spencer, Joel A.; Lin, Charles P.; Neville, Craig

    2015-01-01

    Vascularization is a key challenge in tissue engineering. Three-dimensional structure and microcirculation are two fundamental parameters for evaluating vascularization. Microscopic techniques with cellular level resolution, fast continuous observation, and robust 3D postimage processing are essential for evaluation, but have not been applied previously because of technical difficulties. In this study, we report novel video-rate confocal microscopy and 3D postimage processing techniques to accomplish this goal. In an immune-deficient mouse model, vascularized bone tissue was successfully engineered using human bone marrow mesenchymal stem cells (hMSCs) and human umbilical vein endothelial cells (HUVECs) in a poly (d,l-lactide-co-glycolide) (PLGA) scaffold. Video-rate (30 FPS) intravital confocal microscopy was applied in vitro and in vivo to visualize the vascular structure in the engineered bone and the microcirculation of the blood cells. Postimage processing was applied to perform 3D image reconstruction, by analyzing microvascular networks and calculating blood cell viscosity. The 3D volume reconstructed images show that the hMSCs served as pericytes stabilizing the microvascular network formed by HUVECs. Using orthogonal imaging reconstruction and transparency adjustment, both the vessel structure and blood cells within the vessel lumen were visualized. Network length, network intersections, and intersection densities were successfully computed using our custom-developed software. Viscosity analysis of the blood cells provided functional evaluation of the microcirculation. These results show that by 8 weeks, the blood vessels in peripheral areas function quite similarly to the host vessels. However, the viscosity drops about fourfold where it is only 0.8 mm away from the host. In summary, we developed novel techniques combining intravital microscopy and 3D image processing to analyze the vascularization in engineered bone. These techniques have broad

  16. Development and application of a ray-tracing code integrating with 3D equilibrium mapping in LHD ECH experiments

    NASA Astrophysics Data System (ADS)

    Tsujimura, T., Ii; Kubo, S.; Takahashi, H.; Makino, R.; Seki, R.; Yoshimura, Y.; Igami, H.; Shimozuma, T.; Ida, K.; Suzuki, C.; Emoto, M.; Yokoyama, M.; Kobayashi, T.; Moon, C.; Nagaoka, K.; Osakabe, M.; Kobayashi, S.; Ito, S.; Mizuno, Y.; Okada, K.; Ejiri, A.; Mutoh, T.

    2015-11-01

    The central electron temperature has successfully reached up to 7.5 keV in large helical device (LHD) plasmas with a central high ion temperature of 5 keV and a central electron density of 1.3 × 10^19 m^-3. This result was obtained by heating with a newly-installed 154 GHz gyrotron and also the optimisation of injection geometry in electron cyclotron heating (ECH). The optimisation was carried out by using the ray-tracing code ‘LHDGauss’, which was upgraded to include rapid post-processing of the three-dimensional (3D) equilibrium mapping obtained from experiments. For ray-tracing calculations, LHDGauss can automatically read the relevant data registered in the LHD database after a discharge, such as ECH injection settings (e.g. Gaussian beam parameters, target positions, polarisation and ECH power) and Thomson scattering diagnostic data along with the 3D equilibrium mapping data. The equilibrium-mapped electron density and temperature profiles are then extrapolated into the region outside the last closed flux surface. Mode purity, or the ratio between the ordinary mode and the extraordinary mode, is obtained by solving the 1D full-wave equation along the direction of the rays from the antenna to the absorption target point. Using the virtual magnetic flux surfaces, the effects of the modelled density profiles and the magnetic shear at the peripheral region with a given polarisation are taken into account. Power deposition profiles calculated for each Thomson scattering measurement timing are registered in the LHD database. The adjustment of the injection settings for the desired deposition profile, based on the feedback provided on a shot-by-shot basis, resulted in an effective experimental procedure.

  17. Real-time 3D video utilizing a compressed sensing time-of-flight single-pixel camera

    NASA Astrophysics Data System (ADS)

    Edgar, Matthew P.; Sun, Ming-Jie; Gibson, Graham M.; Spalding, Gabriel C.; Phillips, David B.; Padgett, Miles J.

    2016-09-01

    Time-of-flight 3D imaging is an important tool for applications such as remote sensing, machine vision and autonomous navigation. Conventional time-of-flight three-dimensional imaging systems, which use a raster-scanned laser to measure the range of each pixel in the scene sequentially, inherently have acquisition times that scale directly with the resolution. Here we show a modified time-of-flight 3D camera employing structured illumination, which uses a visible camera to enable a novel compressed sensing technique, minimising the acquisition time as well as providing a high-resolution reflectivity map for image overlay. Furthermore, a quantitative assessment of the 3D imaging performance is provided.
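
    The measurement model of such a single-pixel camera can be illustrated with a minimal sketch, under assumed values for the scene size and number of patterns: a bucket detector records one inner product of the scene with each structured-illumination pattern, and the scene is then estimated from fewer measurements than pixels. Here a plain minimum-norm least-squares estimate stands in for the sparsity-promoting (l1/TV) solver a real compressive system would use.

    import numpy as np

    rng = np.random.default_rng(0)

    n = 16 * 16                      # scene resolution (16x16 pixels, flattened) -- assumed
    m = n // 2                       # number of structured-illumination patterns (sub-Nyquist)

    scene = np.zeros(n)
    scene[40:60] = 1.0               # simple sparse test scene

    patterns = rng.choice([-1.0, 1.0], size=(m, n))   # random binary illumination patterns
    measurements = patterns @ scene                   # one bucket-detector value per pattern

    # Minimum-norm least-squares estimate of the scene from the m measurements.
    recon, *_ = np.linalg.lstsq(patterns, measurements, rcond=None)
    print("relative reconstruction error:", np.linalg.norm(recon - scene) / np.linalg.norm(scene))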

  18. Design and implementation of H.264 based embedded video coding technology

    NASA Astrophysics Data System (ADS)

    Mao, Jian; Liu, Jinming; Zhang, Jiemin

    2016-03-01

    In this paper, an embedded system for remote online video monitoring was designed and developed to capture and record real-time conditions in an elevator. To improve the efficiency of video acquisition and processing, the system uses the Samsung S5PV210 chip, which integrates a graphics processing unit, as the core processor. The video is encoded in the H.264 format for efficient storage and transmission. Based on the S5PV210 chip, hardware video coding was investigated, which is more efficient than software coding. Running tests showed that hardware video coding can clearly reduce system cost and produce a smoother video display. It can be widely applied to security surveillance [1].

  19. A unified framework of unsupervised subjective optimized bit allocation for multiple video object coding

    NASA Astrophysics Data System (ADS)

    Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi

    2005-10-01

    MPEG-4 treats a scene as a composition of several objects or so-called video object planes (VOPs) that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects with different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties and psycho-visual characteristics, so that the bit budget can be distributed properly among video objects to improve the perceptual quality of the compressed video. This paper aims to provide an automatic video object priority definition method based on an object-level visual attention model and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of each video object can be obtained automatically instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with the traditional verification model bit allocation and the optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of objects with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.
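
    The intuition of priority-driven bit allocation can be sketched as follows (a simplified stand-in for the paper's optimization framework, with hypothetical object names and weights): the frame's bit budget is split among video objects in proportion to their attention-derived priorities.

    def allocate_bits(total_bits, priorities):
        """Split a frame's bit budget among video objects in proportion to their
        (attention-derived) priority weights. Rounding may leave a few bits over/under."""
        total_w = sum(priorities.values())
        return {obj: int(round(total_bits * w / total_w)) for obj, w in priorities.items()}

    # Example: the foreground speaker gets most of the budget, the background the least.
    print(allocate_bits(48_000, {"speaker": 0.6, "logo": 0.25, "background": 0.15}))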

  20. Low Complexity Mode Decision for 3D-HEVC

    PubMed Central

    Li, Nana; Gan, Yong

    2014-01-01

    High efficiency video coding- (HEVC-) based 3D video coding (3D-HEVC), developed by the joint collaborative team on 3D video coding (JCT-3V) for multiview video and depth maps, is an extension of the HEVC standard. In the test model of 3D-HEVC, variable coding unit (CU) size decision and disparity estimation (DE) are introduced to achieve the highest coding efficiency, at the cost of very high computational complexity. In this paper, a fast mode decision algorithm based on variable-size CU and DE is proposed to reduce 3D-HEVC computational complexity. The basic idea of the method is to utilize the correlation between the depth map and motion activity to identify the prediction modes and regions where variable-size CU and DE are needed, and to enable variable-size CU and DE only in those regions. Experimental results show that the proposed algorithm saves about 43% of the average computational complexity of 3D-HEVC while maintaining almost the same rate-distortion (RD) performance. PMID:25254237
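
    A minimal sketch of such a depth- and motion-based decision rule is given below; the thresholds, block sizes and statistics used here are illustrative assumptions, not the paper's actual criteria.

    import numpy as np

    def needs_full_search(depth_block, mv_block, depth_var_thr=100.0, mv_act_thr=2.0):
        """Enable variable-size CU splitting and disparity estimation only where the
        co-located depth block is inhomogeneous or motion activity is high."""
        depth_var = float(np.var(depth_block))          # depth inhomogeneity
        motion_act = float(np.mean(np.abs(mv_block)))   # mean |MV| over the block
        return depth_var > depth_var_thr or motion_act > mv_act_thr

    # Usage: a flat, static background block skips the exhaustive CU/DE search.
    flat_depth = np.full((64, 64), 128.0)
    still_mvs = np.zeros((8, 8, 2))
    print(needs_full_search(flat_depth, still_mvs))     # False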

  1. Context-adaptive binary arithmetic coding with precise probability estimation and complexity scalability for high-efficiency video coding

    NASA Astrophysics Data System (ADS)

    Karwowski, Damian; Domański, Marek

    2016-01-01

    An improved context-based adaptive binary arithmetic coding (CABAC) is presented. The idea for the improvement is to use a more accurate mechanism for estimation of symbol probabilities in the standard CABAC algorithm. The authors' proposal of such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate saving compared to the original CABAC algorithm. The application of the proposed algorithm marginally affects the complexity of HEVC video encoder, but the complexity of video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives 5% to 7.5% reduction of the decoding time while still maintaining high efficiency in the data compression.
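
    The per-context probability estimator that context-tree weighting builds on can be sketched with the Krichevsky-Trofimov rule below; this is only the single-node building block (the full CTW method, and its integration into CABAC, additionally mixes such estimates over a tree of contexts).

    class KTEstimator:
        """Krichevsky-Trofimov estimate of the probability of the next binary symbol."""
        def __init__(self):
            self.n0 = 0
            self.n1 = 0

        def prob_one(self):
            # KT rule: add 1/2 to each count before normalising.
            return (self.n1 + 0.5) / (self.n0 + self.n1 + 1.0)

        def update(self, bit):
            if bit:
                self.n1 += 1
            else:
                self.n0 += 1

    est = KTEstimator()
    for b in [1, 1, 0, 1, 1, 1, 0, 1]:
        est.update(b)
    print(round(est.prob_one(), 3))   # adapts toward the observed bias for '1'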

  2. Application of the Finite Orbit Width Version of the CQL3D Code to NBI +RF Heating of NSTX Plasma

    NASA Astrophysics Data System (ADS)

    Petrov, Yu. V.; Harvey, R. W.

    2015-11-01

    The CQL3D bounce-averaged Fokker-Planck (FP) code has been upgraded to include Finite-Orbit-Width (FOW) effects. The calculations can be done either with a fast Hybrid-FOW option or with a slower but neoclassically complete full-FOW option. The banana regime neoclassical radial transport appears naturally in the full-FOW version by averaging the local collision coefficients along guiding center orbits, with a proper transformation matrix from local (R, Z) coordinates to the midplane computational coordinates, where the FP equation is solved. In a similar way, the local quasilinear rf diffusion terms give rise to additional radial transport of orbits. The full-FOW version is applied to simulation of ion heating in NSTX plasma. It is demonstrated that it can describe the physics of transport phenomena in plasma with auxiliary heating, in particular, the enhancement of the radial transport of ions by RF heating and the occurrence of the bootstrap current. Because of the bounce-averaging on the FPE, the results are obtained in a relatively short computational time. A typical full-FOW run time is 30 min using 140 MPI cores. Due to an implicit solver, calculations with a large time step (tested up to dt = 0.5 sec) remain stable. Supported by USDOE grants SC0006614, ER54744, and ER44649.

  3. Implementation of a 3D version of ponderomotive guiding center solver in particle-in-cell code OSIRIS

    NASA Astrophysics Data System (ADS)

    Helm, Anton; Vieira, Jorge; Silva, Luis; Fonseca, Ricardo

    2016-10-01

    Laser-driven accelerators have gained increased attention over the past decades. Typical modeling techniques for laser wakefield acceleration (LWFA) are based on particle-in-cell (PIC) simulations. PIC simulations, however, are very computationally expensive due to the disparity of the relevant scales, ranging from the laser wavelength, in the micrometer range, to the acceleration length, currently beyond the ten centimeter range. To minimize the gap between these disparate scales, the ponderomotive guiding center (PGC) algorithm is a promising approach. By describing the evolution of the laser pulse envelope separately, only the scales larger than the plasma wavelength need to be resolved in the PGC algorithm, leading to speedups of several orders of magnitude. Previous work was limited to two dimensions. Here we present the implementation of the 3D version of a PGC solver in the massively parallel, fully relativistic PIC code OSIRIS. We extended the solver to include periodic boundary conditions and parallelization in all spatial dimensions. We present benchmarks for distributed and shared memory parallelization. We also discuss the stability of the PGC solver.

  4. The calculation of static polarizabilities of 1-3D periodic compounds. The implementation in the CRYSTAL code.

    PubMed

    Ferrero, Mauro; Rérat, Michel; Orlando, Roberto; Dovesi, Roberto

    2008-07-15

    The Coupled Perturbed Hartree-Fock (CPHF) scheme has been implemented in the CRYSTAL06 program, which uses a Gaussian-type basis set, for systems periodic in 1D (polymers), 2D (slabs), 3D (crystals) and, as a limiting case, 0D (molecules), which enables comparison with molecular codes. CPHF is applied to the calculation of the polarizability alpha of LiF in different aggregation states: finite and infinite chains, slabs, and the cubic crystal. Correctness of the computational scheme for the various dimensionalities and its numerical efficiency are confirmed by the correct trend of alpha: alpha for a finite linear chain containing N LiF units tends, for large N, to the value for the infinite chain; N parallel chains give the slab value when N is sufficiently large; and N superimposed slabs tend to the bulk value. CPHF results compare well with those obtained with a saw-tooth potential approach, previously implemented in CRYSTAL. High numerical accuracy can easily be achieved at relatively low cost, with the same kind of dependence on the computational parameters as for the SCF cycle. Overall, the cost of one component of the dielectric tensor is roughly the same as for the SCF cycle, and it is dominated by the calculation of two-electron four-center integrals.

  5. A scalable diffraction-based scanning 3D colour video display as demonstrated by using tiled gratings and a vertical diffuser

    PubMed Central

    Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping

    2017-01-01

    A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing. PMID:28304371

  6. A scalable diffraction-based scanning 3D colour video display as demonstrated by using tiled gratings and a vertical diffuser

    NASA Astrophysics Data System (ADS)

    Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping

    2017-03-01

    A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing.

  7. DYNA3D: A nonlinear, explicit, three-dimensional finite element code for solid and structural mechanics, User manual. Revision 1

    SciTech Connect

    Whirley, R.G.; Engelmann, B.E.

    1993-11-01

    This report is the User Manual for the 1993 version of DYNA3D, and also serves as a User Guide. DYNA3D is a nonlinear, explicit, finite element code for analyzing the transient dynamic response of three-dimensional solids and structures. The code is fully vectorized and is available on several computer platforms. DYNA3D includes solid, shell, beam, and truss elements to allow maximum flexibility in modeling physical problems. Many material models are available to represent a wide range of material behavior, including elasticity, plasticity, composites, thermal effects, and rate dependence. In addition, DYNA3D has a sophisticated contact interface capability, including frictional sliding and single surface contact. Rigid materials provide added modeling flexibility. A material model driver with interactive graphics display is incorporated into DYNA3D to permit accurate modeling of complex material response based on experimental data. Along with the DYNA3D Example Problem Manual, this document provides the information necessary to apply DYNA3D to solve a wide range of engineering analysis problems.

  8. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrate 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  9. 3D-Reconstruction of recent volcanic activity from ROV-video, Charles Darwin Seamounts, Cape Verdes

    NASA Astrophysics Data System (ADS)

    Kwasnitschka, T.; Hansteen, T. H.; Kutterolf, S.; Freundt, A.; Devey, C. W.

    2011-12-01

    As well as providing well-localized samples, Remotely Operated Vehicles (ROVs) produce huge quantities of visual data whose potential for geological data mining has seldom, if ever, been fully realized. We present a new workflow to derive essential results of field geology, such as quantitative stratigraphy and tectonic surveying, from ROV-based photo and video material. We demonstrate the procedure on the Charles Darwin Seamounts, a field of small hot spot volcanoes recently identified at a depth of ca. 3500m southwest of the island of Santo Antao in the Cape Verdes. The Charles Darwin Seamounts feature a wide spectrum of volcanic edifices with forms suggestive of scoria cones, lava domes, tuff rings and maar-type depressions, all of comparable dimensions. These forms, coupled with the highly fragmented volcaniclastic samples recovered by dredging, motivated surveying parts of some edifices down to centimeter scale. ROV-based surveys yielded volcaniclastic samples of key structures linked by extensive coverage of stereoscopic photographs and high-resolution video. Based upon the latter, we present our workflow to derive three-dimensional models of outcrops from a single-camera video sequence, allowing quantitative measurements of fault orientation, bedding structure, grain size distribution and photo mosaicking within a geo-referenced framework. With this information we can identify episodes of repetitive eruptive activity at individual volcanic centers and see changes in eruptive style over time, which is highly variable despite the proximity of the centers to each other.

  10. Comparison of the Aerospace Systems Test Reactor loss-of-coolant test data with predictions of the 3D-AIRLOCA code

    SciTech Connect

    Warinner, D.K.

    1983-01-01

    This paper compares the predictions of the revised 3D-AIRLOCA computer code to the data available from the Aerospace Systems Test Reactor's (ASTR's) loss-of-coolant-accident (LOCA) tests run in 1964. The theoretical and experimental hot-spot temperature responses compare remarkably well. In the thirteen cases studied, the irradiation powers varied from 0.4 to 8.87 MW; the irradiation times were 300, 1540, 1800, and 10^4 s. The degree of agreement between the data and predictions provides an experimental validation of the 3D-AIRLOCA code.

  11. TFaNS Tone Fan Noise Design/Prediction System. Volume 1; System Description, CUP3D Technical Documentation and Manual for Code Developers

    NASA Technical Reports Server (NTRS)

    Topol, David A.

    1999-01-01

    TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: The codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files. Cup3D: Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions. AWAKEN: CFD/Measured Wake Postprocessor which reformats CFD wake predictions and/or measured wake data so it can be used by the system. This volume of the report provides technical background for TFaNS including the organization of the system and CUP3D technical documentation. This document also provides information for code developers who must write Acoustic Property Files in the CUP3D format. This report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFaNS Vers. 1.4; Volume III: Evaluation of System Codes.

  12. DCT/DST-based transform coding for intra prediction in image/video coding.

    PubMed

    Saxena, Ankur; Fernandes, Felix C

    2013-10-01

    In this paper, we present a DCT/DST-based transform scheme that applies either the conventional DCT or the type-7 DST for all the video-coding intra-prediction modes: vertical, horizontal, and oblique. Our approach is applicable to any block-based intra prediction scheme in a codec that employs transforms along the horizontal and vertical directions separably. Previously, Han, Saxena, and Rose showed that for the intra-predicted residuals of the horizontal and vertical modes, the DST is the optimal transform with performance close to the KLT. Here, we prove that this is indeed the case for the other, oblique modes. The optimal choice of DCT or DST is based on the intra-prediction mode and requires no additional signaling information or rate-distortion search. The DCT/DST scheme presented in this paper was adopted in the HEVC standardization in March 2011. Further simplifications, especially to reduce implementation complexity, which remove the mode-dependency between DCT and DST and simply always use the DST for 4 × 4 intra luma blocks, were adopted in the HEVC standard in July 2012. Simulation results for the DCT/DST algorithm, obtained with the reference software of the ongoing HEVC standardization, are reported. Our results show that the DCT/DST scheme provides significant BD-rate improvement over the conventional DCT-based scheme for intra prediction in video sequences.
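
    A minimal sketch of mode-dependent transform selection is shown below: a 4 × 4 DST-VII matrix is built explicitly and applied along a direction whose reference boundary was used for prediction, with the DCT-II used otherwise. The mapping from prediction mode to transform choice is simplified here and does not reproduce the exact HEVC rules.

    import numpy as np
    from scipy.fft import dct

    N = 4

    def dst7_matrix(n=N):
        """Orthonormal 4x4 DST-VII basis (the transform HEVC applies to 4x4 intra luma residuals)."""
        k = np.arange(n).reshape(-1, 1)   # frequency index
        m = np.arange(n).reshape(1, -1)   # sample index
        return np.sqrt(4.0 / (2 * n + 1)) * np.sin(np.pi * (2 * m + 1) * (k + 1) / (2 * n + 1))

    def transform_residual(block, vertical_pred=True, horizontal_pred=True):
        """Use DST-VII along a direction whose boundary was used for prediction, DCT-II otherwise."""
        S = dst7_matrix()
        col_t = S @ block if vertical_pred else dct(block, type=2, axis=0, norm="ortho")
        return col_t @ S.T if horizontal_pred else dct(col_t, type=2, axis=1, norm="ortho")

    residual = np.arange(16, dtype=float).reshape(4, 4)   # toy residual growing away from the top-left
    print(np.round(transform_residual(residual), 2))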

  13. Chroma sampling and modulation techniques in high dynamic range video coding

    NASA Astrophysics Data System (ADS)

    Dai, Wei; Krishnan, Madhu; Topiwala, Pankaj

    2015-09-01

    High Dynamic Range and Wide Color Gamut (HDR/WCG) video coding is an area of intense research interest in the engineering community, for potential near-term deployment in the marketplace. HDR greatly enhances the dynamic range of video content (up to 10,000 nits) and broadens the chroma representation (BT.2020). The resulting content offers new challenges in its coding and transmission. The Moving Picture Experts Group (MPEG) of the International Standards Organization (ISO) is currently exploring coding efficiency and/or functionality enhancements of the recently developed HEVC video standard for HDR and WCG content. FastVDO has developed an advanced approach to coding HDR video, based on splitting the HDR signal into a smoothed luminance (SL) signal and an associated base signal (B). Both signals are then chroma downsampled to YFbFr 4:2:0 signals, using advanced resampling filters, and coded using the Main10 profile of the High Efficiency Video Coding (HEVC) standard, which has been developed jointly by ISO/IEC MPEG and ITU-T WP3/16 (VCEG). Our proposal offers both efficient coding and backwards compatibility with the existing HEVC Main10 profile. That is, an existing Main10 decoder can produce a viewable standard dynamic range video, suitable for existing screens. Subjective tests show visible improvement over the anchors. Objective tests show a sizable gain of over 25% in PSNR (RGB domain) on average, for a key set of test clips selected by the ISO/MPEG committee.
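
    As a small illustration of the chroma downsampling step mentioned above (not FastVDO's actual filters), the sketch below reduces a full-resolution chroma plane to 4:2:0 with simple 2 × 2 averaging; a production system would use longer resampling filters.

    import numpy as np

    def downsample_420(chroma):
        """4:4:4 -> 4:2:0 chroma downsampling by 2x2 averaging (illustrative filter only)."""
        h, w = chroma.shape
        c = chroma[:h - h % 2, :w - w % 2].astype(np.float64)   # crop to even dimensions
        return 0.25 * (c[0::2, 0::2] + c[1::2, 0::2] + c[0::2, 1::2] + c[1::2, 1::2])

    cb = np.random.default_rng(0).uniform(0, 1023, size=(8, 8))   # 10-bit-range chroma plane
    print(downsample_420(cb).shape)   # (4, 4)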

  14. Suppressing feedback in a distributed video coding system by employing real field codes

    NASA Astrophysics Data System (ADS)

    Louw, Daniel J.; Kaneko, Haruhiko

    2013-12-01

    Single-view distributed video coding (DVC) is a video compression method that allows for the computational complexity of the system to be shifted from the encoder to the decoder. The reduced encoding complexity makes DVC attractive for use in systems where processing power or energy use at the encoder is constrained, for example, in wireless devices and surveillance systems. One of the biggest challenges in implementing DVC systems is that the required rate must be known at the encoder. The conventional approach is to use a feedback channel from the decoder to control the rate. Feedback channels introduce their own difficulties such as increased latency and buffering requirements, which makes the resultant system unsuitable for some applications. Alternative approaches, which do not employ feedback, suffer from either increased encoder complexity due to performing motion estimation at the encoder, or an inaccurate rate estimate. Inaccurate rate estimates can result in a reduced average rate-distortion performance, as well as unpleasant visual artifacts. In this paper, the authors propose a single-view DVC system that does not require a feedback channel. The consequences of inaccuracies in the rate estimate are addressed by using codes defined over the real field and a decoder employing successive refinement. The result is a codec with performance that is comparable to that of a feedback-based system at low rates without the use of motion estimation at the encoder or a feedback path. The disadvantage of the approach is a reduction in average rate-distortion performance in the high-rate regime for sequences with significant motion.

  15. Utilising E-on Vue and Unity 3D scenes to generate synthetic images and videos for visible signature analysis

    NASA Astrophysics Data System (ADS)

    Madden, Christopher S.; Richards, Noel J.; Culpepper, Joanne B.

    2016-10-01

    This paper investigates the ability to develop synthetic scenes in an image generation tool, E-on Vue, and a gaming engine, Unity 3D, which can be used to generate synthetic imagery of target objects across a variety of conditions in land environments. Developments within these tools and gaming engines have allowed the computer gaming industry to dramatically enhance the realism of the games they develop; however, they utilise shortcuts to ensure that the games run smoothly in real time to create an immersive effect. Whilst these shortcuts may have an impact upon the realism of the synthetic imagery, they do promise a much more time-efficient method of developing imagery of different environmental conditions and of investigating the dynamic aspect of military operations that is currently not evaluated in signature analysis. The results presented investigate how some of the common image metrics used in target acquisition modelling, namely the Δμ1, Δμ2, Δμ3, RSS, and Doyle metrics, perform on the synthetic scenes generated by E-on Vue and Unity 3D compared to real imagery of similar scenes. An exploration of the time required to develop the various aspects of the scene to enhance its realism is included, along with an overview of the difficulties associated with trying to recreate specific locations as a virtual scene. This work is an important start towards utilising virtual worlds for visible signature evaluation, and evaluating how equivalent synthetic imagery is to real photographs.

  16. Joint source coding, transport processing, and error concealment for H.323-based packet video

    NASA Astrophysics Data System (ADS)

    Zhu, Qin-Fan; Kerofsky, Louis

    1998-12-01

    In this paper, we investigate how to adapt different parameters in H.263 source coding, transport processing and error concealment to optimize end-to-end video quality at different bitrates and packet loss rates for H.323-based packet video. First, different intra coding patterns are compared and we show that the contiguous rectangle or square block pattern offers the best performance in terms of video quality in the presence of packet loss. Second, the optimal intra coding frequency is found for different bitrates and packet loss rates. The optimal number of GOB headers to be inserted in the source coding is then determined. The effect of transport processing strategies such as packetization and retransmission is also examined. For packetization, the impact of packet size and the effect of macroblock segmentation on picture quality are investigated. Finally, we show that the dejitter buffering delay can be used to advantage for packet loss recovery with video retransmission without incurring any extra delay.

  17. Test Problems for Reactive Flow HE Model in the ALE3D Code and Limited Sensitivity Study

    SciTech Connect

    Gerassimenko, M.

    2000-03-01

    We document quick-running test problems for a reactive flow model of HE initiation incorporated into ALE3D. A quarter-percent change in projectile velocity changes the outcome from detonation to an HE burn that dies down. We study the sensitivity of the calculated HE behavior to several parameters of practical interest when modeling HE initiation with ALE3D.

  18. Semi-automatic 2D-to-3D conversion of human-centered videos enhanced by age and gender estimation

    NASA Astrophysics Data System (ADS)

    Fard, Mani B.; Bayazit, Ulug

    2014-01-01

    In this work, we propose a feasible 3D video generation method to enable high quality visual perception using a monocular uncalibrated camera. Anthropometric distances between standard facial landmarks are approximated based on the person's age and gender. These measurements are used in a two-stage approach to facilitate the construction of binocular stereo images. Specifically, one view of the background is registered in the initial stage of video shooting. It is followed by an automatically guided displacement of the camera toward its secondary position. At the secondary position, real-time capturing is started and the foreground (viewed person) region is extracted for each frame. After an accurate parallax estimation, the extracted foreground is placed in front of the background image that was captured at the initial position. The constructed full view of the initial position, combined with the view of the secondary (current) position, forms the complete binocular pair during real-time video shooting. The subjective evaluation results indicate a competent depth perception quality for the proposed system.

  19. A multiblock/multizone code (PAB 3D-v2) for the three-dimensional Navier-Stokes equations: Preliminary applications

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.

    1990-01-01

    The development and applications of multiblock/multizone and adaptive grid methodologies for solving the three-dimensional simplified Navier-Stokes equations are described. Adaptive grid and multiblock/multizone approaches are introduced and applied to external and internal flow problems. These new implementations increase the capabilities and flexibility of the PAB3D code in solving flow problems associated with complex geometry.

  20. Robust wireless video transmission employing byte-aligned variable-length turbo code

    NASA Astrophysics Data System (ADS)

    Lee, ChangWoo; Kim, JongWon

    2002-01-01

    Video transmission over the multi-path fading wireless channel has to overcome the inherent vulnerability of compressed video to channel errors. To effectively prevent the corruption of the video stream and its propagation in the spatial and temporal domains, proactive error controls are widely deployed. Among possible candidates, turbo codes are known to exhibit superior error correction performance over fading channels. Ordinary turbo codes, however, are not well suited to the variable-size segments of a video stream. A variant, the byte-aligned variable-length turbo code, is thus proposed and applied to a robust video transmission system. The protection performance of the proposed turbo code is evaluated by applying it to GOB-based variable-size ITU-T H.263+ video packets, where the protection level is controlled based on joint source-channel criteria. The resulting performance comparison with the conventional RCPC code clearly demonstrates the potential of the proposed approach for the time-varying correlated Rayleigh-fading channel.

  1. Robust video transmission based on multiple description scalable coding with EREC

    NASA Astrophysics Data System (ADS)

    Zheng, Haifeng; Yu, Lun; Chen, Chang Wen

    2005-07-01

    This paper presents a multiple description scalable video coding scheme based on overcomplete motion compensated temporal filtering, named MD-OMCTF, for robust video transmission over wireless and packet loss networks. The intrinsic structure of OMCTF and embedded coding with a modified SPIHT algorithm enable us to provide fully scalable properties for the proposed scheme. We show that multiple description coding is very effective in combating channel failures in both Internet and wireless video. The integration of MD with OMCTF allows us to achieve both loss resilience and complete scalability. In order to further improve error resilience to channel bit errors and reduce error propagation in error-prone networks, we apply error resilient entropy coding (EREC) to the multiple bitstreams to gain additional error resilience. With EREC, multiple bitstreams are reorganized into fixed-length slots so that synchronization at the beginning of each bitstream can be automatically obtained at the receiver. The integration of scalable coding and EREC with MDC enables the coded video bitstream to adapt to varying channel conditions and to be resilient to both transmission losses and bit errors. We also develop a corresponding error concealment scheme to recover lost or erroneous information during video transmission. Experimental results show that the proposed scheme is able to achieve robust video transmission over both wireless and packet loss networks.
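
    The slot-filling idea behind EREC can be sketched as follows; this simplified version only tracks how many bits of each variable-length block land in each fixed-length slot, and uses a plain sequential offset search rather than the pseudo-random offset sequence of the original algorithm.

    def erec_pack(block_lengths, offsets=None):
        """Redistribute N variable-length blocks into N equal-length slots so each block
        starts at a known position. Returns the slot size and, per block, a list of
        (slot, bits_placed) pairs."""
        n = len(block_lengths)
        slot_size = -(-sum(block_lengths) // n)          # ceiling of the average length
        free = [slot_size] * n                           # free bits remaining in each slot
        left = list(block_lengths)                       # bits of each block still unplaced
        placement = [[] for _ in range(n)]
        offsets = offsets or range(n)                    # stage-k slot-offset sequence (assumed)
        for k in offsets:
            for i in range(n):
                if left[i] == 0:
                    continue
                j = (i + k) % n                          # candidate slot for this stage
                take = min(left[i], free[j])
                if take > 0:
                    placement[i].append((j, take))
                    left[i] -= take
                    free[j] -= take
        return slot_size, placement

    print(erec_pack([3, 9, 2, 6]))   # slot size 5; long blocks spill into short blocks' slots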

  2. A Randomized Controlled Trial to Test the Effectiveness of an Immersive 3D Video Game for Anxiety Prevention among Adolescents

    PubMed Central

    Scholten, Hanneke; Malmberg, Monique; Lobel, Adam; Engels, Rutger C. M. E.; Granic, Isabela

    2016-01-01

    Adolescent anxiety is debilitating, the most frequently diagnosed adolescent mental health problem, and leads to substantial long-term problems. A randomized controlled trial (n = 138) was conducted to test the effectiveness of a biofeedback video game (Dojo) for adolescents with elevated levels of anxiety. Adolescents (11–15 years old) were randomly assigned to play Dojo or a control game (Rayman 2: The Great Escape). Initial screening for anxiety was done on 1,347 adolescents in five high schools; only adolescents who scored above the “at-risk” cut-off on the Spence Children Anxiety Survey were eligible. Adolescents’ anxiety levels were assessed at pre-test, post-test, and at three month follow-up to examine the extent to which playing Dojo decreased adolescents’ anxiety. The present study revealed equal improvements in anxiety symptoms in both conditions at follow-up and no differences between Dojo and the closely matched control game condition. Latent growth curve models did reveal a steeper decrease of personalized anxiety symptoms (not of total anxiety symptoms) in the Dojo condition compared to the control condition. Moderation analyses did not show any differences in outcomes between boys and girls nor did age differentiate outcomes. The present results are of importance for prevention science, as this was the first full-scale randomized controlled trial testing indicated prevention effects of a video game aimed at reducing anxiety. Future research should carefully consider the choice of control condition and outcome measurements, address the potentially high impact of participants’ expectations, and take critical design issues into consideration, such as individual- versus group-based intervention and contamination issues. PMID:26816292

  3. A Randomized Controlled Trial to Test the Effectiveness of an Immersive 3D Video Game for Anxiety Prevention among Adolescents.

    PubMed

    Scholten, Hanneke; Malmberg, Monique; Lobel, Adam; Engels, Rutger C M E; Granic, Isabela

    2016-01-01

    Adolescent anxiety is debilitating, the most frequently diagnosed adolescent mental health problem, and leads to substantial long-term problems. A randomized controlled trial (n = 138) was conducted to test the effectiveness of a biofeedback video game (Dojo) for adolescents with elevated levels of anxiety. Adolescents (11-15 years old) were randomly assigned to play Dojo or a control game (Rayman 2: The Great Escape). Initial screening for anxiety was done on 1,347 adolescents in five high schools; only adolescents who scored above the "at-risk" cut-off on the Spence Children Anxiety Survey were eligible. Adolescents' anxiety levels were assessed at pre-test, post-test, and at three month follow-up to examine the extent to which playing Dojo decreased adolescents' anxiety. The present study revealed equal improvements in anxiety symptoms in both conditions at follow-up and no differences between Dojo and the closely matched control game condition. Latent growth curve models did reveal a steeper decrease of personalized anxiety symptoms (not of total anxiety symptoms) in the Dojo condition compared to the control condition. Moderation analyses did not show any differences in outcomes between boys and girls nor did age differentiate outcomes. The present results are of importance for prevention science, as this was the first full-scale randomized controlled trial testing indicated prevention effects of a video game aimed at reducing anxiety. Future research should carefully consider the choice of control condition and outcome measurements, address the potentially high impact of participants' expectations, and take critical design issues into consideration, such as individual- versus group-based intervention and contamination issues.

  4. Numerical model of water flow and solute accumulation in vertisols using HYDRUS 2D/3D code

    NASA Astrophysics Data System (ADS)

    Weiss, Tomáš; Dahan, Ofer; Turkeltub, Tuvia

    2015-04-01

    boundary to the wall of the crack (so that the solute can accumulate due to evaporation on the crack block wall, and infiltrating fresh water can push the solute further down) - in order to do so, the HYDRUS 2D/3D code had to be modified by its developers. Unconventionally, the main fitting parameters were the parameters a and n in the soil water retention curve and the saturated hydraulic conductivity. The amount of infiltrated water (within a reasonable range), the infiltration function in the crack and the actual evaporation from the crack were also used as secondary fitting parameters. The model supports the previous findings that a significant amount (~90%) of water from rain events must infiltrate through the crack. It was also noted that infiltration from the crack has to increase with depth and that the highest infiltration rate should occur somewhere between 1-3 m. This paper suggests a new way to model vertisols in semi-arid regions. It also supports previous findings about vertisols, especially the utmost importance of soil cracks as preferential pathways for water and contaminants and the role of soil cracks as deep evaporators.

  5. Sliding-window raptor codes for efficient scalable wireless video broadcasting with unequal loss protection.

    PubMed

    Cataldi, Pasquale; Grangetto, Marco; Tillo, Tammam; Magli, Enrico; Olmo, Gabriella

    2010-06-01

    Digital fountain codes have emerged as a low-complexity alternative to Reed-Solomon codes for erasure correction. The applications of these codes are relevant especially in the field of wireless video, where low encoding and decoding complexity is crucial. In this paper, we introduce a new class of digital fountain codes based on a sliding-window approach applied to Raptor codes. These codes have several properties useful for video applications, and provide better performance than classical digital fountains. Then, we propose an application of sliding-window Raptor codes to wireless video broadcasting using scalable video coding. The rates of the base and enhancement layers, as well as the number of coded packets generated for each layer, are optimized so as to yield the best possible expected quality at the receiver side, and providing unequal loss protection to the different layers according to their importance. The proposed system has been validated in a UMTS broadcast scenario, showing that it improves the end-to-end quality, and is robust towards fluctuations in the packet loss rate.

  6. Low-Complexity Saliency Detection Algorithm for Fast Perceptual Video Coding

    PubMed Central

    Liu, Pengyu; Jia, Kebin

    2013-01-01

    A low-complexity saliency detection algorithm for perceptual video coding is proposed; low-level encoding information is used to characterize visual perception. Firstly, the algorithm employs motion vectors (MVs) to extract the temporal saliency region through fast MV noise filtering and a translational MV checking procedure. Secondly, the spatial saliency region is detected based on optimal prediction mode distributions in I-frames and P-frames. Then, the spatiotemporal saliency detection results are combined to define the video region of interest (VROI). The simulation results show that, compared with other existing algorithms, the proposed algorithm avoids a large amount of computation in the visual perception analysis, performs better in saliency detection for video, and achieves fast saliency detection. It can be used as a part of a standard video codec at medium-to-low bit-rates or combined with other algorithms in fast video coding. PMID:24489495
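
    The motion-vector part of such a scheme can be illustrated with the toy sketch below: the decoded MV field is smoothed to suppress isolated noisy vectors and then thresholded into a temporal saliency mask. The smoothing window and threshold are assumptions, not the paper's exact filtering and checking procedure.

    import numpy as np

    def temporal_saliency(mv_field, mag_thr=1.5):
        """Boolean per-block temporal-saliency mask from a motion-vector field."""
        mag = np.hypot(mv_field[..., 0], mv_field[..., 1])
        # crude 3x3 mean smoothing to suppress isolated noisy vectors
        padded = np.pad(mag, 1, mode="edge")
        smooth = sum(padded[i:i + mag.shape[0], j:j + mag.shape[1]]
                     for i in range(3) for j in range(3)) / 9.0
        return smooth > mag_thr

    mvs = np.zeros((9, 9, 2))
    mvs[3:6, 3:6] = [4, 0]                       # a moving object in the centre
    print(temporal_saliency(mvs).astype(int))    # 1s mark the salient (moving) blocks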

  7. Spatial Pyramid Covariance based Compact Video Code for Robust Face Retrieval in TV-series.

    PubMed

    Li, Yan; Wang, Ruiping; Cui, Zhen; Shan, Shiguang; Chen, Xilin

    2016-10-10

    We address the problem of face video retrieval in TV-series, which searches video clips for the presence of a specific character, given one face track of that character. This is tremendously challenging because, on one hand, faces in TV-series are captured in largely uncontrolled conditions with complex appearance variations, and, on the other hand, the retrieval task typically needs an efficient representation with low time and space complexity. To handle this problem, we propose a compact and discriminative representation for the huge body of video data, named Compact Video Code (CVC). Our method first models the face track by its sample (i.e., frame) covariance matrix to capture the video data variations in a statistical manner. To incorporate discriminative information and obtain a more compact video signature suitable for retrieval, the high-dimensional covariance representation is further encoded as a much lower-dimensional binary vector, which finally yields the proposed CVC. Specifically, each bit of the code, i.e., each dimension of the binary vector, is produced via supervised learning in a max-margin framework, which aims to strike a balance between the discriminability and stability of the code. Besides, we further extend the descriptive granularity of the covariance matrix from the traditional pixel level to the more general patch level, and proceed to propose a novel hierarchical video representation named Spatial Pyramid Covariance (SPC), along with a fast calculation method. Face retrieval experiments on two challenging TV-series video databases, i.e., the Big Bang Theory and Prison Break, demonstrate the competitiveness of the proposed CVC over state-of-the-art retrieval methods. In addition, as a general video matching algorithm, CVC is also evaluated on the traditional video face recognition task on a standard Internet database, i.e., YouTube Celebrities, showing quite promising performance with an extremely compact code of only 128 bits.
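
    The covariance-then-binarize idea can be sketched as follows; here random projections and zero thresholds stand in for the max-margin learned hash functions described in the paper, and the feature dimensionality and code length are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def compact_video_code(track_features, projections, thresholds):
        """Model a face track by the covariance of its per-frame features, then hash the
        vectorised covariance into a short binary code."""
        cov = np.cov(track_features, rowvar=False)        # frames x dims -> dims x dims
        vec = cov[np.triu_indices_from(cov)]              # vectorise the symmetric matrix
        return (projections @ vec > thresholds).astype(np.uint8)

    frames = rng.normal(size=(50, 32))                    # 50 frames, 32-dim features each
    dim = 32 * 33 // 2                                    # length of the vectorised covariance
    bits = compact_video_code(frames, rng.normal(size=(128, dim)), np.zeros(128))
    print(bits.shape, bits[:16])                          # a 128-bit signature for the track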

  8. NIKE3D: an implicit, finite-deformation, finite element code for analyzing the static and dynamic response of three-dimensional solids

    SciTech Connect

    Hallquist, J.O.

    1981-01-01

    A user's manual is provided for NIKE3D, a fully implicit three-dimensional finite element code for analyzing the large deformation static and dynamic response of inelastic solids. A contact-impact algorithm permits gaps and sliding along material interfaces. By a specialization of this algorithm, such interfaces can be rigidly tied to admit variable zoning without the need of transition regions. Spatial discretization is achieved by the use of 8-node constant pressure solid elements. Bandwidth minimization is optional. Post-processors for NIKE3D include GRAPE for plotting deformed shapes and stress contours and DYNAP for plotting time histories.

  9. VTLOGANL: A Computer Program for Coding and Analyzing Data Gathered on Video Tape.

    ERIC Educational Resources Information Center

    Hecht, Jeffrey B.; And Others

    To code and analyze research data on videotape, a methodology is needed that allows the researcher to code directly and then analyze the observed degree of intensity of the observed events. The establishment of such a methodology is the next logical step in the development of the use of video recorded data in research. The Technological…

  10. Real-time transmission of digital video using variable-length coding

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Shalkhauser, Mary JO; Whyte, Wayne A., Jr.

    1993-01-01

    Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
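
    For reference, a compact Python version of the Huffman table construction described above (a software illustration of the algorithm only, separate from the hardware implementation described in the paper):

    import heapq
    from collections import Counter

    def huffman_code(symbols):
        """Build a Huffman code table: frequent symbols get short codewords."""
        freq = Counter(symbols)
        if len(freq) == 1:                       # degenerate single-symbol case
            return {next(iter(freq)): "0"}
        heap = [[count, i, {sym: ""}] for i, (sym, count) in enumerate(freq.items())]
        heapq.heapify(heap)
        counter = len(heap)                      # tie-breaker so code dicts are never compared
        while len(heap) > 1:
            lo = heapq.heappop(heap)
            hi = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in lo[2].items()}
            merged.update({s: "1" + c for s, c in hi[2].items()})
            heapq.heappush(heap, [lo[0] + hi[0], counter, merged])
            counter += 1
        return heap[0][2]

    # Differential video samples cluster around zero, so zero gets the shortest codeword.
    print(huffman_code([0, 0, 0, 0, 1, -1, 0, 2, 0, -1, 0, 0]))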

  11. Spatial resampling of IDR frames for low bitrate video coding with HEVC

    NASA Astrophysics Data System (ADS)

    Hosking, Brett; Agrafiotis, Dimitris; Bull, David; Easton, Nick

    2015-03-01

    As the demand for higher quality and higher resolution video increases, many applications fail to meet this demand due to low bandwidth restrictions. One factor contributing to this problem is the high bitrate requirement of the intra-coded Instantaneous Decoding Refresh (IDR) frames featuring in all video coding standards. Frequent coding of IDR frames is essential for error resilience in order to prevent the occurrence of error propagation. However, as each one consumes a huge portion of the available bitrate, the quality of future coded frames is hindered by high levels of compression. This work presents a new technique, known as Spatial Resampling of IDR Frames (SRIF), and shows how it can increase the rate distortion performance by providing a higher and more consistent level of video quality at low bitrates.

  12. TART 2000: A Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code

    SciTech Connect

    Cullen, D.E

    2000-11-22

    TART2000 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input Preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files.

  13. TART98 a coupled neutron-photon 3-D, combinatorial geometry time dependent Monte Carlo Transport code

    SciTech Connect

    Cullen, D E

    1998-11-22

    TART98 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART98 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART98 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART98 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART98 and its data files.

  14. Layered Wyner-Ziv video coding for transmission over unreliable channels

    NASA Astrophysics Data System (ADS)

    Xu, Qian; Stankovic, Vladimir; Xiong, Zixiang

    2005-07-01

    Based on recent work on Wyner-Ziv coding (or lossy source coding with decoder side information), we consider the case of a noisy channel and address distributed joint source-channel coding, targeting the important application of scalable video transmission over wireless networks. In Wyner-Ziv coding, after quantization, Slepian-Wolf coding (SWC) is used to reduce the rate. SWC is traditionally realized by sending syndromes of a linear channel code. Since syndromes of the channel code can only compress but cannot protect, additional error protection is needed for transmission over noisy channels. However, instead of using one channel code for SWC and one for error protection, our idea is to use a single channel code to achieve both compression and protection. We replace the traditional syndrome-based SWC scheme with a parity-based one, where only parity bits of the Slepian-Wolf channel code are sent. If the amount of transmitted parity bits increases above the Slepian-Wolf limit, the added redundancy is exploited to cope with the noise in the transmission channel. Using IRA codes for practical parity-based SWC, we design a novel layered Wyner-Ziv video coder which is robust to channel failures and thus very suitable for wireless communications. Our simulation results show great advantages of the proposed solution based on joint source-channel coding compared to the traditional approach where source and channel coding are performed separately.
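
    The syndrome-based Slepian-Wolf coding that the paper replaces can be illustrated with a toy (7,4) Hamming code example: the encoder transmits only the 3-bit syndrome of a 7-bit word, and the decoder recovers the word from side information that differs from it in at most one bit. This is only a didactic stand-in for the IRA-code-based, parity-based scheme actually proposed.

    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code; column j is the binary expansion of j+1.
    H = np.array([[int(b) for b in format(j + 1, "03b")] for j in range(7)]).T  # shape (3, 7)

    def swc_encode(x):
        """Syndrome-based Slepian-Wolf encoding: send only the 3-bit syndrome (7 -> 3 bits)."""
        return H @ x % 2

    def swc_decode(syndrome, y):
        """y is side information differing from x in at most one bit; locate and flip it."""
        diff = (H @ y + syndrome) % 2                   # syndrome of the error pattern x ^ y
        x_hat = y.copy()
        if diff.any():
            pos = int("".join(map(str, diff)), 2) - 1   # column index matching the syndrome
            x_hat[pos] ^= 1
        return x_hat

    x = np.array([1, 0, 1, 1, 0, 0, 1])
    y = x.copy()
    y[4] ^= 1                                           # correlated side information, one "error"
    print(swc_decode(swc_encode(x), y))                 # recovers x exactly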

  15. Adaptive λ estimation in Lagrangian rate-distortion optimization for video coding

    NASA Astrophysics Data System (ADS)

    Chen, Lulin; Garbacea, Ilie

    2006-01-01

    In this paper, adaptive Lagrangian multiplier λ estimation in Lagrangian R-D optimization for video coding is presented, based on the ρ-domain linear rate model and distortion model. The analysis shows that λ is a function of rate, distortion, and coding input statistics, and can be written as λ(R, D, σ²) = β(ln(σ²/D) + δ)D/R + k_0, where β, δ, and k_0 are coding constants and σ² is the variance of the prediction-error input. λ(R, D, σ²) describes its ubiquitous relationship with coding statistics and coding input in hybrid video coding such as H.263, MPEG-2/4 and H.264/AVC. The λ evaluation is decoupled from the quantization parameters. The proposed λ estimation enables fine-grained encoder design and encoder control.
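
    For concreteness, a minimal helper that evaluates this closed form is sketched below; the constants β, δ, and k_0 are encoder-specific and the numbers used in the call are placeholders, not values from the paper.

```python
import math

def lagrange_lambda(R, D, sigma2, beta, delta, k0):
    """Adaptive Lagrangian multiplier from the abstract's closed form:
    lambda(R, D, sigma^2) = beta * (ln(sigma^2 / D) + delta) * D / R + k0.
    beta, delta and k0 are encoder-specific constants (placeholders here)."""
    return beta * (math.log(sigma2 / D) + delta) * D / R + k0

# Hypothetical numbers purely to exercise the formula.
print(lagrange_lambda(R=0.5, D=4.0, sigma2=64.0, beta=1.0, delta=0.0, k0=0.0))
```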

  16. Coding order decision of B frames for rate-distortion performance improvement in single-view video and multiview video coding.

    PubMed

    Kang, Je-Won; Lee, Young-Yoon; Kim, Chang-Su; Lee, Sang-Uk

    2010-08-01

    The coding gain that can be achieved by improving the coding order of B frames in the H.264/AVC standard is investigated in this work. We first represent the coding order of B frames and their reference frames with a binary tree. We then formulate a recursive equation to find the binary tree that provides a suboptimal, but very efficient, coding order. The recursive equation is efficiently solved using a dynamic programming method. Furthermore, we extend the coding order improvement technique to the case of multiview video sequences, in which the quadtree representation is used instead of the binary tree representation. Simulation results demonstrate that the proposed algorithm provides significantly better R-D performance than conventional prediction structures.
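
    To make the binary-tree view of a B-frame coding order concrete, the sketch below generates the plain dyadic (hierarchical-B) order for a group of pictures; the paper's contribution is to search over such trees with dynamic programming rather than fixing the midpoint split used here.

```python
def hierarchical_b_order(first, last):
    """Emit a coding order for the B frames between `first` and `last` (exclusive),
    where each B frame references the two already-coded frames that bracket it.
    This is the plain dyadic split, not the R-D optimized tree of the paper."""
    if last - first < 2:
        return []
    mid = (first + last) // 2
    order = [(mid, first, last)]               # (frame, forward ref, backward ref)
    order += hierarchical_b_order(first, mid)  # left subtree
    order += hierarchical_b_order(mid, last)   # right subtree
    return order

# GOP of 9 frames: frames 0 and 8 are coded first, then the B frames below.
for frame, ref0, ref1 in hierarchical_b_order(0, 8):
    print(f"code B frame {frame} referencing {ref0} and {ref1}")
```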

  17. Single-layer HDR video coding with SDR backward compatibility

    NASA Astrophysics Data System (ADS)

    Lasserre, S.; François, E.; Le Léannec, F.; Touzé, D.

    2016-09-01

    The migration from High Definition (HD) TV to Ultra High Definition (UHD) is already underway. In addition to an increase of picture spatial resolution, UHD will bring more color and higher contrast by introducing Wide Color Gamut (WCG) and High Dynamic Range (HDR) video. As both Standard Dynamic Range (SDR) and HDR devices will coexist in the ecosystem, the transition from SDR to HDR will require distribution solutions supporting some level of backward compatibility. This paper presents a new HDR content distribution scheme, named SL-HDR1, using a single-layer codec design and providing SDR compatibility. The solution is based on a pre-encoding HDR-to-SDR conversion, generating a backward-compatible SDR video with side dynamic metadata. The resulting SDR video is then compressed, distributed and decoded using standard-compliant decoders (e.g. HEVC Main 10 compliant). The decoded SDR video can be directly rendered on SDR displays without adaptation. Dynamic metadata of limited size are generated by the pre-processing and used to reconstruct the HDR signal from the decoded SDR video, using a post-processing that is the functional inverse of the pre-processing. Both HDR quality and artistic intent are preserved. Pre- and post-processing are applied independently per picture, do not involve any inter-pixel dependency, and are codec agnostic. Compression performance is shown to be solidly improved compared to the non-backward-compatible approach using the Perceptual Quantization (PQ) Opto-Electronic Transfer Function (OETF), and SDR quality compared to the backward-compatible approach using the Hybrid Log-Gamma (HLG) OETF.
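
    A minimal sketch of the single-layer principle is given below, assuming a toy per-picture power-law tone curve in place of the actual SL-HDR1 transfer functions; the single curve parameter plays the role of the dynamic metadata, and no compression is modeled in the loop.

```python
import numpy as np

# Sketch: the pre-processor derives one parameter per picture (the "metadata"),
# emits an SDR picture, and the post-processor inverts the curve to recover an
# approximation of the HDR signal. The power-law curve is a stand-in, not SL-HDR1.
def preprocess(hdr, peak_sdr=1.0):
    gamma = np.log(0.5) / np.log(hdr.mean() / hdr.max())   # per-picture metadata
    sdr = peak_sdr * (hdr / hdr.max()) ** gamma
    metadata = {"gamma": float(gamma), "hdr_max": float(hdr.max())}
    return sdr, metadata

def postprocess(sdr, metadata, peak_sdr=1.0):
    return metadata["hdr_max"] * (sdr / peak_sdr) ** (1.0 / metadata["gamma"])

hdr = np.random.default_rng(1).uniform(0.005, 1000.0, size=(4, 4))  # linear-light HDR
sdr, meta = preprocess(hdr)
print(np.allclose(postprocess(sdr, meta), hdr))   # True (no coding loss modeled)
```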

  18. A two-level space-time color-coding method for 3D measurements using structured light

    NASA Astrophysics Data System (ADS)

    Xue, Qi; Wang, Zhao; Huang, Junhui; Gao, Jianmin; Qi, Zhaoshuai

    2015-11-01

    Color-coding methods have significantly improved the measurement efficiency of structured light systems. However, some problems, such as color crosstalk and chromatic aberration, decrease the measurement accuracy of the system. A two-level space-time color-coding method is thus proposed in this paper. The method, which includes a space-code level and a time-code level, is shown to be reliable and efficient. The influence of chromatic aberration is completely mitigated when using this method, and a self-adaptive windowed Fourier transform is used to eliminate all color crosstalk components. Theoretical analyses and experiments have shown that the proposed coding method solves the problems of color crosstalk and chromatic aberration effectively, while guaranteeing measurement accuracy very close to that obtained with monochromatic coded patterns.

  19. Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction.

    PubMed

    Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin

    2016-01-01

    High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement differentiated encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems.
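
    As one illustration of ROI-differentiated encoding, the sketch below assigns a lower QP to coding tree units that overlap a diagnostic ROI mask. This is a generic QP-offset scheme, not the paper's transform-coefficient adjustment and QP selection process; the block size and offsets are hypothetical.

```python
import numpy as np

def assign_ctu_qps(roi_mask, base_qp=32, roi_delta=-6, ctu=64):
    """Toy ROI-aware QP map: CTUs that overlap the diagnostic ROI get a lower QP
    (finer quantization); background CTUs keep the base QP."""
    h, w = roi_mask.shape
    rows, cols = -(-h // ctu), -(-w // ctu)          # ceil division
    qp_map = np.full((rows, cols), base_qp, dtype=int)
    for r in range(rows):
        for c in range(cols):
            block = roi_mask[r*ctu:(r+1)*ctu, c*ctu:(c+1)*ctu]
            if block.any():                          # CTU touches the ROI
                qp_map[r, c] = base_qp + roi_delta
    return qp_map

mask = np.zeros((256, 384), dtype=bool)
mask[96:200, 128:300] = True                         # hypothetical ultrasound ROI
print(assign_ctu_qps(mask))
```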

  20. Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction

    PubMed Central

    Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin

    2016-01-01

    High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement differentiated encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems. PMID:27814367

  1. Neural coding of 3D features of objects for hand action in the parietal cortex of the monkey.

    PubMed Central

    Sakata, H; Taira, M; Kusunoki, M; Murata, A; Tanaka, Y; Tsutsui, K

    1998-01-01

    In our previous studies of hand manipulation task-related neurons, we found many neurons of the parietal association cortex which responded to the sight of three-dimensional (3D) objects. Most of the task-related neurons in the AIP area (the lateral bank of the anterior intraparietal sulcus) were visually responsive and half of them responded to objects for manipulation. Most of these neurons were selective for the 3D features of the objects. More recently, we have found binocular visual neurons in the lateral bank of the caudal intraparietal sulcus (c-IPS area) that preferentially respond to a luminous bar or plate at a particular orientation in space. We studied the responses of axis-orientation selective (AOS) neurons and surface-orientation selective (SOS) neurons in this area with stimuli presented on a 3D computer graphics display. The AOS neurons showed a stronger response to elongated stimuli and showed tuning to the orientation of the longitudinal axis. Many of them preferred a tilted stimulus in depth and appeared to be sensitive to orientation disparity and/or width disparity. The SOS neurons showed a stronger response to a flat than to an elongated stimulus and showed tuning to the 3D orientation of the surface. Their responses increased with the width or length of the stimulus. A considerable number of SOS neurons responded to a square in a random dot stereogram and were tuned to orientation in depth, suggesting their sensitivity to the gradient of disparity. We also found several SOS neurons that responded to a square with tilted or slanted contours, suggesting their sensitivity to orientation disparity and/or width disparity. Area c-IPS is likely to send visual signals of the 3D features of an object to area AIP for the visual guidance of hand actions. PMID:9770229

  2. Performance evaluation of the intra compression in the video coding standards

    NASA Astrophysics Data System (ADS)

    Abramowski, Andrzej

    2015-09-01

    The article presents a comparison of the Intra prediction algorithms in the current state-of-the-art video coding standards, including MJPEG 2000, VP8, VP9, H.264/AVC and H.265/HEVC. The effectiveness of techniques employed by each standard is evaluated in terms of compression efficiency and average encoding time. The compression efficiency is measured using BD-PSNR and BD-RATE metrics with H.265/HEVC results as an anchor. Tests are performed on a set of video sequences, composed of sequences gathered by Joint Collaborative Team on Video Coding during the development of the H.265/HEVC standard and 4K sequences provided by Ultra Video Group. According to results, H.265/HEVC provides significant bit-rate savings at the expense of computational complexity, while VP9 may be regarded as a compromise between the efficiency and required encoding time.
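
    The BD-PSNR and BD-RATE figures referred to above come from the Bjøntegaard delta metric. A commonly used form of the BD-RATE computation is sketched below (cubic fit in the log-rate/PSNR plane); this is an assumption about the usual procedure, not the author's exact script, and the R-D points shown are hypothetical.

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Bjontegaard delta rate (BD-RATE) in percent, as commonly computed:
    cubic fit of log10(rate) vs. PSNR, integrated over the shared PSNR range."""
    lr_ref, lr_test = np.log10(rates_ref), np.log10(rates_test)
    p_ref = np.polyfit(psnr_ref, lr_ref, 3)
    p_test = np.polyfit(psnr_test, lr_test, 3)
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_diff = (int_test - int_ref) / (hi - lo)
    return (10 ** avg_diff - 1) * 100            # negative = average bit-rate savings

# Hypothetical four-point R-D curves (kbps, dB) for an anchor and a test codec.
print(bd_rate([1000, 2000, 4000, 8000], [34.0, 36.5, 39.0, 41.5],
              [ 900, 1800, 3600, 7200], [34.2, 36.8, 39.3, 41.8]))
```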

  3. MINVAR: a local optimization criterion for rate-distortion tradeoff in real time video coding

    NASA Astrophysics Data System (ADS)

    Chen, Zhenzhong; Ngan, King Ngi

    2005-10-01

    In this paper, we propose a minimum variation (MINVAR) distortion criterion based approach for the rate distortion tradeoff in video coding. The MINVAR based rate distortion tradeoff framework provides a local optimization strategy as a rate control mechanism in real time video coding applications by minimizing the distortion variation while the corresponding bit rate fluctuation is limited by utilizing the encoder buffer. We use the H.264 video codec to evaluate the performance of the proposed method. As shown in the simulation results, the decoded picture quality of the proposed approach is smoother than that of the traditional H.264 joint model (JM) rate control algorithm. The global video quality, the average PSNR, is maintained while a better subjective visual quality is guaranteed.

  4. A new type of color-coded light structures for an adapted and rapid determination of point correspondences for 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Caulier, Yannick; Bernhard, Luc; Spinnler, Klaus

    2011-05-01

    This paper proposes a new type of color-coded light structures for the inspection of complex moving objects. The novelty of the method lies in the generation of free-form color patterns, permitting the projection of color structures adapted to the geometry of the surfaces to be characterized. The point correspondence determination algorithm consists of a stepwise procedure involving simple and computationally fast methods. The algorithm is therefore robust against varying recording conditions typically arising in real-time quality control environments and can be further integrated for industrial inspection purposes. The proposed approach is validated and compared on the basis of different experiments concerning 3D surface reconstruction by projecting adapted spatial color-coded patterns. It is demonstrated that, for certain inspection requirements, the method permits coding more reference points than similar color-coded matrix methods.

  5. Application of the RNS3D Code to a Circular-Rectangular Transition Duct With and Without Inlet Swirl and Comparison with Experiments

    NASA Technical Reports Server (NTRS)

    Cavicchi, Richard H.

    1999-01-01

    Circular-rectangular transition ducts are used between engine exhausts and nozzles with rectangular cross sections that are designed for high performance aircraft. NASA Glenn Research Center has made experimental investigations of a series of circular-rectangular transition ducts to provide benchmark flow data for comparison with numerical calculations. These ducts are all designed with superellipse cross sections to facilitate grid generation. In response to this challenge, the three-dimensional RNS3D code has been applied to one of these transition ducts. This particular duct has a length-to-inlet diameter ratio of 1.5 and an exit-plane aspect ratio of 3.0. The inlet Mach number is 0.35. Two GRC experiments and the code were run for this duct without inlet swirl. One GRC experiment and the code were also run with inlet swirl. With no inlet swirl the code was successful in predicting pressures and secondary flow conditions, including a pair of counter-rotating vortices at both sidewalls of the exit plane. All these phenomena have been reported from the two GRC experiments. However, these vortices were suppressed in the one experiment when inlet swirl was used; whereas the RNS3D code still predicted them. The experiment was unable to provide data near the sidewalls, the very region where the vortices were predicted.

  6. SummitView 1.0: a code to automatically generate 3D solid models of surface micro-machining based MEMS designs.

    SciTech Connect

    McBride, Cory L. (Elemental Technologies, American Fort, UT); Yarberry, Victor R.; Schmidt, Rodney Cannon; Meyers, Ray J.

    2006-11-01

    This report describes the SummitView 1.0 computer code developed at Sandia National Laboratories. SummitView is designed to generate a 3D solid model, amenable to visualization and meshing, that represents the end state of a microsystem fabrication process such as the SUMMiT (Sandia Ultra-Planar Multilevel MEMS Technology) V process. Functionally, SummitView performs essentially the same computational task as an earlier code called the 3D Geometry modeler [1]. However, because SummitView is based on 2D instead of 3D data structures and operations, it has significant speed and robustness advantages. As input it requires a definition of both the process itself and the collection of individual 2D masks created by the designer and associated with each of the process steps. The definition of the process is contained in a special process definition file [2] and the 2D masks are contained in MEM format files [3]. The code is written in C++ and consists of a set of classes and routines. The classes represent the geometric data and the SUMMiT V process steps. Classes are provided for the following process steps: Planar Deposition, Planar Etch, Conformal Deposition, Dry Etch, Wet Etch and Release Etch. SummitView is built upon the 2D Boolean library GBL-2D [4], and thus contains all of that library's functionality.

  7. Solwnd: A 3D Compressible MHD Code for Solar Wind Studies. Version 1.0: Cartesian Coordinates

    NASA Technical Reports Server (NTRS)

    Deane, Anil E.

    1996-01-01

    Solwnd 1.0 is a three-dimensional compressible MHD code written in Fortran for studying the solar wind. Time-dependent boundary conditions are available. The computational algorithm is based on Flux Corrected Transport and the code is based on the existing code of Zalesak and Spicer. The flow considered is a shear flow, with an incoming flow that perturbs this base flow. Several test cases corresponding to pressure-balanced magnetic structures with velocity shear flow and various inflows, including Alfven waves, are presented. Version 1.0 of solwnd considers a rectangular Cartesian geometry. Future versions of solwnd will consider a spherical geometry. Some discussion of this issue is presented.

  8. Analysis of the beam halo in negative ion sources by using 3D3V PIC code.

    PubMed

    Miyamoto, K; Nishioka, S; Goto, I; Hatayama, A; Hanada, M; Kojima, A; Hiratsuka, J

    2016-02-01

    The physical mechanism of the formation of the negative ion beam halo and the heat loads of the multi-stage acceleration grids are investigated with the 3D PIC (particle in cell) simulation. The following physical mechanism of the beam halo formation is verified: The beam core and the halo consist of the negative ions extracted from the center and the periphery of the meniscus, respectively. This difference of negative ion extraction location results in a geometrical aberration. Furthermore, it is shown that the heat loads on the first acceleration grid and the second acceleration grid are quantitatively improved compared with those for the 2D PIC simulation result.

  9. Analysis of the beam halo in negative ion sources by using 3D3V PIC code

    SciTech Connect

    Miyamoto, K.; Nishioka, S.; Goto, I.; Hatayama, A.; Hanada, M.; Kojima, A.; Hiratsuka, J.

    2016-02-15

    The physical mechanism of the formation of the negative ion beam halo and the heat loads of the multi-stage acceleration grids are investigated with the 3D PIC (particle in cell) simulation. The following physical mechanism of the beam halo formation is verified: The beam core and the halo consist of the negative ions extracted from the center and the periphery of the meniscus, respectively. This difference of negative ion extraction location results in a geometrical aberration. Furthermore, it is shown that the heat loads on the first acceleration grid and the second acceleration grid are quantitatively improved compared with those for the 2D PIC simulation result.

  10. Efficient temporal and interlayer parameter prediction for weighted prediction in scalable high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Tsang, Sik-Ho; Chan, Yui-Lam; Siu, Wan-Chi

    2017-01-01

    Weighted prediction (WP) is an efficient video coding tool introduced with the H.264/AVC video coding standard to compensate for temporal illumination changes in motion estimation and compensation. WP parameters, including a multiplicative weight and an additive offset for each reference frame, must be estimated and transmitted to the decoder in the slice header. These parameters cause extra bits in the coded video bitstream. High efficiency video coding (HEVC) provides WP parameter prediction to reduce the overhead. Therefore, WP parameter prediction is crucial to research works or applications related to WP. Prior work has suggested further improving WP parameter prediction by implicit prediction of image characteristics and derivation of parameters. By exploiting both temporal and interlayer redundancies, we propose three WP parameter prediction algorithms, enhanced implicit WP parameter, enhanced direct WP parameter derivation, and interlayer WP parameter, to further improve the coding efficiency of HEVC. Results show that our proposed algorithms can achieve up to 5.83% and 5.23% bitrate reduction compared to the conventional scalable HEVC in the base layer for SNR scalability and 2× spatial scalability, respectively.
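
    For reference, explicit weighted prediction itself applies a simple affine mapping to each reference block before it is used as a predictor. The sketch below shows that mapping in the style of H.264/HEVC uni-prediction; the parameter names and values are generic, not those of a particular reference encoder.

```python
import numpy as np

def weighted_pred(ref_block, weight, offset, log2_denom=6, bit_depth=8):
    """Explicit weighted prediction in the spirit of H.264/HEVC uni-prediction:
    pred = clip(((weight * ref + rounding) >> log2_denom) + offset)."""
    rounding = 1 << (log2_denom - 1)
    pred = ((weight * ref_block.astype(np.int64) + rounding) >> log2_denom) + offset
    return np.clip(pred, 0, (1 << bit_depth) - 1).astype(np.uint8)

ref = np.full((4, 4), 100, dtype=np.uint8)
# weight 80/64 = 1.25 and offset +3 model a brightening between frames.
print(weighted_pred(ref, weight=80, offset=3))   # every sample becomes 128
```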

  11. Real-time video streaming using H.264 scalable video coding (SVC) in multihomed mobile networks: a testbed approach

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos

    2011-03-01

    Users of the next generation wireless paradigm known as multihomed mobile networks expect satisfactory quality of service (QoS) when accessing streamed multimedia content. The recent H.264 Scalable Video Coding (SVC) extension to the Advanced Video Coding standard (AVC), offers the facility to adapt real-time video streams in response to the dynamic conditions of multiple network paths encountered in multihomed wireless mobile networks. Nevertheless, preexisting streaming algorithms were mainly proposed for AVC delivery over multipath wired networks and were evaluated by software simulation. This paper introduces a practical, hardware-based testbed upon which we implement and evaluate real-time H.264 SVC streaming algorithms in a realistic multihomed wireless mobile networks environment. We propose an optimised streaming algorithm with multi-fold technical contributions. Firstly, we extended the AVC packet prioritisation schemes to reflect the three-dimensional granularity of SVC. Secondly, we designed a mechanism for evaluating the effects of different streamer 'read ahead window' sizes on real-time performance. Thirdly, we took account of the previously unconsidered path switching and mobile networks tunnelling overheads encountered in real-world deployments. Finally, we implemented a path condition monitoring and reporting scheme to facilitate the intelligent path switching. The proposed system has been experimentally shown to offer a significant improvement in PSNR of the received stream compared with representative existing algorithms.

  12. FACET: a radiation view factor computer code for axisymmetric, 2D planar, and 3D geometries with shadowing

    SciTech Connect

    Shapiro, A.B.

    1983-08-01

    The computer code FACET calculates the radiation geometric view factor (alternatively called shape factor, angle factor, or configuration factor) between surfaces for axisymmetric, two-dimensional planar and three-dimensional geometries with interposed third surface obstructions. FACET was developed to calculate view factors for input to finite-element heat-transfer analysis codes. The first section of this report is a brief review of previous radiation-view-factor computer codes. The second section presents the defining integral equation for the geometric view factor between two surfaces and the assumptions made in its derivation. Also in this section are the numerical algorithms used to integrate this equation for the various geometries. The third section presents the algorithms used to detect self-shadowing and third-surface shadowing between the two surfaces for which a view factor is being calculated. The fourth section provides a user's input guide followed by several example problems.
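
    The defining integral the report describes is the standard diffuse view-factor expression between two surfaces, reproduced below for reference.

```latex
% Standard defining integral for the diffuse view factor from surface 1 to surface 2
% (the quantity FACET evaluates numerically); \theta_1 and \theta_2 are the angles
% between the line of length r joining dA_1 and dA_2 and the respective surface normals.
F_{1 \to 2} \;=\; \frac{1}{A_1} \int_{A_1}\!\!\int_{A_2}
\frac{\cos\theta_1 \,\cos\theta_2}{\pi r^{2}}\; dA_2 \, dA_1
```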

  13. Estimation of aortic valve leaflets from 3D CT images using local shape dictionaries and linear coding

    NASA Astrophysics Data System (ADS)

    Liang, Liang; Martin, Caitlin; Wang, Qian; Sun, Wei; Duncan, James

    2016-03-01

    Aortic valve (AV) disease is a significant cause of morbidity and mortality. The preferred treatment modality for severe AV disease is surgical resection and replacement of the native valve with either a mechanical or tissue prosthetic. In order to develop effective and long-lasting treatment methods, computational analyses, e.g., structural finite element (FE) and computational fluid dynamic simulations, are very effective for studying valve biomechanics. These computational analyses are based on mesh models of the aortic valve, which are usually constructed from 3D CT images through many hours of manual annotation, and therefore an automatic valve shape reconstruction method is desired. In this paper, we present a method for estimating the aortic valve shape from 3D cardiac CT images, represented by triangle meshes. We propose a pipeline for aortic valve shape estimation which includes novel algorithms for building local shape dictionaries and for building landmark detectors and curve detectors using local shape dictionaries. The method is evaluated on a real patient image dataset using a leave-one-out approach and achieves an average accuracy of 0.69 mm. The work will facilitate automatic patient-specific computational modeling of the aortic valve.

  14. Development of a 3D FEL code for the simulation of a high-gain harmonic generation experiment.

    SciTech Connect

    Biedron, S. G.

    1999-02-26

    Over the last few years, there has been a growing interest in self-amplified spontaneous emission (SASE) free-electron lasers (FELs) as a means for achieving a fourth-generation light source. In order to correctly and easily simulate the many configurations that have been suggested, such as multi-segmented wigglers and the method of high-gain harmonic generation, we have developed a robust three-dimensional code. The specifics of the code, the comparison to the linear theory as well as future plans will be presented.

  15. Mariage des maillages: A new 3D general relativistic hydro code for simulation of gravitational waves from core-collapses.

    NASA Astrophysics Data System (ADS)

    Novak, Jerome; Dimmelmeier, Harrald; Font-Roda, Jose A.

    2004-12-01

    We present a new three-dimensional general relativistic hydrodynamics code which can be applied to study stellar core collapses and the resulting gravitational radiation. This code uses two different numerical techniques to solve the partial differential equations arising in the model: high-resolution shock capturing (HRSC) schemes for the evolution of hydrodynamic quantities and spectral methods for the solution of the Einstein equations. The equations are written and solved using spherical polar coordinates, best suited to stellar topology. The Einstein equations are formulated within the 3+1 formalism and the conformally flat condition (CFC) for the 3-metric, and gravitational radiation is extracted using the Newtonian quadrupole formulation.

  16. NIKE3D a nonlinear, implicit, three-dimensional finite element code for solid and structural mechanics user's manual update summary

    SciTech Connect

    Puso, M; Maker, B N; Ferencz, R M; Hallquist, J O

    2000-03-24

    This report provides the NIKE3D user's manual update summary for changes made from version 3.0.0 April 24, 1995 to version 3.3.6 March 24,2000. The updates are excerpted directly from the code printed output file (hence the Courier font and formatting), are presented in chronological order and delineated by NIKE3D version number. NIKE3D is a fully implicit three-dimensional finite element code for analyzing the finite strain static and dynamic response of inelastic solids, shells, and beams. Spatial discretization is achieved by the use of 8-node solid elements, 2-node truss and beam elements, and 4-node membrane and shell elements. Thirty constitutive models are available for representing a wide range of elastic, plastic, viscous, and thermally dependent material behavior. Contact-impact algorithms permit gaps, frictional sliding, and mesh discontinuities along material interfaces. Several nonlinear solution strategies are available, including Full-, Modified-, and Quasi-Newton methods. The resulting system of simultaneous linear equations is either solved iteratively by an element-by-element method, or directly by a direct factorization method.

  17. Just noticeable disparity error-based depth coding for three-dimensional video

    NASA Astrophysics Data System (ADS)

    Luo, Lei; Tian, Xiang; Chen, Yaowu

    2014-07-01

    A just noticeable disparity error (JNDE) measurement to describe the maximum tolerated error of depth maps is proposed. Any error of depth value inside the JNDE range would not cause a noticeable distortion observed by human eyes. The JNDE values are used to preprocess the original depth map in the prediction process during the depth coding and to adjust the prediction residues for further improvement of the coding quality. The proposed scheme can be incorporated in any standardized video coding algorithm based on prediction and transform. The experimental results show that the proposed method can achieve a 34% bit rate saving for depth video coding. Moreover, the perceptual quality of the synthesized view is also improved by the proposed method.

  18. Memory bandwidth-scalable motion estimation for mobile video coding

    NASA Astrophysics Data System (ADS)

    Hsieh, Jui-Hung; Tai, Wei-Cheng; Chang, Tian-Sheuan

    2011-12-01

    The heavy memory access of motion estimation (ME) execution consumes significant power and could limit ME execution when the available memory bandwidth (BW) is reduced because of access congestion or changes in the dynamics of the power environment of modern mobile devices. In order to adapt to the changing BW while maintaining the rate-distortion (R-D) performance, this article proposes a novel data BW-scalable algorithm for ME with mobile multimedia chips. The available BW is modeled in a R-D sense and allocated to fit the dynamic contents. The simulation result shows 70% BW savings while keeping equivalent R-D performance compared with H.264 reference software for low-motion CIF-sized video. For high-motion sequences, the result shows our algorithm can better use the available BW to save an average bit rate of up to 13% with up to 0.1-dB PSNR increase for similar BW usage.

  19. Experimental design and analysis of JND test on coded image/video

    NASA Astrophysics Data System (ADS)

    Lin, Joe Yuchieh; Jin, Lina; Hu, Sudeng; Katsavounidis, Ioannis; Li, Zhi; Aaron, Anne; Kuo, C.-C. Jay

    2015-09-01

    The visual Just-Noticeable-Difference (JND) metric is characterized by the minimum detectable difference between two visual stimuli. Conducting the subjective JND test is a labor-intensive task. In this work, we present a novel interactive method for performing the visual JND test on compressed image/video. JND has been used to enhance perceptual visual quality in the context of image/video compression. Given a set of coding parameters, a JND test is designed to determine the distinguishable quality level against a reference image/video, which is called the anchor. The JND metric can be used to save coding bitrates by exploiting the special characteristics of the human visual system. The proposed JND test is conducted using a binary forced choice, which is often adopted to discriminate differences in perception in psychophysical experiments. The assessors are asked to compare coded image/video pairs and determine whether they are of the same quality or not. A bisection procedure is designed to find the JND locations so as to reduce the required number of comparisons over a wide range of bitrates. We demonstrate the efficiency of the proposed JND test and report experimental results on image and video JND tests.
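
    The bisection idea can be summarized as follows: assuming the visibility of coding artifacts is monotone in bitrate, each forced-choice answer halves the remaining search range. In the sketch below the callback `same_quality` stands in for one assessor comparison and is purely illustrative, not part of any published test harness.

```python
def find_jnd_index(bitrates, same_quality):
    """Bisection over an ascending list of candidate bitrates: return the index of
    the lowest bitrate whose coded clip is still judged indistinguishable from the
    anchor. `same_quality(i)` represents one forced-choice comparison."""
    lo, hi = 0, len(bitrates) - 1        # assume the top bitrate is transparent
    while lo < hi:
        mid = (lo + hi) // 2
        if same_quality(mid):            # no visible difference at bitrates[mid]
            hi = mid                     # the JND is at mid or below
        else:
            lo = mid + 1                 # visible difference: JND is above mid
    return lo

bitrates = [250, 500, 1000, 2000, 4000, 8000]          # kbps ladder
oracle = lambda i: bitrates[i] >= 1000                  # pretend 1000 kbps is the JND
print(bitrates[find_jnd_index(bitrates, oracle)])       # -> 1000
```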

  20. Customer oriented SNR scalability scheme for scalable video coding

    NASA Astrophysics Data System (ADS)

    Li, Z. G.; Rahardja, S.

    2005-07-01

    Let the whole region be the whole bit rate range that customers are interested in, and a sub-region be a specific bit rate range. The weighting factor of each sub-region is determined according to customers' interest. A new type of region of interest (ROI) is defined for SNR scalability, such that the gap between the coding efficiency of the SNR scalability scheme and that of state-of-the-art single-layer coding for a sub-region is a monotonically non-increasing function of its weighting factor. This type of ROI is used as a performance index to design a customer-oriented SNR scalability scheme. Our scheme can be used to achieve an optimal customer oriented scalable tradeoff (COST). The profit can thus be maximized.

  1. An open-source Matlab code package for improved rank-reduction 3D seismic data denoising and reconstruction

    NASA Astrophysics Data System (ADS)

    Chen, Yangkang; Huang, Weilin; Zhang, Dong; Chen, Wei

    2016-10-01

    Simultaneous seismic data denoising and reconstruction is currently a popular research subject in modern reflection seismology. Traditional rank-reduction based 3D seismic data denoising and reconstruction algorithms cause strong residual noise in the reconstructed data and thus affect the following processing and interpretation tasks. In this paper, we propose an improved rank-reduction method by modifying the truncated singular value decomposition (TSVD) formula used in the traditional method. The proposed approach can help us obtain nearly perfect reconstruction performance even in the case of low signal-to-noise ratio (SNR). The proposed algorithm is tested on one synthetic example and one field data example. Considering that seismic data interpolation and denoising source packages are seldom in the public domain, we also provide a program template for the rank-reduction based simultaneous denoising and reconstruction algorithm in the form of an open-source Matlab package.
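
    The baseline the paper improves upon is plain truncated-SVD rank reduction, which can be sketched in a few lines (shown here in Python rather than the Matlab of the released package, and without the paper's modified TSVD formula):

```python
import numpy as np

def tsvd_denoise(D, rank):
    """Plain truncated-SVD rank reduction: keep the `rank` largest singular values
    of the data matrix and rebuild it. This is the conventional baseline, not the
    modified TSVD proposed in the paper."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 5, 48)))  # rank-1
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
den = tsvd_denoise(noisy, rank=1)
print(np.linalg.norm(noisy - clean), np.linalg.norm(den - clean))  # error drops
```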

  2. Region-of-interest based rate control for UAV video coding

    NASA Astrophysics Data System (ADS)

    Zhao, Chun-lei; Dai, Ming; Xiong, Jing-ying

    2016-05-01

    To meet the requirement of high-quality transmission of videos captured by unmanned aerial vehicles (UAV) with low bandwidth, a novel rate control (RC) scheme based on region-of-interest (ROI) is proposed. First, the ROI information is sent to an encoder based on the latest high efficiency video coding (HEVC) standard to generate an ROI map. Then, using the ROI map, bit allocation methods are developed at the frame level and the large coding unit (LCU) level to avoid the inaccurate bit allocation produced by camera movement. Finally, using a more robust R-λ model, the quantization parameter (QP) for each LCU is calculated. The experimental results show that the proposed RC method achieves a lower bit-rate error and a higher quality of reconstructed video by choosing appropriate pixel weights on the HEVC platform.

  3. End-to-End Rate-Distortion Optimized MD Mode Selection for Multiple Description Video Coding

    NASA Astrophysics Data System (ADS)

    Heng, Brian A.; Apostolopoulos, John G.; Lim, Jae S.

    2006-12-01

    Multiple description (MD) video coding can be used to reduce the detrimental effects caused by transmission over lossy packet networks. A number of approaches have been proposed for MD coding, where each provides a different tradeoff between compression efficiency and error resilience. How effectively each method achieves this tradeoff depends on the network conditions as well as on the characteristics of the video itself. This paper proposes an adaptive MD coding approach which adapts to these conditions through the use of adaptive MD mode selection. The encoder in this system is able to accurately estimate the expected end-to-end distortion, accounting for both compression and packet loss-induced distortions, as well as for the bursty nature of channel losses and the effective use of multiple transmission paths. With this model of the expected end-to-end distortion, the encoder selects between MD coding modes in a rate-distortion (R-D) optimized manner to most effectively trade off compression efficiency for error resilience. We show how this approach adapts to both the local characteristics of the video and the network conditions, and we demonstrate the resulting performance gains using an H.264-based adaptive MD video coder.

  4. Time-Dependent Distribution Functions in C-Mod Calculated with the CQL3D-Hybrid-FOW, AORSA Full-Wave, and DC Lorentz Codes

    NASA Astrophysics Data System (ADS)

    Harvey, R. W. (Bob); Petrov, Yu. V.; Jaeger, E. F.; Berry, L. A.; Bonoli, P. T.; Bader, A.

    2015-11-01

    A time-dependent simulation of C-Mod pulsed ICRF power is made calculating minority hydrogen ion distribution functions with the CQL3D-Hybrid-FOW finite-orbit-width Fokker-Planck code. ICRF fields are calculated with the AORSA full wave code, and RF diffusion coefficients are obtained from these fields using the DC Lorentz gyro-orbit code. Prior results with a zero-banana-width simulation using the CQL3D/AORSA/DC time-cycles showed a pronounced enhancement of the H distribution in the perpendicular velocity direction compared to results obtained from Stix's quasilinear theory, in general agreement with experiment. The present study compares the new FOW results, including relevant gyro-radius effects, to determine the importance of these effects on the NPA synthetic diagnostic time dependence. The new NPA results give increased agreement with experiment, particularly in the ramp-down time after the ICRF pulse. Funded, through subcontract with Massachusetts Institute of Technology, by USDOE sponsored SciDAC Center for Simulation of Wave-Plasma Interactions.

  5. Revisiting the TORT Solutions to the NEA Suite of Benchmarks for 3D Transport Methods and Codes Over a Range in Parameter Space

    SciTech Connect

    Bekar, Kursat B; Azmy, Yousry

    2009-01-01

    Improved TORT solutions to the NEA suite of benchmarks for 3D transport methods and codes are presented in this study. Preliminary TORT solutions to this benchmark indicate that the majority of benchmark quantities for most benchmark cases are computed with good accuracy, and that accuracy improves with model refinement. However, TORT fails to compute accurate results for some benchmark cases with aspect ratios drastically different from 1, possibly due to ray effects. In this work, we employ the standard approach of splitting the solution to the transport equation into an uncollided flux and a fully collided flux via the code sequence GRTUNCL3D and TORT to mitigate ray effects. The results of this code sequence presented in this paper show that the accuracy of most benchmark cases improved substantially. Furthermore, the iterative convergence problems reported for the preliminary TORT solutions have been resolved by bringing the computational cells' aspect ratio closer to unity and, more importantly, by using 64-bit arithmetic precision in the calculation sequence. Results of this study are also reported.

  6. On the numerical simulation of the ablative Rayleigh-Taylor instability in laser-driven ICF targets using the FastRad3D code

    NASA Astrophysics Data System (ADS)

    Bates, Jason; Schmitt, Andrew; Zalesak, Steve

    2015-11-01

    The ablative Rayleigh-Taylor (RT) instability is a key factor in the performance of directly-driven inertial-confinement-fusion (ICF) targets. Although this subject has been studied for quite some time, the accurate simulation of the ablative RT instability has proven to be a challenging task for many radiation hydrodynamics codes, particularly when it comes to capturing the ablatively-stabilized region of the linear dispersion spectrum and modeling ab initio perturbations. In this poster, we present results from recent two-dimensional numerical simulations of the ablative RT instability that were performed using the Eulerian code FastRad3D at the U.S. Naval Research Laboratory. We consider both planar and spherical geometries, low and moderate-Z target materials, and different laser wavelengths, and where possible we compare our findings with experimental data, linearized theory and/or results from other radiation hydrodynamics codes. Overall, we find that FastRad3D is capable of simulating the ablative RT instability quite accurately, although some uncertainties/discrepancies persist. We discuss these issues, as well as some of the numerical challenges associated with modeling this class of problems. Work supported by U.S. DOE/NNSA.

  7. Initial Self-Consistent 3D Electron-Cloud Simulations of the LHC Beam with the Code WARP+POSINST

    SciTech Connect

    Vay, J; Furman, M A; Cohen, R H; Friedman, A; Grote, D P

    2005-10-11

    We present initial results for the self-consistent beam-cloud dynamics simulations for a sample LHC beam, using a newly developed set of modeling capability based on a merge [1] of the three-dimensional parallel Particle-In-Cell (PIC) accelerator code WARP [2] and the electron-cloud code POSINST [3]. Although the storage ring model we use as a test bed to contain the beam is much simpler and shorter than the LHC, its lattice elements are realistically modeled, as is the beam and the electron cloud dynamics. The simulated mechanisms for generation and absorption of the electrons at the walls are based on previously validated models available in POSINST [3, 4].

  8. TRAC code assessment using data from SCTF Core-III, a large-scale 2D/3D facility

    SciTech Connect

    Boyack, B.E.; Shire, P.R.; Harmony, S.C.; Rhee, G.

    1988-01-01

    Nine tests from the SCTF Core-III configuration have been analyzed using TRAC-PF1/MOD1. The objectives of these assessment activities were to obtain a better understanding of the phenomena occurring during the refill and reflood phases of a large-break loss-of-coolant accident, to determine the accuracy to which key parameters are calculated, and to identify deficiencies in key code correlations and models that provide closure for the differential equations defining thermal-hydraulic phenomena in pressurized water reactors. Overall, the agreement between calculated and measured values of peak cladding temperature is reasonable. In addition, TRAC adequately predicts many of the trends observed in both the integral effect and separate effect tests conducted in SCTF Core-III. The importance of assessment activities that consider potential contributors to discrepancies between the measured and calculated results arising from three sources are described as those related to (1) knowledge about the facility configuration and operation, (2) facility modeling for code input, and (3) deficiencies in code correlations and models. An example is provided. 8 refs., 7 figs., 2 tabs.

  9. P1 adaptation of TRIPOLI-4® code for the use of 3D realistic core multigroup cross section generation

    NASA Astrophysics Data System (ADS)

    Cai, Li; Pénéliau, Yannick; Diop, Cheikh M.; Malvagi, Fausto

    2014-06-01

    In this paper, we discuss some improvements we recently implemented in the Monte Carlo code TRIPOLI-4® associated with the homogenization and collapsing of subassembly cross sections. The improvements offer another approach to obtaining critical multigroup cross sections with the Monte Carlo method. The new calculation method in TRIPOLI-4® aims to preserve the neutronic balances, the multiplication factors and the critical flux spectra for realistic geometries. We do this first by improving the treatment of the energy transfer probability, the neutron excess weight and the neutron fission spectrum; this step is necessary for infinite geometries. The second step, which is elaborated in this paper, aims at better handling the multigroup anisotropy distribution law for finite geometries. Usually, Monte Carlo homogenized multigroup cross sections are validated within a core calculation by a deterministic code. Here, the validation of the multigroup constants is also carried out with a Monte Carlo core calculation code. Different subassemblies are tested with the new collapsing method, especially subassemblies of fast neutron reactors.

  10. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminace correction and optimized prediction

    NASA Astrophysics Data System (ADS)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing natural, real scenes as we see them in the real world every day is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, because more information has to be displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by the design of an efficient transform that reduces the existing redundancy in the stereo image pair. The approach is inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step is replaced by a hybrid step that consists of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
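
    For readers unfamiliar with lifting, the classic predict/update factorization that the hybrid step modifies is sketched below using the integer 5/3 (LeGall) transform on a 1D signal; the periodic boundary handling is a simplification, and the disparity-compensation and luminance-correction stages of the paper are not included.

```python
import numpy as np

def legall53_forward(x):
    """One level of the standard integer 5/3 lifting transform (predict + update),
    with periodic extension for brevity. The paper's hybrid step replaces the
    predict stage with disparity compensation plus luminance correction."""
    even, odd = x[0::2].copy(), x[1::2].copy()
    odd -= (even + np.roll(even, -1)) // 2        # predict: detail = odd - P(even)
    even += (odd + np.roll(odd, 1) + 2) // 4      # update: smooth the approximation
    return even, odd

def legall53_inverse(even, odd):
    even = even - (odd + np.roll(odd, 1) + 2) // 4
    odd = odd + (even + np.roll(even, -1)) // 2
    x = np.empty(even.size + odd.size, dtype=even.dtype)
    x[0::2], x[1::2] = even, odd
    return x

x = np.arange(16, dtype=np.int64) ** 2 % 23       # arbitrary integer test signal
lo, hi = legall53_forward(x)
print(np.array_equal(legall53_inverse(lo, hi), x))  # True: perfect reconstruction
```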

  11. Optimization of high-definition video coding and hybrid fiber-wireless transmission in the 60 GHz band.

    PubMed

    Lebedev, Alexander; Pham, Tien Thang; Beltrán, Marta; Yu, Xianbin; Ukhanova, Anna; Llorente, Roberto; Monroy, Idelfonso Tafur; Forchhammer, Søren

    2011-12-12

    The paper addresses the problem of distribution of high-definition video over fiber-wireless networks. The physical layer architecture with a low-complexity envelope detection solution is investigated. We present both experimental studies and simulation of high-quality, high-definition compressed video transmission over a 60 GHz fiber-wireless link. Using advanced video coding we satisfy low-complexity and low-delay constraints while preserving superb video quality over a significantly extended wireless distance.

  12. Assessment of a 3-D boundary layer code to predict heat transfer and flow losses in a turbine

    NASA Technical Reports Server (NTRS)

    Vatsa, V. N.

    1983-01-01

    The prediction of the complete flow field in a turbine passage is an extremely difficult task due to the complex three dimensional pattern which contains separation and attachment lines, a saddle point and horseshoe vortex. Whereas, in principle such a problem can be solved using full Navier-Stokes equations, in reality methods based on a Navier-Stokes solution procedure encounter difficulty in accurately predicting surface quantities (e.g., heat transfer) due to grid limitations imposed by the speed and size of the existing computers. On the other hand the overall problem is strongly three dimensional and too complex to be analyzed by the current design methods based on inviscid and/or viscous strip theories. Thus there is a strong need for enhancing the current prediction techniques through inclusion of 3-D viscous effects. A potentially simple and cost effective way to achieve this is to use a prediction method based on three dimensional boundary layer (3-DBL) theory. The major objective of this program is to assess the applicability of such a 3-DBL approach for the prediction of heat loads, boundary layer growth, pressure losses and streamline skewing in critical areas of a turbine passage. A brief discussion of the physical problem addressed here along with the overall approach is presented.

  13. A fully-neoclassical finite-orbit-width version of the CQL3D Fokker-Planck code

    NASA Astrophysics Data System (ADS)

    Petrov, Yu V.; Harvey, R. W.

    2016-11-01

    The time-dependent bounce-averaged CQL3D flux-conservative finite-difference Fokker-Planck equation (FPE) solver has been upgraded to include finite-orbit-width (FOW) capabilities which are necessary for an accurate description of neoclassical transport, losses to the walls, and transfer of particles, momentum, and heat to the scrape-off layer. The FOW modifications are implemented in the formulation of the neutral beam source, collision operator, RF quasilinear diffusion operator, and in synthetic particle diagnostics. The collisional neoclassical radial transport appears naturally in the FOW version due to the orbit-averaging of local collision coefficients coupled with transformation coefficients from local (R, Z) coordinates along each guiding-center orbit to the corresponding midplane computational coordinates, where the FPE is solved. In a similar way, the local quasilinear RF diffusion terms give rise to additional radial transport of orbits. We note that the neoclassical results are obtained for ‘full’ orbits, not dependent on a common small orbit-width approximation. Results of validation tests for the FOW version are also presented.

  14. Joint source-channel coding for wireless object-based video communications utilizing data hiding.

    PubMed

    Wang, Haohong; Tsaftaris, Sotirios A; Katsaggelos, Aggelos K

    2006-08-01

    In recent years, joint source-channel coding for multimedia communications has gained increased popularity. However, very limited work has been conducted to address the problem of joint source-channel coding for object-based video. In this paper, we propose a data hiding scheme that improves the error resilience of object-based video by adaptively embedding the shape and motion information into the texture data. Within a rate-distortion theoretical framework, the source coding, channel coding, data embedding, and decoder error concealment are jointly optimized based on knowledge of the transmission channel conditions. Our goal is to achieve the best video quality as expressed by the minimum total expected distortion. The optimization problem is solved using Lagrangian relaxation and dynamic programming. The performance of the proposed scheme is tested using simulations of a Rayleigh-fading wireless channel, and the algorithm is implemented based on the MPEG-4 verification model. Experimental results indicate that the proposed hybrid source-channel coding scheme significantly outperforms methods without data hiding or unequal error protection.

  15. Microdosimetry of alpha particles for simple and 3D voxelised geometries using MCNPX and Geant4 Monte Carlo codes.

    PubMed

    Elbast, M; Saudo, A; Franck, D; Petitot, F; Desbrée, A

    2012-07-01

    Microdosimetry using Monte Carlo simulation is a suitable technique to describe the stochastic nature of energy deposition by alpha particles at the cellular level. Because of its short range, the energy imparted by this particle to the targets is highly non-uniform. Thus, to achieve accurate dosimetric results, the modelling of the geometry should be as realistic as possible. The objectives of the present study were to validate the use of the MCNPX and Geant4 Monte Carlo codes for microdosimetric studies using simple and three-dimensional voxelised geometries and to study their limit of validity in the latter case. To that aim, the specific energy (z) deposited in the cell nucleus, the single-hit density of specific energy f1(z) and the mean specific energy were calculated. Results show a good agreement when compared with the literature using simple geometry. The maximum percentage difference found is <6%. For the voxelised phantom, the study of the voxel size highlighted that the shape of the curve f1(z) obtained with MCNPX for voxel sizes <1 µm differs significantly from the shape obtained with the non-voxelised geometry. With Geant4, little difference is observed regardless of the voxel size. Below 1 µm, the use of Geant4 is required. However, the calculation time is 10 times higher with Geant4 than with the MCNPX code under the same conditions.

  16. Spatial correlation-based side information refinement for distributed video coding

    NASA Astrophysics Data System (ADS)

    Taieb, Mohamed Haj; Chouinard, Jean-Yves; Wang, Demin

    2013-12-01

    Distributed video coding (DVC) architecture designs, based on distributed source coding principles, have benefitted from significant progress lately, notably in terms of achievable rate-distortion performance. However, a significant performance gap still remains when compared to prediction-based video coding schemes such as H.264/AVC. This is mainly due to the non-ideal exploitation of the temporal correlation properties of the video sequence during the generation of side information (SI). In fact, the decoder-side motion estimation provides only an approximation of the true motion. In this paper, a progressive DVC architecture is proposed, which exploits the spatial correlation of the video frames to improve the motion-compensated temporal interpolation (MCTI). Specifically, Wyner-Ziv (WZ) frames are divided into several spatially correlated groups that are then sent progressively to the receiver. SI refinement (SIR) is performed as these groups are decoded, thus providing more accurate SI for the next groups. It is shown that the proposed progressive SIR method leads to significant improvements over the Discover DVC codec as well as other SIR schemes recently introduced in the literature.

  17. An efficient coding scheme for surveillance videos captured by stationary cameras

    NASA Astrophysics Data System (ADS)

    Zhang, Xianguo; Liang, Luhong; Huang, Qian; Liu, Yazhou; Huang, Tiejun; Gao, Wen

    2010-07-01

    In this paper, a new scheme is presented to improve the coding efficiency of sequences captured by stationary (static) cameras for video surveillance applications. We introduce two novel kinds of frames (namely, the background frame and the difference frame) to represent the foreground and background of input frames without object detection, tracking or segmentation. The background frame is built using a background modeling procedure and periodically updated while encoding. The difference frame is calculated from the input frame and the background frame. A sequence structure is proposed to generate high-quality background frames and efficiently code difference frames without delay, so that surveillance videos can be easily compressed by encoding the background frames and difference frames in a traditional manner. In practice, the H.264/AVC encoder JM 16.0 is employed as a built-in coding module to encode those frames. Experimental results on eight indoor and outdoor surveillance videos show that the proposed scheme achieves 0.12 dB~1.53 dB gain in PSNR over the JM 16.0 anchor specially configured for surveillance videos.
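
    A minimal sketch of the two frame types is given below, assuming a temporal-median background model (one simple choice; the paper's exact modeling procedure may differ). The background frame would be coded once at high quality and each difference frame coded conventionally.

```python
import numpy as np

def make_background_and_differences(frames):
    """Sketch of the scheme's two frame types: a background frame built by
    temporal median over a training window, and per-frame difference frames
    that carry only the foreground residual."""
    background = np.median(frames, axis=0).astype(np.int16)
    differences = [f.astype(np.int16) - background for f in frames]
    return background, differences

# The encoder would code `background` once (high quality) and each entry of
# `differences` with a conventional encoder; the decoder adds them back.
rng = np.random.default_rng(2)
frames = np.full((30, 72, 88), 120, dtype=np.uint8)
frames[:, 20:40, 30:50] += rng.integers(0, 40, (30, 20, 20), dtype=np.uint8)  # foreground activity
bg, diffs = make_background_and_differences(frames)
print(bg.shape, diffs[0].dtype, int(np.abs(diffs[0]).mean()))
```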

  18. Transcoding method from H.264/AVC to high efficiency video coding based on similarity of intraprediction, interprediction, and motion vector

    NASA Astrophysics Data System (ADS)

    Liu, Mei-Feng; Zhong, Guo-Yun; He, Xiao-Hai; Qing, Lin-Bo

    2016-09-01

    Currently, most video resources online are encoded in the H.264/AVC format. Smoother video transmission can be obtained if these resources are encoded in the newest international video coding standard: high efficiency video coding (HEVC). In order to improve online video transmission and storage, a transcoding method from H.264/AVC to HEVC is proposed. In this transcoding algorithm, the coding information of intraprediction, interprediction, and motion vectors (MV) in the H.264/AVC video stream is used to accelerate the coding in HEVC. It is found through experiments that the region of interprediction in HEVC overlaps that in H.264/AVC. Therefore, intraprediction in HEVC for regions that are interpredicted in H.264/AVC can be skipped to reduce coding complexity. Several macroblocks in H.264/AVC are combined into one prediction unit (PU) in HEVC when the MV difference between two of the macroblocks in H.264/AVC is lower than a threshold. This method selects only one coding unit depth and one PU mode to reduce the coding complexity. An MV interpolation method for the combined PU in HEVC is proposed according to the areas and distances between the center of one macroblock in H.264/AVC and that of the PU in HEVC. The predicted MV accelerates the motion estimation for HEVC coding. The simulation results show that our proposed algorithm achieves significant coding time reduction with a small rate-distortion loss, compared to existing transcoding algorithms and normal HEVC coding.

  19. Comparison of the 3D VERB Code Simulations of the Dynamic Evolution of the Outer and Inner Radiation Belts With the Reanalysis Obtained from Observations on Multiple Spacecraft

    NASA Astrophysics Data System (ADS)

    Shprits, Y.; Subbotin, D.; Ni, B.; Daae, M.; Kondrashov, D. A.; Hartinger, M.; Kim, K.; Orlova, K.; Nagai, T.; Friedel, R. H.; Chen, Y.

    2010-12-01

    In this study we present simulations of the inner and outer radiation belts using the Versatile Electron Radiation Belt (VERB) code, accounting for radial, pitch-angle, energy, and mixed diffusion. Quasi-linear diffusion coefficients are computed using the Full Diffusion Code (FDC) for day-side and night-side chorus waves, magnetosonic waves, plasmaspheric hiss waves, EMIC and hiss waves in the regions of plumes, lightning-generated whistlers and anthropogenic whistlers. Sensitivity simulations show that knowledge of the wave spectral properties and the spatial distribution of waves is crucially important for reproducing long-term observations. The 3D VERB code simulations are compared to a 3D reanalysis of the radiation belt fluxes obtained by blending the predictive model with observations from LANL GEO, CRRES, Akebono, and GPS. We also discuss the initial results of coupled RCM-VERB simulations. Finally, we present a statistical analysis of radiation belt phase space density obtained from the reanalysis to explore sudden dropouts of the radiation belt fluxes and the location of peaks in phase space density. The application of the developed tools to future measurements on board RBSP is discussed.

  20. Comparison of wavelet and Karhunen-Loeve transforms in video compression applications

    NASA Astrophysics Data System (ADS)

    Musatenko, Yurij S.; Soloveyko, Olexandr M.; Kurashov, Vitalij N.

    1999-12-01

    In the paper we present a comparison of three advanced techniques for video compression: 3D Embedded Zerotree Wavelet (EZW) coding, the recently suggested Optimal Image Coding using the Karhunen-Loeve (KL) transform (OICKL), and a new video compression algorithm based on the 3D EZW coding scheme but using the KL transform for frame decorrelation (3D-EZWKL). It is shown that the OICKL technique provides the best performance and that using the KL transform with the 3D-EZW coding scheme gives better results than the 3D-EZW algorithm alone.
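
    A minimal Python sketch of KL-transform frame decorrelation, assuming the frames of a group are treated as components of a vector-valued signal; the eigenvectors of the inter-frame covariance matrix give the decorrelating basis whose output planes would then be wavelet coded.

        import numpy as np

        def kl_decorrelate(frames):
            # Treat each pixel position as a sample of an N-dimensional vector
            # (N = number of frames) and project onto the eigenvectors of the
            # inter-frame covariance matrix.
            X = np.stack([f.ravel().astype(float) for f in frames])  # (N, pixels)
            mean = X.mean(axis=1, keepdims=True)
            Xc = X - mean
            cov = Xc @ Xc.T / Xc.shape[1]                            # (N, N)
            eigvals, eigvecs = np.linalg.eigh(cov)
            basis = eigvecs[:, np.argsort(eigvals)[::-1]]            # descending energy
            coeffs = basis.T @ Xc                                    # decorrelated planes
            return coeffs.reshape((len(frames),) + frames[0].shape), basis, mean

        rng = np.random.default_rng(1)
        gop = [rng.integers(0, 256, (32, 32)) for _ in range(8)]
        decorrelated, basis, mean = kl_decorrelate(gop)
        # The decorrelated planes would then be fed to the 3D-EZW wavelet coder.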

  1. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the 3-D World Model Building capability developed by Sandia National Laboratories, which provides users with an immersive, texture-rich 3-D model of their environment in minutes using a laptop and a color-and-depth camera.

  2. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the 3-D World Model Building capability developed by Sandia National Laboratories, which provides users with an immersive, texture-rich 3-D model of their environment in minutes using a laptop and a color-and-depth camera.

  3. NASA low-speed centrifugal compressor for 3-D viscous code assessment and fundamental flow physics research

    NASA Technical Reports Server (NTRS)

    Hathaway, M. D.; Wood, J. R.; Wasserbauer, C. A.

    1991-01-01

    A low speed centrifugal compressor facility recently built by the NASA Lewis Research Center is described. The purpose of this facility is to obtain detailed flow field measurements for computational fluid dynamic code assessment and flow physics modeling in support of Army and NASA efforts to advance small gas turbine engine technology. The facility is heavily instrumented with pressure and temperature probes, both in the stationary and rotating frames of reference, and has provisions for flow visualization and laser velocimetry. The facility will accommodate rotational speeds to 2400 rpm and is rated at pressures to 1.25 atm. The initial compressor stage being tested is geometrically and dynamically representative of modern high-performance centrifugal compressor stages with the exception of Mach number levels. Preliminary experimental investigations of inlet and exit flow uniformity and measurement repeatability are presented. These results demonstrate the high quality of the data which may be expected from this facility. The significance of synergism between computational fluid dynamic analysis and experimentation throughout the development of the low speed centrifugal compressor facility is demonstrated.

  4. Optimized sign language video coding based on eye-tracking analysis

    NASA Astrophysics Data System (ADS)

    Agrafiotis, Dimitris; Canagarajah, C. N.; Bull, David R.; Dye, Matt; Twyford, Helen; Kyle, Jim; Chung How, James

    2003-06-01

    The imminent arrival of mobile video telephony will enable deaf people to communicate - as hearing people have been able to do for some time now - anytime/anywhere in their own language, sign language. At low bit rates, coding of sign language sequences is very challenging due to the high level of motion and the need to maintain good image quality to aid understanding. This paper presents optimised coding of sign language video at low bit rates in a way that favours comprehension of the compressed material by deaf users. Our coding suggestions are based on an eye-tracking study that we have conducted, which allows us to analyse the visual attention of sign language viewers. The results of this study are included in this paper. Analysis and results for two coding methods, one using MPEG-4 video objects and the second using foveation filtering, are presented. Results with foveation filtering are very promising, offering a considerable decrease in bit rate in a way which is compatible with the visual attention patterns of deaf people, as these were recorded in the eye-tracking study.
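
    Foveation filtering can be approximated as a space-variant low-pass filter whose strength grows with distance from the fixation point identified in the eye-tracking analysis. The Python sketch below blends a small pyramid of Gaussian-blurred copies by eccentricity; the sigma values and the eccentricity-to-blur mapping are assumptions, not the paper's filter design.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def foveate(image, fixation, sigmas=(0.0, 1.0, 2.0, 4.0), radius=40.0):
            # Space-variant blur: the further a pixel is from the fixation point
            # (assumed to be the signer's face), the stronger the low-pass filtering.
            h, w = image.shape
            yy, xx = np.mgrid[0:h, 0:w]
            ecc = np.hypot(yy - fixation[0], xx - fixation[1])
            idx = np.clip((ecc / radius).astype(int), 0, len(sigmas) - 1)
            pyramid = np.stack([image.astype(float) if s == 0
                                else gaussian_filter(image.astype(float), s)
                                for s in sigmas])
            return np.take_along_axis(pyramid, idx[None], axis=0)[0]

        frame = np.random.default_rng(2).integers(0, 256, (144, 176)).astype(float)
        out = foveate(frame, fixation=(40, 88))  # fixation near the face region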

  5. On scalable lossless video coding based on sub-pixel accurate MCTF

    NASA Astrophysics Data System (ADS)

    Yea, Sehoon; Pearlman, William A.

    2006-01-01

    We propose two approaches to scalable lossless coding of motion video. They achieve an SNR-scalable bitstream up to lossless reconstruction based upon sub-pixel accurate MCTF-based wavelet video coding. The first approach is based upon a two-stage encoding strategy where a lossy reconstruction layer is augmented by a following residual layer in order to obtain (nearly) lossless reconstruction. The key advantages of our approach include an 'on-the-fly' determination of the bit budget distribution between the lossy and the residual layers, freedom to use almost any progressive lossy video coding scheme as the first layer, and an added feature of near-lossless compression. The second approach capitalizes on the fact that we can maintain the invertibility of MCTF with arbitrary sub-pixel accuracy, even in the presence of an extra truncation step for lossless reconstruction, thanks to the lifting implementation. Experimental results show that the proposed schemes achieve compression ratios not obtainable by intra-frame coders such as Motion JPEG-2000, thanks to their inter-frame coding nature. They are also shown to outperform the state-of-the-art non-scalable inter-frame coder H.264 (JM) lossless mode, with the added benefit of bitstream embeddedness.
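
    To illustrate why a lifting implementation keeps the temporal transform exactly invertible even with integer rounding, here is a minimal Python sketch using a plain temporal Haar lifting step between two frames; the sub-pixel motion compensation of the actual MCTF scheme is omitted.

        import numpy as np

        def haar_lift_forward(a, b):
            # Predict and update steps with integer arithmetic; the rounding is
            # inside the lifting steps, so the transform stays exactly invertible.
            h = b.astype(np.int32) - a.astype(np.int32)   # predict (high band)
            l = a.astype(np.int32) + (h >> 1)             # update (low band)
            return l, h

        def haar_lift_inverse(l, h):
            a = l - (h >> 1)
            b = h + a
            return a, b

        rng = np.random.default_rng(7)
        f0 = rng.integers(0, 256, (32, 32), dtype=np.uint8)
        f1 = rng.integers(0, 256, (32, 32), dtype=np.uint8)
        l, h = haar_lift_forward(f0, f1)
        r0, r1 = haar_lift_inverse(l, h)
        assert np.array_equal(r0, f0) and np.array_equal(r1, f1)  # lossless round trip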

  6. Comparison of a 3-D multi-group SN particle transport code with Monte Carlo for intracavitary brachytherapy of the cervix uteri.

    PubMed

    Gifford, Kent A; Wareing, Todd A; Failla, Gregory; Horton, John L; Eifel, Patricia J; Mourtada, Firas

    2009-12-03

    A patient dose distribution was calculated by a 3D multi-group SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multi-group SN particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks-exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between the Attila and MCNPX dose calculations at the ICRU 38 points was within +/- 3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by not more than +/- 1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data showed that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations.

  7. Multiview video codec based on KTA techniques

    NASA Astrophysics Data System (ADS)

    Seo, Jungdong; Kim, Donghyun; Ryu, Seungchul; Sohn, Kwanghoon

    2011-03-01

    Multi-view video coding (MVC) is a video coding standard developed by MPEG and VCEG for multi-view video. It showed an average PSNR gain of 1.5 dB compared with view-independent coding by H.264/AVC. However, because the resolutions of multi-view video are getting higher for a more realistic 3D effect, a high-performance video codec is needed. MVC adopted the hierarchical B-picture structure and inter-view prediction as core techniques. The hierarchical B-picture structure removes the temporal redundancy, and the inter-view prediction reduces the inter-view redundancy by compensated prediction from the reconstructed neighboring views. Nevertheless, MVC has an inherent limitation in coding efficiency, because it is based on H.264/AVC. To overcome this limit, an enhanced video codec for multi-view video based on the Key Technology Area (KTA) is proposed. KTA is a high-efficiency video codec from the Video Coding Experts Group (VCEG), developed to achieve coding efficiency beyond H.264/AVC. The KTA software showed better coding gain than H.264/AVC by using additional coding techniques. These techniques and the inter-view prediction are implemented in the proposed codec, which showed a high coding gain compared with the view-independent coding result by KTA. The results show that inter-view prediction can achieve higher efficiency in a multi-view video codec based on a high-performance video codec such as HEVC.

  8. Mixture block coding with progressive transmission in packet video. Appendix 1: Item 2. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung

    1989-01-01

    Video transmission will become an important part of future multimedia communication because of dramatically increasing user demand for video and the rapid evolution of coding algorithms and VLSI technology. Video transmission will be part of the broadband integrated services digital network (B-ISDN). Asynchronous transfer mode (ATM) is a viable candidate for implementation of B-ISDN due to its inherent flexibility, service independence, and high performance. According to the characteristics of ATM, the information has to be coded into discrete cells which travel independently in the packet switching network. A practical realization of an ATM video codec called Mixture Block Coding with Progressive Transmission (MBCPT) is presented. This variable bit rate coding algorithm shows how constant-quality performance can be obtained according to user demand. Interactions between the codec and the network are emphasized, including packetization, service synchronization, flow control, and error recovery. Finally, some simulation results based on MBCPT coding with error recovery are presented.

  9. An edge-based temporal error concealment for MPEG-coded video

    NASA Astrophysics Data System (ADS)

    Huang, Yu-Len; Lien, Hsiu-Yi

    2005-07-01

    When transmitted over unreliable channels, compressed video can suffer severe degradation. Several strategies have been employed to maintain an acceptable quality of the decoded image sequence. Error concealment (EC) is one of the effective approaches to diminish the quality degradation. A number of EC algorithms have been developed to combat transmission errors for MPEG-coded video. These methods generally work well to reconstruct smooth or regular damaged macroblocks. However, for damaged macroblocks that are irregular or high-detail, the reconstruction may exhibit noticeable blurring or may not match well with the surrounding macroblocks. This paper proposes an edge-based temporal EC model to conceal the errors. In the proposed method, both the spatial and the temporal contextual features in compressed video are measured by using an edge detector, i.e., the Sobel operator. The edge information surrounding a damaged macroblock is utilized to estimate the lost motion vectors based on the boundary matching technique. Next, the estimated motion vectors are used to reconstruct the damaged macroblock by exploiting the information in reference frames. In comparison with traditional EC algorithms, the proposed method provides a significant improvement in both objective peak signal-to-noise ratio (PSNR) measurement and subjective visual quality of MPEG-coded video.
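
    A simplified Python sketch of the edge-weighted boundary matching idea, assuming a grayscale frame, a known lost 16x16 macroblock position, and a full-search window; weighting the boundary error by Sobel edge magnitude is one plausible reading of the method, not the authors' exact formulation.

        import numpy as np
        from scipy.ndimage import sobel

        def conceal_macroblock(cur, ref, top, left, size=16, search=8):
            # Estimate the lost MB's motion vector by boundary matching: compare the
            # rows just above/below the damaged MB with the first/last rows of each
            # candidate block in the reference frame, weighting by edge magnitude.
            gray = cur.astype(float)
            edge = np.hypot(sobel(gray, axis=0), sobel(gray, axis=1))
            rows = (top - 1, top + size)
            cols = np.arange(left, left + size)
            best_mv, best_cost = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = ref[top + dy:top + dy + size,
                               left + dx:left + dx + size].astype(float)
                    cost = 0.0
                    for border_row, cand_row in zip(rows, (cand[0], cand[-1])):
                        weight = 1.0 + edge[border_row, cols]
                        cost += np.sum(weight * np.abs(gray[border_row, cols] - cand_row))
                    if cost < best_cost:
                        best_cost, best_mv = cost, (dy, dx)
            dy, dx = best_mv
            cur[top:top + size, left:left + size] = ref[top + dy:top + dy + size,
                                                        left + dx:left + dx + size]
            return best_mv

        rng = np.random.default_rng(3)
        ref = rng.integers(0, 256, (64, 64)).astype(float)
        cur = ref.copy()
        cur[24:40, 24:40] = 0  # simulate a lost 16x16 macroblock
        print(conceal_macroblock(cur, ref, top=24, left=24))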

  10. LLNL-Earth3D

    SciTech Connect

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using it for earthquake location and global tomography efforts, and such codes are of great interest to the Earth Science community.

  11. Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding

    NASA Astrophysics Data System (ADS)

    Oh, Kwan-Jung; Oh, Byung Tae

    2015-04-01

    We present an intracoding method that is applicable to depth map coding in multiview plus depth systems. Our approach combines skip prediction and plane segmentation-based prediction. The proposed depth intraskip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane segmentation-based intraprediction divides the current block into two regions and applies a different prediction scheme to each segmented region. This method avoids incorrect estimations across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and has the ability to improve the subjective rendering quality.
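
    A toy Python sketch of plane segmentation-based depth intraprediction, assuming the reconstructed samples above and to the left of the block are available; the two-class split at the mean of the reference samples and the nearest-reference assignment rule are illustrative simplifications, not the paper's segmentation.

        import numpy as np

        def plane_segmentation_intra(top_ref, left_ref, block_size=8):
            # Split the neighbouring reference samples into two classes at their mean
            # and predict each pixel with the class mean suggested by its nearer
            # reference sample (above or to the left).
            refs = np.concatenate([top_ref, left_ref]).astype(float)
            thr = refs.mean()
            lo = refs[refs <= thr].mean()
            hi = refs[refs > thr].mean() if np.any(refs > thr) else lo
            pred = np.empty((block_size, block_size))
            for y in range(block_size):
                for x in range(block_size):
                    r = left_ref[y] if x < y else top_ref[x]  # nearer reference sample
                    pred[y, x] = hi if r > thr else lo
            return pred

        top = np.array([30, 30, 31, 90, 92, 91, 90, 89])   # depth edge along the top row
        left = np.array([30, 29, 30, 31, 30, 30, 31, 30])
        print(plane_segmentation_intra(top, left))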

  12. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    NASA Astrophysics Data System (ADS)

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.

  13. SPECT Imaging of 2-D and 3-D Distributed Sources with Near-Field Coded Aperture Collimation: Computer Simulation and Real Data Validation.

    PubMed

    Mu, Zhiping; Dobrucki, Lawrence W; Liu, Yi-Hwa

    The imaging of distributed sources with near-field coded aperture (CA) remains extremely challenging and is broadly considered unsuitable for single-photon emission computerized tomography (SPECT). This study proposes a novel CA SPECT reconstruction approach and evaluates the feasibility of imaging and reconstructing distributed hot sources and cold lesions using near-field CA collimation and iterative image reconstruction. Computer simulations were designed to compare CA and pinhole collimations in two-dimensional radionuclide imaging. Digital phantoms were created and CA images of the phantoms were reconstructed using maximum likelihood expectation maximization (MLEM). Errors and the contrast-to-noise ratio (CNR) were calculated and image resolution was evaluated. An ex vivo rat heart with myocardial infarction was imaged using a micro-SPECT system equipped with a custom-made CA module and a commercial 5-pinhole collimator. Rat CA images were reconstructed via the three-dimensional (3-D) MLEM algorithm developed for CA SPECT with and without correction for a large projection angle, and 5-pinhole images were reconstructed using the commercial software provided by the SPECT system. Phantom images of CA were markedly improved in terms of image quality, quantitative root-mean-squared error, and CNR, as compared to pinhole images. CA and pinhole images yielded similar image resolution, while CA collimation resulted in fewer noise artifacts. CA and pinhole images of the rat heart were well reconstructed and the myocardial perfusion defects could be clearly discerned from 3-D CA and 5-pinhole SPECT images, whereas 5-pinhole SPECT images suffered from severe noise artifacts. Image contrast of CA SPECT was further improved after correction for the large projection angle used in the rat heart imaging. The computer simulations and small-animal imaging study presented herein indicate that the proposed 3-D CA SPECT imaging and reconstruction approaches worked reasonably
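
    The MLEM update used in the reconstructions above has a compact form, sketched below in Python with a random matrix standing in for the coded-aperture system model; the iteration count and problem sizes are arbitrary.

        import numpy as np

        def mlem(system_matrix, projections, n_iters=50, eps=1e-12):
            # MLEM update:  x <- x / (A^T 1) * A^T ( y / (A x) )
            A = np.asarray(system_matrix, dtype=float)
            y = np.asarray(projections, dtype=float)
            x = np.ones(A.shape[1])
            sensitivity = A.sum(axis=0) + eps
            for _ in range(n_iters):
                ratio = y / (A @ x + eps)
                x *= (A.T @ ratio) / sensitivity
            return x

        rng = np.random.default_rng(4)
        A = rng.random((200, 64))          # 200 detector bins, 64 image voxels (toy sizes)
        x_true = rng.random(64)
        y = rng.poisson(A @ x_true * 50)   # noisy coded-aperture "measurements"
        x_hat = mlem(A, y / 50.0)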

  14. Validation of 3D Code KATRIN For Fast Neutron Fluence Calculation of VVER-1000 Reactor Pressure Vessel by Ex-Vessel Measurements and Surveillance Specimens Results

    NASA Astrophysics Data System (ADS)

    Dzhalandinov, A.; Tsofin, V.; Kochkin, V.; Panferov, P.; Timofeev, A.; Reshetnikov, A.; Makhotin, D.; Erak, D.; Voloschenko, A.

    2016-02-01

    Usually the synthesis of two-dimensional and one-dimensional discrete ordinate calculations is used to evaluate the neutron fluence on the VVER-1000 reactor pressure vessel (RPV) for prognosis of radiation embrittlement. However, there are some cases where this approach is not applicable. For example, the latest VVER-1000 projects have an upgraded surveillance program. Containers with surveillance specimens are located on the inner surface of the RPV at the fast neutron flux maximum. Therefore, the synthesis approach is not well suited for calculating the local disturbance of the neutron field at the RPV inner surface behind the surveillance specimens because of their complicated and heterogeneous structure. In some cases the VVER-1000 core loading consists of fuel assemblies with different fuel heights, and the applicability of the synthesis approach is also questionable for these fuel cycles. The synthesis approach is likewise not sufficiently accurate for neutron fluence estimation in the RPV area above the core top. For these reasons, only 3D neutron transport codes appear satisfactory for calculation of the neutron fluence on the VVER-1000 RPV. Direct 3D calculations are also recommended by modern regulations.

  15. Validation of the RPLUS3D Code for Supersonic Inlet Applications Involving Three-Dimensional Shock Wave-Boundary Layer Interactions

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.

    1994-01-01

    A three-dimensional computational fluid dynamics code, RPLUS3D, which was developed for the reactive propulsive flows of ramjets and scramjets, was validated for glancing shock wave-boundary layer interactions. Both laminar and turbulent flows were studied. A supersonic flow over a wedge mounted on a flat plate was numerically simulated. For the laminar case, the static pressure distribution, velocity vectors, and particle traces on the flat plate were obtained. For turbulent flow, both the Baldwin-Lomax and Chien two-equation turbulence models were used. The static pressure distributions, pitot pressure, and yaw angle profiles were computed. In addition, the velocity vectors and particle traces on the flat plate were also obtained from the computed solution. Overall, the computed results for both laminar and turbulent cases compared very well with the experimentally obtained data.

  16. Frozen Rotor and Sliding Mesh Models Applied to the 3D Simulation of the Francis-99 Tokke Turbine with Code_Saturne

    NASA Astrophysics Data System (ADS)

    Tonello, N.; Eude, Y.; de Laage de Meux, B.; Ferrand, M.

    2017-01-01

    The steady-state operation of the Francis-99 Tokke turbine [1-3] has been simulated numerically at different loads using the open-source CAD and CFD software SALOME [4] and Code_Saturne [5]. The full 3D mesh of the Tokke turbine provided for the Second Francis-99 Workshop has been adapted and modified to work with the solver. Results are compared for the frozen-rotor and the unsteady, conservative sliding-mesh approaches over three operating points, showing that good agreement with the experimental data is obtained with both models without having to tune the CFD models for each operating point. Approaches to the simulation of transient operation are also presented with results of work in progress.

  17. Impact of event-specific chorus wave realization for modeling the October 8-9, 2012, event using the LANL DREAM3D diffusion code

    NASA Astrophysics Data System (ADS)

    Cunningham, G.; Tu, W.; Chen, Y.; Reeves, G. D.; Henderson, M. G.; Baker, D. N.; Blake, J. B.; Spence, H.

    2013-12-01

    During the interval October 8-9, 2012, the phase-space density (PSD) of high-energy electrons exhibited a dropout preceding an intense enhancement observed by the MagEIS and REPT instruments aboard the Van Allen Probes. The evolution of the PSD suggests heating by chorus waves, which were observed to have high intensities at the time of the enhancement [1]. Although intense chorus waves were also observed during the first Dst dip on October 8, no PSD enhancement was observed at this time. We demonstrate a quantitative reproduction of the entire event that makes use of three recent modifications to the LANL DREAM3D diffusion code: 1) incorporation of a time-dependent, low-energy boundary condition from the MagEIS instrument, 2) use of a time-dependent estimate of the chorus wave intensity derived from observations of POES low-energy electron precipitation, and 3) use of an estimate of the last closed drift shell, beyond which electrons are assumed to have a lifetime that is proportional to their drift period around Earth. The key features of the event are quantitatively reproduced by the simulation, including the dropout on October 8 and a rapid increase in PSD early on October 9, with a peak near L*=4.2. The DREAM3D code predicts the dropout on October 8 because this feature is dominated by magnetospheric compression and outward radial diffusion - the L* of the last closed drift shell reaches a minimum value of 5.33 at 1026 UT on October 8. We find that a 'statistical' wave model based on historical CRRES measurements binned in AE* does not reproduce the enhancement because the peak wave amplitudes are only a few tens of pT, whereas an 'event-specific' model reproduces both the magnitude and timing of the enhancement very well, as shown in the Figure, because the peak wave amplitudes are 10x higher. [1] 'Electron Acceleration in the Heart of the Van Allen Radiation Belts', G. D. Reeves et al., Science 1237743, Published online 25 July 2013 [DOI:10.1126/science

  18. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images.

    PubMed

    Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G

    2013-11-21

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed for the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; this difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3–4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image

  19. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images*

    PubMed Central

    Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G

    2014-01-01

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed for the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; such difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3–4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image

  20. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images

    NASA Astrophysics Data System (ADS)

    Botta, F.; Mairani, A.; Hobbs, R. F.; Vergara Gil, A.; Pacilio, M.; Parodi, K.; Cremonesi, M.; Coca Pérez, M. A.; Di Dia, A.; Ferrari, M.; Guerriero, F.; Battistoni, G.; Pedroli, G.; Paganelli, G.; Torres Aroche, L. A.; Sgouros, G.

    2013-11-01

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed for the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; such difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3-4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image

  1. User-action-driven view and rate scalable multiview video coding.

    PubMed

    Chakareski, Jacob; Velisavljevic, Vladan; Stankovic, Vladimir

    2013-09-01

    We derive an optimization framework for joint view and rate scalable coding of multi-view video content represented in the texture plus depth format. The optimization enables the sender to select the subset of coded views and their encoding rates such that the aggregate distortion over a continuum of synthesized views is minimized. We construct the view and rate embedded bitstream such that it delivers optimal performance simultaneously over a discrete set of transmission rates. In conjunction, we develop a user interaction model that characterizes the view selection actions of the client as a Markov chain over a discrete state-space. We exploit the model within the context of our optimization to compute user-action-driven coding strategies that aim at enhancing the client's performance in terms of latency and video quality. Our optimization outperforms the state-of-the-art H.264 SVC codec as well as a multi-view wavelet-based coder equipped with a uniform rate allocation strategy, across all scenarios studied in our experiments. Equally important, we can achieve an arbitrarily fine granularity of encoding bit rates, while providing a novel functionality of view embedded encoding, unlike the other encoding methods that we examined. Finally, we observe that the interactivity-aware coding delivers superior performance over conventional allocation techniques that do not anticipate the client's view selection actions in their operation.

  2. Development of the 3D Parallel Particle-In-Cell Code IMPACT to Simulate the Ion Beam Transport System of VENUS (Abstract)

    SciTech Connect

    Qiang, J.; Leitner, D.; Todd, D.S.; Ryne, R.D.

    2005-03-15

    The superconducting ECR ion source VENUS serves as the prototype injector ion source for the Rare Isotope Accelerator (RIA) driver linac. The RIA driver linac requires a great variety of high charge state ion beams with up to an order of magnitude higher intensity than currently achievable with conventional ECR ion sources. In order to design the beam line optics of the low energy beam line for the RIA front end for the wide parameter range required for the RIA driver accelerator, reliable simulations of the ion beam extraction from the ECR ion source through the ion mass analyzing system are essential. The RIA low energy beam transport line must be able to transport intense beams (up to 10 mA) of light and heavy ions at 30 keV. For this purpose, LBNL is developing the parallel 3D particle-in-cell code IMPACT to simulate the ion beam transport from the ECR extraction aperture through the analyzing section of the low energy transport system. IMPACT, a parallel, particle-in-cell code, is currently used to model the superconducting RF linac section of RIA and is being modified in order to simulate DC beams from the ECR ion source extraction. By using the high performance of parallel supercomputing we will be able to account consistently for the changing space charge in the extraction region and the analyzing section. A progress report and early results in the modeling of the VENUS source will be presented.

  3. Development of the 3D Parallel Particle-In-Cell Code IMPACT to Simulate the Ion Beam Transport System of VENUS (Abstract)

    NASA Astrophysics Data System (ADS)

    Qiang, J.; Leitner, D.; Todd, D. S.; Ryne, R. D.

    2005-03-01

    The superconducting ECR ion source VENUS serves as the prototype injector ion source for the Rare Isotope Accelerator (RIA) driver linac. The RIA driver linac requires a great variety of high charge state ion beams with up to an order of magnitude higher intensity than currently achievable with conventional ECR ion sources. In order to design the beam line optics of the low energy beam line for the RIA front end for the wide parameter range required for the RIA driver accelerator, reliable simulations of the ion beam extraction from the ECR ion source through the ion mass analyzing system are essential. The RIA low energy beam transport line must be able to transport intense beams (up to 10 mA) of light and heavy ions at 30 keV. For this purpose, LBNL is developing the parallel 3D particle-in-cell code IMPACT to simulate the ion beam transport from the ECR extraction aperture through the analyzing section of the low energy transport system. IMPACT, a parallel, particle-in-cell code, is currently used to model the superconducting RF linac section of RIA and is being modified in order to simulate DC beams from the ECR ion source extraction. By using the high performance of parallel supercomputing we will be able to account consistently for the changing space charge in the extraction region and the analyzing section. A progress report and early results in the modeling of the VENUS source will be presented.

  4. Analysis of 3D and multiview extensions of the emerging HEVC standard

    NASA Astrophysics Data System (ADS)

    Vetro, Anthony; Tian, Dong

    2012-10-01

    Standardization of a new set of 3D formats has been initiated with the goal of improving the coding of stereo and multiview video, and also facilitating the generation of multiview output needed for auto-stereoscopic displays. Part of this effort will develop 3D and multiview extensions of the emerging standard for High Efficiency Video Coding (HEVC). This paper outlines some of the key technologies and architectures being considered for standardization, and analyzes the viability, benefits and drawbacks of different codec designs.

  5. Implementation of agronomical and geochemical modules into a 3D groundwater code for assessing nitrate storage and transport through unconfined Chalk aquifer

    NASA Astrophysics Data System (ADS)

    Picot-Colbeaux, Géraldine; Devau, Nicolas; Thiéry, Dominique; Pettenati, Marie; Surdyk, Nicolas; Parmentier, Marc; Amraoui, Nadia; Crastes de Paulet, François; André, Laurent

    2016-04-01

    The Chalk aquifer is the main water resource for domestic water supply in many parts of northern France. In some basins, groundwater is frequently affected by quality problems concerning nitrate. Often close to or above the drinking water standards, the nitrate concentration in groundwater is mainly due to historical agricultural practices, combined with leakage and aquifer recharge through the vadose zone. The complexity of the processes occurring in such an environment requires taking into account considerable knowledge of agronomy, geochemistry and hydrogeology in order to understand, model and predict the spatiotemporal evolution of nitrate content and to provide a decision support tool for water producers and stakeholders. To succeed in this challenge, conceptual and numerical models that accurately represent the specificity of the Chalk aquifer need to be developed. A multidisciplinary approach is developed to simulate storage and transport from the ground surface to the groundwater. This involves a new agronomic module "NITRATE" (NItrogen TRansfer for Arable soil to groundwaTEr), a soil-crop model allowing calculation of the nitrogen mass balance in arable soil, and the "PHREEQC" numerical code for geochemical calculations, both coupled with the 3D transient groundwater numerical code "MARTHE". In addition, new developments in the MARTHE code allow the use of the dual porosity and permeability calculations needed in the fissured Chalk aquifer context. Integrating these existing multi-disciplinary tools is a real challenge: the number of parameters must be reduced by selecting the relevant equations and simplifying them without altering the signal. The robustness and validity of these numerical developments are tested step by step with several simulations constrained by climate forcing, land use and nitrogen inputs over several decades. As a first step, simulations are performed on a 1D vertical unsaturated soil column for representing experimental nitrates

  6. Neighboring block based disparity vector derivation for multiview compatible 3D-AVC

    NASA Astrophysics Data System (ADS)

    Kang, Jewon; Chen, Ying; Zhang, Li; Zhao, Xin; Karczewicz, Marta

    2013-09-01

    3D-AVC, being developed under the Joint Collaborative Team on 3D Video Coding (JCT-3V), significantly outperforms Multiview Video Coding plus Depth (MVC+D), which simultaneously encodes texture views and depth views with the multiview extension of H.264/AVC (MVC). However, when 3D-AVC is configured to support multiview compatibility, in which texture views are decoded without depth information, the coding performance becomes significantly degraded. The reason is that the advanced coding tools incorporated into 3D-AVC do not perform well due to the lack of a disparity vector converted from the depth information. In this paper, we propose a disparity vector derivation method utilizing only the information of texture views. Motion information of neighboring blocks is used to determine a disparity vector for a macroblock, so that the derived disparity vector is efficiently used by the coding tools in 3D-AVC. The proposed method significantly improves the coding gain of 3D-AVC in the multiview compatible mode, achieving about 20% BD-rate savings in the coded views and 26% BD-rate savings in the synthesized views on average.
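
    The neighboring-block idea can be sketched in Python as follows: scan the motion information of neighbouring blocks and take the first vector that points to an inter-view reference picture as the disparity vector. The scan order, the data layout, and the zero-vector fallback are assumptions made for illustration, not the 3D-AVC specification.

        from dataclasses import dataclass
        from typing import Optional, Tuple

        @dataclass
        class NeighborMotion:
            mv: Tuple[int, int]      # motion/disparity vector in quarter-pel units
            is_inter_view: bool      # True if the reference picture is in another view

        def derive_disparity_vector(neighbors) -> Optional[Tuple[int, int]]:
            # Scan the neighbours in a fixed order and return the first vector that
            # points to an inter-view reference picture; fall back to zero otherwise.
            for nb in neighbors:
                if nb is not None and nb.is_inter_view:
                    return nb.mv
            return (0, 0)

        left = NeighborMotion(mv=(6, -1), is_inter_view=False)   # temporally predicted
        above = NeighborMotion(mv=(-24, 0), is_inter_view=True)  # carries a disparity
        print(derive_disparity_vector([left, above]))            # -> (-24, 0)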

  7. Machine learning-based coding unit depth decisions for flexible complexity allocation in high efficiency video coding.

    PubMed

    Zhang, Yun; Kwong, Sam; Wang, Xu; Yuan, Hui; Pan, Zhaoqing; Xu, Long

    2015-07-01

    In this paper, we propose a machine learning-based fast coding unit (CU) depth decision method for High Efficiency Video Coding (HEVC), which optimizes the complexity allocation at the CU level under given rate-distortion (RD) cost constraints. First, we analyze the quad-tree CU depth decision process in HEVC and model it as a three-level hierarchical binary decision problem. Second, a flexible CU depth decision structure is presented, which allows the performance of each CU depth decision to be smoothly traded off between coding complexity and RD performance. Then, a three-output joint classifier consisting of multiple binary classifiers with different parameters is designed to control the risk of false prediction. Finally, a sophisticated RD-complexity model is derived to determine the optimal parameters for the joint classifier, which is capable of minimizing the complexity at each CU depth under given RD degradation constraints. Comparative experiments over various sequences show that the proposed CU depth decision algorithm can reduce the computational complexity by 28.82% to 70.93%, and 51.45% on average, when compared with the original HEVC test model. The Bjøntegaard delta peak signal-to-noise ratio and Bjøntegaard delta bit rate are -0.061 dB and 1.98% on average, which is negligible. The overall performance of the proposed algorithm outperforms those of the state-of-the-art schemes.
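
    A toy Python sketch of the three-level hierarchical binary decision, assuming simple linear classifiers with a three-output rule (split / not split / uncertain, the last falling back to the normal RD search); the features, weights, and margin are placeholders, not trained HEVC models.

        import numpy as np

        def binary_split_decision(features, weights, bias, margin=0.5):
            # Three-output decision for one CU depth level: 'split', 'not_split', or
            # 'uncertain' (fall back to the normal rate-distortion search).
            score = float(np.dot(weights, features) + bias)
            if score > margin:
                return "split"
            if score < -margin:
                return "not_split"
            return "uncertain"

        def decide_cu_depth(features_per_depth, models):
            # Walk the quad-tree from 64x64 downwards, stopping early when a level
            # predicts 'not_split' or hands control back to the full RDO search.
            depth = 0
            for feats, (w, b) in zip(features_per_depth, models):
                decision = binary_split_decision(feats, w, b)
                if decision != "split":
                    break
                depth += 1
            return depth, decision

        models = [(np.array([0.8, -0.5, 0.1]), -0.2)] * 3   # placeholder classifiers
        feats = [np.array([2.0, 0.5, 0.3])] * 3             # e.g. variance, RD cost, QP
        print(decide_cu_depth(feats, models))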

  8. Protection of HEVC Video Delivery in Vehicular Networks with RaptorQ Codes

    PubMed Central

    Martínez-Rach, Miguel; López, Otoniel; Malumbres, Manuel Pérez

    2014-01-01

    With future vehicles equipped with processing capability, storage, and communications, vehicular networks will become a reality. A vast number of applications will arise that will make use of this connectivity. Some of them will be based on video streaming. In this paper we focus on streaming of the HEVC video coding standard in vehicular networks and how it deals with packet losses with the aid of RaptorQ, a Forward Error Correction scheme. As vehicular networks are packet-loss-prone networks, protection mechanisms are necessary if we want to guarantee a minimum level of quality of experience to the final user. We have run simulations to evaluate which configurations fit better in this type of scenario. PMID:25136675

  9. A modified prediction scheme of the H.264 multiview video coding to improve the decoder performance

    NASA Astrophysics Data System (ADS)

    Hamadan, Ayman M.; Aly, Hussein A.; Fouad, Mohamed M.; Dansereau, Richard M.

    2013-02-01

    In this paper, we present a modified inter-view prediction scheme for multiview video coding (MVC). With more inter-view prediction, the number of reference frames required to decode a single view increases. Consequently, the data size for decoding a single view increases, thus impacting the decoder performance. In this paper, we propose an MVC scheme that requires less inter-view prediction than the MVC standard scheme. The proposed scheme is implemented and tested on real multiview video sequences. Improvements are shown using the proposed scheme in terms of the average data size required either to decode a single view or to access any frame (i.e., random access), with comparable rate-distortion. It is compared to the MVC standard scheme and other improved techniques from the literature.

  10. Real-time video coding under power constraint based on H.264 codec

    NASA Astrophysics Data System (ADS)

    Su, Li; Lu, Yan; Wu, Feng; Li, Shipeng; Gao, Wen

    2007-01-01

    In this paper, we propose a joint power-distortion optimization scheme for real-time H.264 video encoding under a power constraint. Firstly, the power constraint is translated to a complexity constraint based on DVS technology. Secondly, a computation allocation model (CAM) with virtual buffers is proposed to facilitate the optimal allocation of the constrained computational resources to each frame. Thirdly, a complexity-adjustable encoder based on optimal motion estimation and mode decision is proposed to meet the allocated resources. The proposed scheme takes advantage of some new features of H.264/AVC video coding tools, such as the early termination strategy in fast ME. Moreover, it can avoid the high overhead of parametric power control algorithms and achieve fine complexity scalability over a wide range with stable rate-distortion performance. The proposed scheme also shows the potential for a further reduction of computation and power consumption in the decoding without any change to existing decoders.

  11. Fast bi-directional prediction selection in H.264/MPEG-4 AVC temporal scalable video coding.

    PubMed

    Lin, Hung-Chih; Hang, Hsueh-Ming; Peng, Wen-Hsiao

    2011-12-01

    In this paper, we propose a fast algorithm that efficiently selects the temporal prediction type for the dyadic hierarchical-B prediction structure in H.264/MPEG-4 temporal scalable video coding (SVC). We make use of the strong correlations in prediction type inheritance to eliminate the superfluous computations for the bi-directional (BI) prediction in the finer partitions, 16×8/8×16/8×8, by referring to the best temporal prediction type of 16×16. In addition, we carefully examine the relationship in motion bit-rate costs and distortions between the BI and the uni-directional temporal prediction types. As a result, we construct a set of adaptive thresholds to remove the unnecessary BI calculations. Moreover, for the block partitions smaller than 8×8, either the forward prediction (FW) or the backward prediction (BW) is skipped based upon the information of their 8×8 partitions. Hence, the proposed schemes can efficiently reduce the extensive computational burden in calculating the BI prediction. As compared to the JSVM 9.11 software, our method reduces the encoding time by 48% to 67% for a large variety of test videos over a wide range of coding bit-rates and has only a minor coding performance loss.
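
    The BI-skipping idea can be illustrated with a small Python heuristic that inherits the 16x16 decision and applies a threshold before evaluating BI in the finer partitions; the threshold value and the exact rule are assumptions, not the adaptive thresholds derived in the paper.

        def skip_bi_for_finer_partitions(best_16x16, cost_fw, cost_bw, cost_bi, alpha=1.05):
            # Inherit the 16x16 prediction type: if BI did not win there, skip the BI
            # search for 16x8/8x16/8x8; if it won only marginally, skip it as well.
            if best_16x16 != "BI":
                return True
            return cost_bi * alpha > min(cost_fw, cost_bw)

        print(skip_bi_for_finer_partitions("FW", cost_fw=1000, cost_bw=1100, cost_bi=990))  # True
        print(skip_bi_for_finer_partitions("BI", cost_fw=1000, cost_bw=1100, cost_bi=700))  # False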

  12. Robust pedestrian tracking and recognition from FLIR video: a unified approach via sparse coding.

    PubMed

    Li, Xin; Guo, Rui; Chen, Chao

    2014-06-24

    Sparse coding is an emerging method that has been successfully applied to both robust object tracking and recognition in the vision literature. In this paper, we propose to explore a sparse coding-based approach toward joint object tracking-and-recognition and explore its potential in the analysis of forward-looking infrared (FLIR) video to support nighttime machine vision systems. A key technical contribution of this work is to unify existing sparse coding-based approaches toward tracking and recognition under the same framework, so that they can benefit from each other in a closed-loop. On the one hand, tracking the same object through temporal frames allows us to achieve improved recognition performance through dynamical updating of template/dictionary and combining multiple recognition results; on the other hand, the recognition of individual objects facilitates the tracking of multiple objects (i.e., walking pedestrians), especially in the presence of occlusion within a crowded environment. We report experimental results on both the CASIA Pedestrian Database and our own collected FLIR video database to demonstrate the effectiveness of the proposed joint tracking-and-recognition approach.

  13. Robust Pedestrian Tracking and Recognition from FLIR Video: A Unified Approach via Sparse Coding

    PubMed Central

    Li, Xin; Guo, Rui; Chen, Chao

    2014-01-01

    Sparse coding is an emerging method that has been successfully applied to both robust object tracking and recognition in the vision literature. In this paper, we propose to explore a sparse coding-based approach toward joint object tracking-and-recognition and explore its potential in the analysis of forward-looking infrared (FLIR) video to support nighttime machine vision systems. A key technical contribution of this work is to unify existing sparse coding-based approaches toward tracking and recognition under the same framework, so that they can benefit from each other in a closed-loop. On the one hand, tracking the same object through temporal frames allows us to achieve improved recognition performance through dynamical updating of template/dictionary and combining multiple recognition results; on the other hand, the recognition of individual objects facilitates the tracking of multiple objects (i.e., walking pedestrians), especially in the presence of occlusion within a crowded environment. We report experimental results on both the CASIA Pedestrian Database and our own collected FLIR video database to demonstrate the effectiveness of the proposed joint tracking-and-recognition approach. PMID:24961216

  14. Minimum distortion quantizer for fixed-rate 64-subband video coding

    NASA Astrophysics Data System (ADS)

    Alparone, Luciano; Andreadis, Alessandro; Argenti, Fabrizio; Benelli, Giuliano; Garzelli, Andrea; Tarchi, A.

    1995-02-01

    A motion-compensated sub-band coding (SBC) scheme for video signals, featuring a fixed rate and an optimum quantizer, is presented. A block-matching algorithm provides a suitable inter-frame prediction, and a 64-sub-band decomposition allows a high decorrelation of the motion-compensated difference field. The main drawback is that sub-bands containing sparse data with different statistics are produced, thus requiring run-length (RL) and variable-length coding (VLC) for best performance. However, most digital communication channels operate at a constant bit rate (BR); hence, fixed-rate video coding is the main goal, in order to reduce buffering delays. The approach followed in this work is to model the sub-bands as independent memoryless sources with generalized Gaussian PDFs and to design optimum uniform quantizers that minimize distortion once a BR value has been specified, also accounting for the entropy of the RLs of zero/nonzero coefficients. The problem is stated in terms of entropy allocation among sub-bands minimizing the overall distortion, analogously to optimal distortion allocation when fixed quality is requested. The constrained minimum is found by means of Lagrange multipliers, once the parametric PDFs have been assessed from true TV sequences. This procedure provides the optimum step for uniform quantization of each sub-band, thus leading to the discarding of some of the least significant ones.
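
    Under the standard high-rate model D_i(R_i) = sigma_i^2 * 2^(-2 R_i), the Lagrangian allocation has a closed form per sub-band for a given multiplier, which can be bisected to meet the total budget. The Python sketch below illustrates this simplified version; the run-length entropy term and the generalized Gaussian modelling of the paper are omitted.

        import numpy as np

        def allocate_rates(variances, total_rate, weights=None, iters=60):
            # Minimise sum_i w_i * sigma_i^2 * 2^(-2 R_i)  subject to  sum_i R_i = R,
            # by bisecting the Lagrange multiplier; negative rates are clipped to zero.
            var = np.asarray(variances, dtype=float)
            w = np.ones_like(var) if weights is None else np.asarray(weights, dtype=float)

            def rates(lmbda):
                return np.maximum(0.0, 0.5 * np.log2(2.0 * np.log(2.0) * w * var / lmbda))

            lo, hi = 1e-12, 2.0 * np.log(2.0) * float(np.max(w * var))
            for _ in range(iters):
                mid = 0.5 * (lo + hi)
                if rates(mid).sum() > total_rate:
                    lo = mid        # spending too many bits: raise the "price" lambda
                else:
                    hi = mid
            return rates(0.5 * (lo + hi))

        subband_var = np.array([900.0, 400.0, 120.0, 40.0, 10.0, 2.0])
        print(allocate_rates(subband_var, total_rate=8.0))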

  15. Evaluation of the scale dependent dynamic SGS model in the open source code caffa3d.MBRi in wall-bounded flows

    NASA Astrophysics Data System (ADS)

    Draper, Martin; Usera, Gabriel

    2015-04-01

    The Scale Dependent Dynamic Model (SDDM) has been widely validated in large-eddy simulations using pseudo-spectral codes [1][2][3]. The scale dependence, particularly the power law, has also been demonstrated in a priori studies [4][5]. To the authors' knowledge there have been only a few attempts to use the SDDM in finite difference (FD) and finite volume (FV) codes [6][7], finding some improvements with the dynamic procedures (scale independent or scale dependent approach), but not showing the behavior of the scale-dependence parameter when using the SDDM. The aim of the present paper is to evaluate the SDDM in the open source code caffa3d.MBRi, an updated version of the code presented in [8]. caffa3d.MBRi is a FV code, second-order accurate, parallelized with MPI, in which the domain is divided into unstructured blocks of structured grids. To accomplish this, two cases are considered: flow between flat plates and flow over a rough surface with the presence of a model wind turbine, taking for this case the experimental data presented in [9]. In both cases the standard Smagorinsky Model (SM), the Scale Independent Dynamic Model (SIDM) and the SDDM are tested. As presented in [6][7], slight improvements are obtained with the SDDM. Nevertheless, the behavior of the scale-dependence parameter supports the generalization of the dynamic procedure proposed in the SDDM, particularly taking into account that no explicit filter is used (the implicit filter is unknown). [1] F. Porté-Agel, C. Meneveau, M.B. Parlange. "A scale-dependent dynamic model for large-eddy simulation: application to a neutral atmospheric boundary layer". Journal of Fluid Mechanics, 2000, 415, 261-284. [2] E. Bou-Zeid, C. Meneveau, M. Parlange. "A scale-dependent Lagrangian dynamic model for large eddy simulation of complex turbulent flows". Physics of Fluids, 2005, 17, 025105 (18p). [3] R. Stoll, F. Porté-Agel. "Dynamic subgrid-scale models for momentum and scalar fluxes in large-eddy simulations of

  16. MAP3D: a media processor approach for high-end 3D graphics

    NASA Astrophysics Data System (ADS)

    Darsa, Lucia; Stadnicki, Steven; Basoglu, Chris

    1999-12-01

    Equator Technologies, Inc. has used a software-first approach to produce several programmable and advanced VLIW processor architectures that have the flexibility to run both traditional systems tasks and an array of media-rich applications. For example, Equator's MAP1000A is the world's fastest single-chip programmable signal and image processor targeted for digital consumer and office automation markets. The Equator MAP3D is a proposal for the architecture of the next generation of the Equator MAP family. The MAP3D is designed to achieve high-end 3D performance and a variety of customizable special effects by combining special graphics features with a high-performance floating-point and media processor architecture. As a programmable media processor, it offers the advantages of a completely configurable 3D pipeline--allowing developers to experiment with different algorithms and to tailor their pipeline to achieve the highest performance for a particular application. With the support of Equator's advanced C compiler and toolkit, MAP3D programs can be written in a high-level language. This allows the compiler to find and exploit any parallelism in a programmer's code, thus decreasing the time to market of a given application. The ability to run an operating system makes it possible to run concurrent applications on the MAP3D chip, such as video decoding while executing the 3D pipelines, so that integration of applications is easily achieved--using real-time decoded imagery for texturing 3D objects, for instance. This novel architecture enables an affordable, integrated solution for high-performance 3D graphics.

  17. Robust video communication by combining scalability and multiple description coding techniques

    NASA Astrophysics Data System (ADS)

    Wang, Huisheng; Ortega, Antonio

    2003-05-01

    Layered coding (LC) and multiple description coding (MDC) have been proposed as two different kinds of 'quality adaptation' schemes for video delivery over the current Internet or wireless networks. To combine the advantages of LC and MDC, we present a new approach -- Multiple Description Layered Coding (MDLC) -- to provide reliable video communication over a wider range of network scenarios and application requirements. MDLC improves on LC in that it introduces redundancy in each layer so that the chance of receiving at least one description of the base layer is greatly enhanced. Though LC and MDC are each good in limit cases (e.g., long end-to-end delay for LC vs. short delay for MDC), the proposed MDLC system can address intermediate cases as well. Like an LC system with retransmission, the MDLC system can have a feedback channel to indicate which descriptions have been correctly received. Thus a low-redundancy MDLC system can be implemented with our proposed runtime packet scheduling system based on the feedback information. The goal of our scheduling algorithm is to find a proper on-line packet scheduling policy to maximize the playback quality at the decoder. Previous work on scheduling algorithms has not considered multiple decoding choices due to the redundancy between data units, because of the increase in complexity involved in considering alternate decoding paths. In this paper, we introduce a new model, the Directed Acyclic HyperGraph (DAHG), to represent the data dependencies among frames and layers, as well as the data correlation between descriptions. The impact of each data unit on others is represented by messages passed along the graph, with updates based on newly received information. Experimental results show that the proposed system provides more robust and efficient video communication for real-time applications over lossy packet networks.

  18. Side information and noise learning for distributed video coding using optical flow and clustering.

    PubMed

    Luong, Huynh Van; Rakêt, Lars Lau; Huang, Xin; Forchhammer, Søren

    2012-12-01

    Distributed video coding (DVC) is a coding paradigm that exploits the source statistics at the decoder side to reduce the complexity at the encoder. The coding efficiency of DVC critically depends on the quality of the side information generation and the accuracy of the noise modeling. This paper considers transform domain Wyner-Ziv (TDWZ) coding and proposes using optical flow to improve side information generation and clustering to improve the noise modeling. The optical flow technique is exploited at the decoder side to compensate for weaknesses of block-based methods when using motion compensation to generate side information frames. Clustering is introduced to capture cross-band correlation and increase local adaptivity in the noise modeling. This paper also proposes techniques to learn from previously decoded WZ frames. Different techniques are combined by calculating a number of candidate soft side information estimates for low-density parity-check accumulate decoding. The proposed decoder-side techniques for side information and noise learning (SING) are integrated into a TDWZ scheme. On test sequences, the proposed SING codec robustly improves the coding efficiency of TDWZ DVC. For WZ frames using a GOP size of 2, up to a 4-dB improvement or an average (Bjøntegaard) bit-rate savings of 37% is achieved compared with DISCOVER.
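
    A stripped-down Python/OpenCV sketch of optical-flow-based side information generation, assuming two decoded grayscale key frames and a single backward warp to the temporal midpoint; the paper combines several candidate side informations and a learned noise model, which are not shown here.

        import numpy as np
        import cv2

        def side_information(prev_key, next_key):
            # Dense optical flow from the previous to the next key frame, then a
            # backward warp of the previous key frame halfway along the flow field.
            flow = cv2.calcOpticalFlowFarneback(prev_key, next_key, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            h, w = prev_key.shape
            grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
            map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
            map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
            return cv2.remap(prev_key, map_x, map_y, cv2.INTER_LINEAR)

        rng = np.random.default_rng(5)
        k0 = rng.integers(0, 256, (96, 96), dtype=np.uint8)
        k1 = np.roll(k0, 2, axis=1)          # fake key frames two pixels apart
        si = side_information(k0, k1)        # estimate of the in-between WZ frame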

  19. 3D Visualization of Machine Learning Algorithms with Astronomical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2016-01-01

    We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
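
    A minimal sketch of the clustering step, assuming the catalog has already been reduced to Cartesian (x, y, z) positions: it builds a Euclidean minimum spanning tree with SciPy and extracts the edge list that would then be rendered in Blender (the rendering step is omitted, and the random positions are stand-ins for a real catalog).

      import numpy as np
      from scipy.spatial import distance_matrix
      from scipy.sparse.csgraph import minimum_spanning_tree

      rng = np.random.default_rng(0)
      positions = rng.uniform(-50.0, 50.0, size=(200, 3))   # stand-in galaxy positions

      # A dense distance matrix is fine for a few hundred points; a
      # k-nearest-neighbour graph scales better for large catalogs.
      dist = distance_matrix(positions, positions)
      tree = minimum_spanning_tree(dist).tocoo()

      edges = np.column_stack([tree.row, tree.col])   # (N-1, 2) endpoint indices
      lengths = tree.data                             # corresponding edge lengths
      print(f"{len(lengths)} edges, mean length {lengths.mean():.2f}, "
            f"total length {lengths.sum():.2f}")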

  20. Unequal error protection codes for wavelet video transmission over W-CDMA, AWGN, and Rayleigh fading channels

    NASA Astrophysics Data System (ADS)

    Le, Minh Hung; Liyana-Pathirana, Ranjith

    2003-06-01

    The unequal error protection (UEP) codes with a wavelet-based algorithm for video compression over wide-band code division multiple access (W-CDMA), additive white Gaussian noise (AWGN) and Rayleigh fading channels are analysed. The use of wavelets has proven to be a powerful method for compressing video sequences. The wavelet transform compression technique has been shown to be well suited to high-quality video applications, producing better quality output for the compressed frames of video. A spatially scalable video coding framework of MPEG-2 is adopted, in which motion correspondences between successive video frames are exploited in the wavelet transform domain. The basic motivation for our coder is that motion fields are typically smooth and can be efficiently captured through a multiresolution framework. Wavelet decomposition is applied to video frames, and the coefficients at each level are predicted from the coarser level through backward motion compensation. The proposed algorithms of the embedded zero-tree wavelet (EZW) coder and the 2-D wavelet packet transform (2-D WPT) are investigated.
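
    The Python sketch below illustrates only the basic wavelet machinery the abstract builds on -- a multi-level 2-D decomposition of a frame followed by crude magnitude thresholding and reconstruction with PyWavelets. It is not the EZW or 2-D WPT coder, and the wavelet family, level count, and keep ratio are assumptions.

      import numpy as np
      import pywt

      def wavelet_compress_frame(frame: np.ndarray, levels: int = 3,
                                 keep_ratio: float = 0.05) -> np.ndarray:
          """Decompose a grayscale frame, keep only the largest-magnitude
          coefficients (a stand-in for zero-tree coding), and reconstruct."""
          coeffs = pywt.wavedec2(frame.astype(np.float64), 'bior4.4', level=levels)
          arr, slices = pywt.coeffs_to_array(coeffs)            # flatten for thresholding
          thresh = np.quantile(np.abs(arr), 1.0 - keep_ratio)   # keep the top 5%
          arr[np.abs(arr) < thresh] = 0.0
          coeffs = pywt.array_to_coeffs(arr, slices, output_format='wavedec2')
          return pywt.waverec2(coeffs, 'bior4.4')

      # Example on a synthetic QCIF-sized luma ramp
      frame = np.tile(np.linspace(0, 255, 176), (144, 1))
      reconstructed = wavelet_compress_frame(frame)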

  1. H.264/AVC intra-only coding (iAVC) techniques for video over wireless networks

    NASA Astrophysics Data System (ADS)

    Yang, Ming; Trifas, Monica; Xiong, Guolun; Rogers, Joshua

    2009-02-01

    The requirement to transmit video data over unreliable wireless networks (with the possibility of packet loss) is anticipated in the foreseeable future. Significant compression ratios and error resilience are both needed for complex applications including tele-operated robotics, vehicle-mounted cameras, sensor networks, etc. Block-matching based inter-frame coding techniques, including MPEG-4 and H.264/AVC, do not perform well in these scenarios due to error propagation between frames. Many wireless applications often use intra-only coding technologies such as Motion-JPEG, which exhibit better recovery from network data loss at the price of higher data rates. In order to address these research issues, an intra-only coding scheme of H.264/AVC (iAVC) is proposed. In this approach, each frame is coded independently as an I-frame. Frame copy is applied to compensate for packet loss. This approach is a good balance between compression performance and error resilience. It achieves compression performance comparable to Motion-JPEG2000 (MJ2), with lower complexity. Error resilience similar to Motion-JPEG (MJ) is also accomplished. Since the intra-frame prediction in iAVC is strictly confined within the range of a slice, memory usage is also extremely low. Low computational complexity and memory usage are crucial for mobile stations and devices in wireless networks.
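
    Frame-copy concealment as described here is simple enough to state directly. The Python sketch below is an illustration rather than the iAVC implementation: it assumes each slice covers a fixed horizontal band of macroblock rows and copies the bands of lost slices from the previous decoded frame.

      import numpy as np

      def conceal_lost_slices(current: np.ndarray, previous: np.ndarray,
                              lost_slices: list, rows_per_slice: int = 16) -> np.ndarray:
          """Frame-copy error concealment on a grayscale frame: the rows of
          each lost slice are replaced by the co-located rows of the
          previous decoded frame."""
          out = current.copy()
          for s in lost_slices:
              top = s * rows_per_slice
              bottom = min(top + rows_per_slice, current.shape[0])
              out[top:bottom] = previous[top:bottom]
          return out

      # Example: conceal slices 2 and 5 of a QCIF luma frame (144 x 176)
      prev = np.full((144, 176), 128, dtype=np.uint8)
      curr = np.zeros_like(prev)
      concealed = conceal_lost_slices(curr, prev, lost_slices=[2, 5])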

  2. 3D printing of soft and wet systems benefit from hard-to-soft transition of transparent shape memory gels (presentation video)

    NASA Astrophysics Data System (ADS)

    Furukawa, Hidemitsu; Gong, Jin; Makino, Masato; Kabir, Md. Hasnat

    2014-04-01

    Recently we successfully developed novel transparent shape memory gels (SMG). The SMG memorize their original shapes during the gelation process. At room temperature, the SMG are elastic and show plasticity (yielding) under deformation. However, when heated above about 50°C, the SMG undergo a hard-to-soft transition and return to their original shapes automatically. We focus on new soft and wet systems made of the SMG by 3-D printing technology.

  3. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    PubMed Central

    Lee, Chaewoo

    2014-01-01

    Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The result shows that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862

  4. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    PubMed

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The result shows that our methodology enhances the overall system throughput compared to an existing algorithm.
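
    A toy version of the MCS-assignment ILP can be written with a generic solver such as PuLP. The sketch below is not the paper's formulation: the layer rates, utilities, MCS capacities, and user counts are invented, and the model simply picks one MCS per transmitted layer, respects the SVC layer dependency and a unit airtime budget, and maximizes a utility weighted by the number of users whose channel supports each MCS.

      import pulp

      # Hypothetical data: base layer + 2 enhancement layers, 3 MCS options.
      layer_kbps = [400, 600, 1000]     # bit rate of each SVC layer
      layer_utility = [10, 4, 2]        # value of delivering each layer
      mcs_kbps = [2000, 4000, 8000]     # capacity if the whole frame used this MCS
      users_per_mcs = [30, 18, 7]       # users whose channel can decode each MCS
      L, M = len(layer_kbps), len(mcs_kbps)
      airtime = [[layer_kbps[l] / mcs_kbps[m] for m in range(M)] for l in range(L)]

      prob = pulp.LpProblem("svc_mcs_assignment", pulp.LpMaximize)
      x = pulp.LpVariable.dicts("x", (range(L), range(M)), cat="Binary")

      # Objective: utility of each delivered layer, weighted by reachable users.
      prob += pulp.lpSum(users_per_mcs[m] * layer_utility[l] * x[l][m]
                         for l in range(L) for m in range(M))
      # The airtime of all transmitted layers must fit into one frame.
      prob += pulp.lpSum(airtime[l][m] * x[l][m]
                         for l in range(L) for m in range(M)) <= 1.0
      for l in range(L):
          prob += pulp.lpSum(x[l][m] for m in range(M)) <= 1        # one MCS per layer
          if l > 0:                                                 # SVC layer dependency
              prob += (pulp.lpSum(x[l][m] for m in range(M))
                       <= pulp.lpSum(x[l - 1][m] for m in range(M)))

      prob.solve(pulp.PULP_CBC_CMD(msg=False))
      for l in range(L):
          for m in range(M):
              if pulp.value(x[l][m]) > 0.5:
                  print(f"layer {l} -> MCS {m}")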

  5. Numerical simulations of the ablative Rayleigh-Taylor instability in planar inertial-confinement-fusion targets using the FastRad3D code

    NASA Astrophysics Data System (ADS)

    Bates, J. W.; Schmitt, A. J.; Karasik, M.; Zalesak, S. T.

    2016-12-01

    The ablative Rayleigh-Taylor (RT) instability is a central issue in the performance of laser-accelerated inertial-confinement-fusion targets. Historically, the accurate numerical simulation of this instability has been a challenging task for many radiation hydrodynamics codes, particularly when it comes to capturing the ablatively stabilized region of the linear dispersion spectrum and modeling ab initio perturbations. Here, we present recent results from two-dimensional numerical simulations of the ablative RT instability in planar laser-ablated foils that were performed using the Eulerian code FastRad3D. Our study considers polystyrene, (cryogenic) deuterium-tritium, and beryllium target materials, quarter- and third-micron laser light, and low and high laser intensities. An initial single-mode surface perturbation is modeled in our simulations as a small modulation to the target mass density and the ablative RT growth-rate is calculated from the time history of areal-mass variations once the target reaches a steady-state acceleration. By performing a sequence of such simulations with different perturbation wavelengths, we generate a discrete dispersion spectrum for each of our examples and find that in all cases the linear RT growth-rate γ is well described by an expression of the form γ = α [k g / (1 + ε k L_m)]^{1/2} - β k V_a, where k is the perturbation wavenumber, g is the acceleration of the target, L_m is the minimum density scale-length, V_a is the ablation velocity, and ε is either one or zero. The dimensionless coefficients α and β in the above formula depend on the particular target and laser parameters and are determined from two-dimensional simulation results through the use of a nonlinear curve-fitting procedure. While our findings are generally consistent with those of Betti et al. (Phys. Plasmas 5, 1446 (1998)), the ablative RT growth-rates predicted in this investigation are somewhat smaller than the values previously reported for the
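
    For reference, the fitted dispersion relation can be evaluated directly. The short Python sketch below simply codes the expression above; the numerical values of α, β, ε, and the plasma parameters are placeholders for illustration, not fitted values from the paper.

      import numpy as np

      def ablative_rt_growth_rate(k, g, L_m, V_a, alpha, beta, eps=1.0):
          """Fitted ablative RT growth rate:
             gamma = alpha * sqrt(k*g / (1 + eps*k*L_m)) - beta*k*V_a
          k: wavenumber [1/m], g: acceleration [m/s^2],
          L_m: minimum density scale length [m], V_a: ablation velocity [m/s]."""
          return alpha * np.sqrt(k * g / (1.0 + eps * k * L_m)) - beta * k * V_a

      # Placeholder numbers only (not the paper's fit):
      k = 2 * np.pi / 30e-6          # 30-micron perturbation wavelength
      gamma = ablative_rt_growth_rate(k, g=1e14, L_m=1e-6, V_a=3e3,
                                      alpha=0.9, beta=1.7)
      print(f"growth rate ~ {gamma:.3e} 1/s")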

  6. Introduction to study and simulation of low rate video coding schemes

    NASA Technical Reports Server (NTRS)

    1992-01-01

    During this period, simulators for the various HDTV systems proposed to the FCC were developed. These simulators will be tested using test sequences from the MPEG committee. The results will be extrapolated to HDTV video sequences. Currently, the simulator for the compression aspects of the Advanced Digital Television (ADTV) system has been completed. Other HDTV proposals are at various stages of development. A brief overview of the ADTV system is given. Some coding results obtained using the simulator are discussed. These results are compared to those obtained using the CCITT H.261 standard. These results are evaluated in the context of the CCSDS specifications, and some suggestions are made as to how the ADTV system could be implemented in the NASA network.

  7. Unbalanced Multiple-Description Video Coding with Rate-Distortion Optimization

    NASA Astrophysics Data System (ADS)

    Comas, David; Singh, Raghavendra; Ortega, Antonio; Marqués, Ferran

    2003-12-01

    We propose to use multiple-description coding (MDC) to protect video information against packet losses and delay, while also ensuring that it can be decoded using a standard decoder. Video data are encoded into a high-resolution stream using a standard-compliant encoder. In addition, a low-resolution stream is generated by duplicating the relevant information (motion vectors, headers, and some of the DCT coefficients) from the high-resolution stream while the remaining coefficients are set to zero. Both streams are independently decodable by a standard decoder. The corresponding information from the low-resolution stream is decoded only in case of losses in the high-resolution description; otherwise, the received high-resolution description is decoded. The main contribution of this paper is an optimization algorithm which, given the loss ratio, allocates bits to both descriptions and selects the right number of coefficients to duplicate in the low-resolution stream so as to minimize the expected distortion at the decoder end.
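
    The construction of the low-resolution description amounts to keeping only the first few transform coefficients of each block. The Python sketch below shows the idea on a single 8x8 block with a zig-zag cut-off; the block size, cut-off, and orthonormal DCT normalization are illustrative assumptions rather than the paper's encoder settings.

      import numpy as np
      from scipy.fft import dctn, idctn

      def zigzag_indices(n: int = 8):
          """(row, col) pairs of an n x n block in zig-zag scan order."""
          return sorted(((r, c) for r in range(n) for c in range(n)),
                        key=lambda rc: (rc[0] + rc[1],
                                        rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

      def low_resolution_block(block: np.ndarray, keep: int = 10) -> np.ndarray:
          """Duplicate the first 'keep' zig-zag DCT coefficients and set the
          remaining coefficients to zero, as done for the low-resolution stream."""
          coeffs = dctn(block.astype(np.float64), norm='ortho')
          kept = np.zeros_like(coeffs)
          for r, c in zigzag_indices(block.shape[0])[:keep]:
              kept[r, c] = coeffs[r, c]
          return idctn(kept, norm='ortho')

      block = np.arange(64, dtype=np.float64).reshape(8, 8)
      low_res = low_resolution_block(block, keep=10)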

  8. Real-time distributed video coding for 1K-pixel visual sensor networks

    NASA Astrophysics Data System (ADS)

    Hanca, Jan; Deligiannis, Nikos; Munteanu, Adrian

    2016-07-01

    Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.

  9. Moving objects extraction method in H.264/advanced video coding bit stream of a complex scene

    NASA Astrophysics Data System (ADS)

    Mingsheng, Chen; Mingxin, Qin; Guangming, Liang; Jixiang, Sun; Xu, Ning

    2013-08-01

    For the purpose of extracting moving objects from an H.264/advanced video coding (AVC) bit stream of a complex scene, an algorithm based on a maximum a posteriori Markov random field (MRF) framework that extracts moving objects directly from H.264 compressed video is proposed in this paper. It mainly uses the coding information of motion vectors (MVs) and block partition modes in the H.264/AVC bit stream and exploits the temporal continuity and spatial consistency of the pieces of moving objects. First, it retrieves the MVs and block partition modes of identical 4×4 pixel blocks in P frames and establishes a Gaussian mixture model (GMM) of the phase of the MVs as a reference background, and then creates an MRF model based on the MVs, block partition modes, the GMM of the background, and spatial and temporal consistency. The moving objects are retrieved by solving the MRF model. The experimental results show that it performs robustly in a complex environment and that the precision and recall are improved over the existing algorithm.
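
    A rough sketch of the background-modelling step only (not the MRF formulation): fit a Gaussian mixture to the motion-vector phases accumulated over previous P frames and flag blocks of the current frame whose MV phase is unlikely under that model as candidate moving-object blocks. The component count, threshold, and synthetic data below are assumptions.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def fit_background_phase_model(mv_history: np.ndarray, n_components: int = 3):
          """GMM of the MV phase over previous P frames (reference background)."""
          phase = np.arctan2(mv_history[:, 1], mv_history[:, 0]).reshape(-1, 1)
          return GaussianMixture(n_components=n_components, random_state=0).fit(phase)

      def candidate_foreground(gmm, mv_current: np.ndarray, threshold: float = -4.0):
          """Blocks whose MV phase has low likelihood under the background
          model are candidate moving-object blocks (before MRF smoothing)."""
          phase = np.arctan2(mv_current[:, 1], mv_current[:, 0]).reshape(-1, 1)
          return gmm.score_samples(phase) < threshold

      rng = np.random.default_rng(1)
      history = rng.normal([2.0, 0.0], 0.2, (2000, 2))             # global pan to the right
      current = np.vstack([rng.normal([2.0, 0.0], 0.2, (900, 2)),
                           rng.normal([0.0, 3.0], 0.2, (100, 2))]) # object moving down
      mask = candidate_foreground(fit_background_phase_model(history), current)
      print(mask.sum(), "candidate foreground blocks out of", len(mask))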

  10. [3D emulation of epicardium dynamic mapping].

    PubMed

    Lu, Jun; Yang, Cui-Wei; Fang, Zu-Xiang

    2005-03-01

    In order to realize epicardium dynamic mapping of the whole atria, 3-D graphics are drawn with OpenGL. Some source code is presented in the paper to explain how to produce, read, and manipulate the 3-D model data.

  11. Spherical 3D isotropic wavelets

    NASA Astrophysics Data System (ADS)

    Lanusse, F.; Rassat, A.; Starck, J.-L.

    2012-04-01

    Context. Future cosmological surveys will provide 3D large scale structure maps with large sky coverage, for which a 3D spherical Fourier-Bessel (SFB) analysis in spherical coordinates is natural. Wavelets are particularly well-suited to the analysis and denoising of cosmological data, but a spherical 3D isotropic wavelet transform does not currently exist to analyse spherical 3D data. Aims: The aim of this paper is to present a new formalism for a spherical 3D isotropic wavelet, i.e. one based on the SFB decomposition of a 3D field and accompany the formalism with a public code to perform wavelet transforms. Methods: We describe a new 3D isotropic spherical wavelet decomposition based on the undecimated wavelet transform (UWT) described in Starck et al. (2006). We also present a new fast discrete spherical Fourier-Bessel transform (DSFBT) based on both a discrete Bessel transform and the HEALPIX angular pixelisation scheme. We test the 3D wavelet transform and as a toy-application, apply a denoising algorithm in wavelet space to the Virgo large box cosmological simulations and find we can successfully remove noise without much loss to the large scale structure. Results: We have described a new spherical 3D isotropic wavelet transform, ideally suited to analyse and denoise future 3D spherical cosmological surveys, which uses a novel DSFBT. We illustrate its potential use for denoising using a toy model. All the algorithms presented in this paper are available for download as a public code called MRS3D at http://jstarck.free.fr/mrs3d.html

  12. Context-adaptive based CU processing for 3D-HEVC

    PubMed Central

    Shen, Liquan; An, Ping; Liu, Zhi

    2017-01-01

    The 3D High Efficiency Video Coding (3D-HEVC) standard aims to code 3D videos that usually contain multi-view texture videos and the corresponding depth information. It inherits the same quadtree prediction structure of HEVC to code both texture videos and depth maps. Each coding unit (CU) can be recursively split into four equal sub-CUs. At each CU depth level, it enables 10 types of inter modes and 35 types of intra modes in inter frames. Furthermore, the inter-view prediction tools are applied to each view in the test model of 3D-HEVC (HTM), which uses variable-size disparity-compensated prediction to exploit inter-view correlation within neighboring views. It also exploits redundancies between a texture video and its associated depth using inter-component coding tools. These tools achieve the highest coding efficiency for coding 3D videos but require a very high computational complexity. In this paper, we propose a context-adaptive fast CU processing algorithm to jointly optimize the most complex components of HTM, including the CU depth level decision, mode decision, motion estimation (ME) and disparity estimation (DE) processes. It is based on the hypothesis that the optimal CU depth level, prediction mode and motion vector of a CU are correlated with those of spatiotemporal, inter-view and inter-component neighboring CUs. We analyze the video content based on coding information from neighboring CUs and early predict each CU into one of five categories, i.e., DE-omitted CU, ME-DE-omitted CU, SPLIT CU, Non-SPLIT CU and normal CU, and then each type of CU adaptively adopts different processing strategies. Experimental results show that the proposed algorithm saves 70% of encoder runtime on average with only a 0.1% BD-rate increase on coded views and a 0.8% BD-rate increase on synthesized views. Our algorithm outperforms the state-of-the-art algorithms in terms of coding time saving or with better RD performance. PMID:28182719
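
    The early-classification idea can be pictured as a small rule-based classifier over statistics of the neighbouring CUs. The rules and thresholds in the Python sketch below are purely illustrative assumptions (they are not the decision rules derived in the paper); the sketch only shows how a CU might be routed to one of the five processing categories.

      from statistics import mean

      def classify_cu(neighbors):
          """neighbors: list of dicts with keys 'depth', 'is_skip', 'uses_de'
          describing spatiotemporal, inter-view and inter-component neighbours.
          Returns one of: 'DE-omitted', 'ME-DE-omitted', 'SPLIT', 'Non-SPLIT', 'normal'."""
          if not neighbors:
              return 'normal'                        # no context available
          skip_ratio = mean(1.0 if n['is_skip'] else 0.0 for n in neighbors)
          de_ratio = mean(1.0 if n['uses_de'] else 0.0 for n in neighbors)
          avg_depth = mean(n['depth'] for n in neighbors)
          if skip_ratio > 0.8:
              return 'ME-DE-omitted'                 # homogeneous, static context
          if de_ratio < 0.1:
              return 'DE-omitted'                    # neighbours never use disparity
          if avg_depth >= 2.5:
              return 'SPLIT'                         # context favours small CUs
          if avg_depth <= 0.5:
              return 'Non-SPLIT'                     # context favours large CUs
          return 'normal'

      print(classify_cu([{'depth': 0, 'is_skip': True, 'uses_de': False}] * 5))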

  13. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  14. Visual saliency-based fast intracoding algorithm for high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Zhou, Xin; Shi, Guangming; Zhou, Wei; Duan, Zhemin

    2017-01-01

    Intraprediction has been significantly improved in high efficiency video coding over H.264/AVC with a quad-tree-based coding unit (CU) structure from size 64×64 to 8×8 and more prediction modes. However, these techniques cause a dramatic increase in computational complexity. An intracoding algorithm is proposed that consists of a perceptual fast CU size decision algorithm and a fast intraprediction mode decision algorithm. First, based on visual saliency detection, an adaptive and fast CU size decision method is proposed to alleviate intraencoding complexity. Furthermore, a fast intraprediction mode decision algorithm with a step-halving rough mode decision method and an early mode pruning algorithm is presented to selectively check the potential modes and effectively reduce the complexity of computation. Experimental results show that our proposed fast method reduces the computational complexity of the current HM to about 57% in encoding time with only a 0.37% increase in BD rate. Meanwhile, the proposed fast algorithm has reasonable peak signal-to-noise ratio losses and nearly the same subjective perceptual quality.
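
    One simple way to obtain a saliency map for such a CU-size decision is the spectral-residual detector; the Python sketch below computes it with NumPy/SciPy and then maps the mean saliency of one 64x64 CTU to a restricted CU depth range. The detector choice and the thresholds are illustrative assumptions, not the paper's saliency model.

      import numpy as np
      from scipy.ndimage import uniform_filter, gaussian_filter

      def spectral_residual_saliency(gray: np.ndarray) -> np.ndarray:
          """Spectral-residual saliency map of a grayscale frame, normalized to [0, 1]."""
          f = np.fft.fft2(gray.astype(np.float64))
          log_amp = np.log1p(np.abs(f))
          residual = log_amp - uniform_filter(log_amp, size=3)
          sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(f)))) ** 2
          sal = gaussian_filter(sal, sigma=3)
          return sal / sal.max()

      def ctu_depth_range(saliency: np.ndarray, y: int, x: int, size: int = 64):
          """Map the mean saliency of one CTU to a candidate CU depth range."""
          s = saliency[y:y + size, x:x + size].mean()
          if s < 0.1:
              return (0, 1)      # smooth, inconspicuous area: large CUs only
          if s < 0.3:
              return (0, 2)
          return (0, 3)          # salient area: allow the full quad-tree search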

  15. Video Traffic Characteristics of Modern Encoding Standards: H.264/AVC with SVC and MVC Extensions and H.265/HEVC

    PubMed Central

    2014-01-01

    Video encoding for multimedia services over communication networks has significantly advanced in recent years with the development of the highly efficient and flexible H.264/AVC video coding standard and its SVC extension. The emerging H.265/HEVC video coding standard as well as 3D video coding further advance video coding for multimedia communications. This paper first gives an overview of these new video coding standards and then examines their implications for multimedia communications by studying the traffic characteristics of long videos encoded with the new coding standards. We review video coding advances from MPEG-2 and MPEG-4 Part 2 to H.264/AVC and its SVC and MVC extensions as well as H.265/HEVC. For single-layer (nonscalable) video, we compare H.265/HEVC and H.264/AVC in terms of video traffic and statistical multiplexing characteristics. Our study is the first to examine the H.265/HEVC traffic variability for long videos. We also illustrate the video traffic characteristics and statistical multiplexing of scalable video encoded with the SVC extension of H.264/AVC as well as 3D video encoded with the MVC extension of H.264/AVC. PMID:24701145

  16. Video traffic characteristics of modern encoding standards: H.264/AVC with SVC and MVC extensions and H.265/HEVC.

    PubMed

    Seeling, Patrick; Reisslein, Martin

    2014-01-01

    Video encoding for multimedia services over communication networks has significantly advanced in recent years with the development of the highly efficient and flexible H.264/AVC video coding standard and its SVC extension. The emerging H.265/HEVC video coding standard as well as 3D video coding further advance video coding for multimedia communications. This paper first gives an overview of these new video coding standards and then examines their implications for multimedia communications by studying the traffic characteristics of long videos encoded with the new coding standards. We review video coding advances from MPEG-2 and MPEG-4 Part 2 to H.264/AVC and its SVC and MVC extensions as well as H.265/HEVC. For single-layer (nonscalable) video, we compare H.265/HEVC and H.264/AVC in terms of video traffic and statistical multiplexing characteristics. Our study is the first to examine the H.265/HEVC traffic variability for long videos. We also illustrate the video traffic characteristics and statistical multiplexing of scalable video encoded with the SVC extension of H.264/AVC as well as 3D video encoded with the MVC extension of H.264/AVC.

  17. Pixel-level Matching Based Multi-hypothesis Error Concealment Modes for Wireless 3D H.264/MVC Communication

    NASA Astrophysics Data System (ADS)

    El-Shafai, Walid

    2015-09-01

    3D multi-view video (MVV) consists of multiple video streams shot simultaneously by several cameras around a single scene. It is therefore an urgent task to achieve high 3D MVV compression to meet future bandwidth constraints while maintaining a high reception quality. 3D MVV coded bit-streams transmitted over wireless networks can suffer from error propagation in the space, time and view domains. Error concealment (EC) algorithms have the advantage of improving the received 3D video quality without any modifications to the transmission rate or to the encoder hardware or software. To improve the quality of reconstructed 3D MVV, we propose an efficient adaptive EC algorithm with multi-hypothesis modes to conceal the erroneous Macro-Blocks (MBs) of intra-coded and inter-coded frames by exploiting the spatial, temporal and inter-view correlations between frames and views. Our proposed algorithm adapts to 3D MVV motion features and to the error locations. The lost MBs are optimally recovered by utilizing motion and disparity matching between frames and views on a pixel-by-pixel basis. Our simulation results show that the proposed adaptive multi-hypothesis EC algorithm can significantly improve the objective and subjective 3D MVV quality.

  18. A study on H and O-H grid generation and associated flow codes for gas turbine 3D Navier Stokes analyses

    NASA Astrophysics Data System (ADS)

    Choi, D.; Knight, C. J.

    1991-06-01

    A method to generate H and O-H grid systems for 3D gas turbine geometries has been developed. It is a simple procedure which solves a set of elliptic equations starting from an initial grid system generated algebraically. This grid generation procedure is for 3D Navier-Stokes analysis based on the scalar or diagonalized form of approximate factorization. The grids generated by this procedure have been applied to 3D heat transfer calculations and compared with experimental results. Detailed comparisons are given for both H and O-H grid topologies, considering the Low Aspect Ratio Turbine (LART) and using a two-equation turbulence model with viscous sublayer resolution.

  19. Implementation of scalable video coding deblocking filter from high-level SystemC description

    NASA Astrophysics Data System (ADS)

    Carballo, Pedro P.; Espino, Omar; Neris, Romén.; Hernández-Fernández, Pedro; Szydzik, Tomasz M.; Núñez, Antonio

    2013-05-01

    This paper describes key concepts in the design and implementation of a deblocking filter (DF) for an H.264/SVC video decoder. The DF supports QCIF and CIF video formats with temporal and spatial scalability. The design flow starts from a SystemC functional model and is refined to an RTL microarchitecture using a high-level synthesis methodology. The process is guided by performance measurements (latency, cycle time, power, resource utilization) with the objective of assuring the quality of results of the final system. The functional model of the DF is created in an incremental way from the AVC DF model using the OpenSVC source code as reference. The design flow continues with logic synthesis and implementation on the FPGA using various strategies. The final implementation is chosen among the implementations that meet the timing constraints. The DF is capable of running at 100 MHz, and macroblocks are processed in 6,500 clock cycles for a throughput of 130 fps for the QCIF format and 37 fps for the CIF format. The proposed architecture for the complete H.264/SVC decoder is composed of an OMAP 3530 SOC (ARM Cortex-A8 GPP + DSP) and the FPGA Virtex-5 acting as a coprocessor for the DF implementation. The DF is connected to the OMAP SOC using the GPMC interface. A validation platform has been developed using the embedded PowerPC processor in the FPGA, composing a SoC that integrates frame generation and visualization on a TFT screen. The FPGA implements both the DF core and a GPMC slave core. Both cores are connected to the PowerPC440 embedded processor using LocalLink interfaces. The FPGA also contains a local memory capable of storing the information necessary to filter a complete frame and to store a decoded picture frame. The complete system is implemented in a Virtex5 FX70T device.

  20. Fast Mode Decision in the HEVC Video Coding Standard by Exploiting Region with Dominated Motion and Saliency Features

    PubMed Central

    Podder, Pallab Kanti; Paul, Manoranjan; Murshed, Manzur

    2016-01-01

    The emerging High Efficiency Video Coding (HEVC) standard introduces a number of innovative and powerful coding tools to acquire better compression efficiency compared to its predecessor H.264. The encoding time complexity has also increased severalfold, which is not suitable for real-time video coding applications. To address this limitation, this paper employs a novel coding strategy to reduce the time complexity in the HEVC encoder by efficient selection of appropriate block-partitioning modes based on human visual features (HVF). The HVF in the proposed technique comprise a human visual attention modelling-based saliency feature and phase correlation-based motion features. The features are innovatively combined through a fusion process by developing a content-based adaptive weighted cost function to determine the region with dominated motion/saliency (RDMS)-based binary pattern for the current block. The generated binary pattern is then compared with a codebook of predefined binary pattern templates aligned to the HEVC recommended block-partitioning to estimate a subset of inter-prediction modes. Without exhaustive exploration of all modes available in the HEVC standard, only the selected subset of modes are motion estimated and motion compensated for a particular coding unit. The experimental evaluation reveals that the proposed technique notably down-scales the average computational time of the latest HEVC reference encoder by 34% while providing similar rate-distortion (RD) performance for a wide range of video sequences. PMID:26963813
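
    The phase-correlation motion feature mentioned above can be computed per block with a few FFTs. The Python sketch below estimates the dominant integer displacement between two co-located blocks; the block size and peak handling are illustrative choices, not the paper's exact feature extraction.

      import numpy as np

      def phase_correlation_shift(block_a: np.ndarray, block_b: np.ndarray):
          """Dominant integer (dy, dx) displacement of block_a relative to
          block_b, taken from the peak of the phase-correlation surface."""
          fa = np.fft.fft2(block_a.astype(np.float64))
          fb = np.fft.fft2(block_b.astype(np.float64))
          cross = fa * np.conj(fb)
          cross /= np.maximum(np.abs(cross), 1e-12)        # keep phase only
          surface = np.fft.ifft2(cross).real
          dy, dx = np.unravel_index(np.argmax(surface), surface.shape)
          h, w = surface.shape
          if dy > h // 2:                                  # wrap negative shifts
              dy -= h
          if dx > w // 2:
              dx -= w
          return int(dy), int(dx)

      a = np.zeros((64, 64)); a[20:30, 20:30] = 1.0
      b = np.roll(np.roll(a, 3, axis=0), -5, axis=1)       # shift by (+3, -5)
      print(phase_correlation_shift(b, a))                 # -> (3, -5)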

  1. On the efficiency of image completion methods for intra prediction in video coding with large block structures

    NASA Astrophysics Data System (ADS)

    Doshkov, Dimitar; Jottrand, Oscar; Wiegand, Thomas; Ndjiki-Nya, Patrick

    2013-02-01

    Intra prediction is a fundamental tool in video coding with a hybrid block-based architecture. Recent investigations have shown that one of the most beneficial elements for higher compression performance in high-resolution videos is the incorporation of larger block structures. Thus, in this work, we investigate the performance of novel intra prediction modes based on different image completion techniques in a new video coding scheme with large block structures. Image completion methods exploit the fact that high-frequency image regions yield high coding costs when using classical H.264/AVC prediction modes. This problem is tackled by investigating the incorporation of several intra predictors using the concept of the Laplace partial differential equation (PDE), Least-Squares- (LS-) based linear prediction, and the autoregressive model. A major aspect of this article is the evaluation of the coding performance in a quantitative (i.e., coding efficiency) manner. Experimental results show significant improvements in compression (up to 7.41%) by integrating the LS-based linear intra prediction.
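
    As an illustration of the Laplace-PDE predictor idea (not the authors' encoder integration), the Python sketch below fills an unknown block by iterating Jacobi updates of Laplace's equation, using the already-decoded row above and column to the left as boundary conditions; the closure chosen for the unknown right and bottom edges is an assumption.

      import numpy as np

      def laplace_intra_predict(top_row: np.ndarray, left_col: np.ndarray,
                                size: int = 16, iters: int = 500) -> np.ndarray:
          """Predict a size x size block by solving Laplace's equation with
          the causal (top/left) neighbours as Dirichlet boundary values."""
          m = float(np.concatenate([top_row[:size], left_col[:size]]).mean())
          grid = np.full((size + 2, size + 2), m, dtype=np.float64)
          grid[0, 1:-1] = top_row[:size]       # decoded row above the block
          grid[1:-1, 0] = left_col[:size]      # decoded column left of the block
          # Right/bottom samples are not decoded yet; hold them at the mean
          # of the causal samples as a simple closure for this sketch.
          for _ in range(iters):
              grid[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                                         grid[1:-1, :-2] + grid[1:-1, 2:])
          return grid[1:-1, 1:-1]

      top = np.full(16, 120.0)
      left = np.linspace(100.0, 160.0, 16)
      predicted_block = laplace_intra_predict(top, left)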

  2. Fast inter-mode decision algorithm for high-efficiency video coding based on similarity of coding unit segmentation and partition mode between two temporally adjacent frames

    NASA Astrophysics Data System (ADS)

    Zhong, Guo-Yun; He, Xiao-Hai; Qing, Lin-Bo; Li, Yuan

    2013-04-01

    High-efficiency video coding (HEVC) introduces a flexible hierarchy of three block structures: coding unit (CU), prediction unit (PU), and transform unit (TU), which have brought about higher coding efficiency than the current international video coding standard H.264/advanced video coding (AVC). HEVC, however, simultaneously requires higher computational complexity than H.264/AVC, although several fast inter-mode decision methods were proposed during its development. To further reduce this complexity, a fast inter-mode decision algorithm is proposed based on temporal correlation. Because of the distinct differences in inter-prediction blocks between HEVC and H.264/AVC, in order to use temporal correlation to speed up inter prediction, the correlation of inter prediction between two adjacent frames needs to be analyzed according to the CU and PU structure of HEVC. The probabilities of all the partition modes in all CU sizes and the similarity of CU segmentation and partition modes between two adjacent frames are tested. The correlation of partition modes between two CUs with different sizes in two adjacent frames is tested and analyzed. Based on the characteristics tested and analyzed, at most two prior partition modes are evaluated for each CU level, which reduces the number of rate-distortion cost calculations. The simulation results show that the proposed algorithm further reduces coding time by 33.0% to 43.3%, with negligible loss in bitrate and peak signal-to-noise ratio, on the basis of the fast inter-mode decision algorithms in the current HEVC reference software HM7.0.

  3. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  4. Infrastructure for 3D Imaging Test Bed

    DTIC Science & Technology

    2007-05-11

    analysis. (c) Real-time detection and analysis of human gait: using a video camera, we capture walking human silhouettes for pattern modeling and gait analysis. Fig. 5 shows the scanning result, which is fed into a Geomagic software tool for 3D meshing.

  5. Bitstream decoding processor for fast entropy decoding of variable length coding-based multiformat videos

    NASA Astrophysics Data System (ADS)

    Jo, Hyunho; Sim, Donggyu

    2014-06-01

    We present a bitstream decoding processor for entropy decoding of variable length coding-based multiformat videos. Since most of the computational complexity of entropy decoders comes from bitstream accesses and the table look-up process, the developed bitstream processing unit (BsPU) has several designated instructions to access bitstreams and to minimize branch operations in the table look-up process. In addition, the instruction for bitstream access has the capability to remove emulation prevention bytes (EPBs) of H.264/AVC without initial delay, repeated memory accesses, or an additional buffer. Experimental results show that the proposed method for EPB removal achieves a speed-up of 1.23 times compared to the conventional EPB removal method. In addition, the BsPU achieves speed-ups of 5.6 and 3.5 times in entropy decoding of H.264/AVC and MPEG-4 Visual bitstreams, respectively, compared to an existing processor without the designated instructions and the new table mapping algorithm. The BsPU is implemented on a Xilinx Virtex5 LX330 field-programmable gate array. The MPEG-4 Visual (ASP, Level 5) and H.264/AVC (Main Profile, Level 4) are processed using the developed BsPU with a core clock speed of under 250 MHz in real time.
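
    Emulation-prevention-byte removal itself is a simple byte-level transform; the contribution described above is performing it with dedicated instructions, no initial delay, and no extra buffer. For reference, a plain Python version of the transform (every 0x03 that follows two zero bytes inside the NAL payload is dropped) looks like this:

      def remove_emulation_prevention_bytes(nal_payload: bytes) -> bytes:
          """Convert an H.264 EBSP payload to RBSP by dropping the 0x03 byte
          in every 0x00 0x00 0x03 pattern."""
          out = bytearray()
          zeros = 0
          for b in nal_payload:
              if zeros >= 2 and b == 0x03:
                  zeros = 0                # drop the emulation prevention byte
                  continue
              out.append(b)
              zeros = zeros + 1 if b == 0x00 else 0
          return bytes(out)

      # 0x00 0x00 0x03 0x01 in the bitstream encodes the RBSP bytes 0x00 0x00 0x01
      assert remove_emulation_prevention_bytes(b"\x00\x00\x03\x01") == b"\x00\x00\x01"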

  6. Stereoscopic Visual Attention-Based Regional Bit Allocation Optimization for Multiview Video Coding

    NASA Astrophysics Data System (ADS)

    Zhang, Yun; Jiang, Gangyi; Yu, Mei; Chen, Ken; Dai, Qionghai

    2010-12-01

    We propose a Stereoscopic Visual Attention- (SVA-) based regional bit allocation optimization for Multiview Video Coding (MVC) by exploiting visual redundancies in human perception. We propose a novel SVA model, where multiple perceptual stimuli including depth, motion, intensity, color, and orientation contrast are utilized, to simulate the visual attention mechanisms of the human visual system with stereoscopic perception. Then, a semantic region-of-interest (ROI) is extracted based on the saliency maps of SVA. Both objective and subjective evaluations of extracted ROIs indicated that the proposed SVA-model-based ROI extraction scheme outperforms schemes using only spatial and/or temporal visual attention cues. Finally, by using the extracted SVA-based ROIs, a regional bit allocation optimization scheme is presented to allocate more bits to SVA-based ROIs for high image quality and fewer bits to background regions for efficient compression. Experimental results on MVC show that the proposed regional bit allocation algorithm can achieve over [InlineEquation not available: see fulltext.]% bit-rate saving while maintaining the subjective image quality. Meanwhile, the image quality of the ROIs is improved by [InlineEquation not available: see fulltext.] dB at the cost of imperceptible image quality degradation of the background image.

  7. Using game theory for perceptual tuned rate control algorithm in video coding

    NASA Astrophysics Data System (ADS)

    Luo, Jiancong; Ahmad, Ishfaq

    2005-03-01

    This paper proposes a game theoretical rate control technique for video compression. Using a cooperative gaming approach, which has been utilized in several branches of the natural and social sciences because of its enormous potential for solving constrained optimization problems, we propose a dual-level scheme to optimize the perceptual quality while guaranteeing 'fairness' in bit allocation among macroblocks. At the frame level, the algorithm allocates target bits to frames based on their coding complexity. At the macroblock level, the algorithm distributes bits to macroblocks by defining a bargaining game. Macroblocks play cooperatively to compete for shares of resources (bits) to optimize their quantization scales while considering the Human Visual System's perceptual properties. Since the whole frame is an entity perceived by viewers, macroblocks compete cooperatively under a global objective of achieving the best quality with the given bit constraint. The major advantage of the proposed approach is that the cooperative game leads to an optimal and fair bit allocation strategy based on the Nash Bargaining Solution. Another advantage is that it allows multi-objective optimization with multiple decision makers (macroblocks). The simulation results confirm the algorithm's ability to achieve an accurate bit rate with good perceptual quality and to maintain a stable buffer level.
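
    A toy version of the bargaining-game allocation, solved numerically with SciPy: each macroblock's utility grows logarithmically with its bit share, and the Nash Bargaining Solution maximizes the product of utility gains over the disagreement point subject to the frame bit budget. The utility model, weights, and budget below are illustrative assumptions, not the paper's formulation.

      import numpy as np
      from scipy.optimize import minimize

      def nash_bargaining_bits(weights, budget, disagreement=0.0):
          """Allocate 'budget' bits among macroblocks by maximizing
          prod_i (u_i(b_i) - d) with u_i(b) = w_i * log(1 + b)."""
          w = np.asarray(weights, dtype=float)
          n = len(w)

          def neg_log_nash_product(b):
              u = w * np.log1p(b)
              return -np.sum(np.log(u - disagreement + 1e-12))

          res = minimize(neg_log_nash_product,
                         x0=np.full(n, budget / n),
                         method='SLSQP',
                         bounds=[(1e-6, budget)] * n,
                         constraints=[{'type': 'eq',
                                       'fun': lambda b: np.sum(b) - budget}])
          return res.x

      # Three macroblocks with different perceptual weights, 3000-bit budget
      print(nash_bargaining_bits([1.0, 2.0, 0.5], budget=3000))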

  8. Time-dependent distribution functions and resulting synthetic NPA spectra in C-Mod calculated with the CQL3D-Hybrid-FOW, AORSA full-wave, and DC Lorentz codes

    NASA Astrophysics Data System (ADS)

    Harvey, R. W.; Petrov, Yu.; Jaeger, E. F.; Berry, L. A.; Bonoli, P. T.; Bader, A.

    2015-12-01

    A time-dependent simulation of C-Mod pulsed TCRF power is made obtaining minority hydrogen ion distributions with the CQL3D-Hybrid-FOW finite-orbit-width Fokker-Planck code. Cyclotron-resonant TCRF fields are calculated with the AORSA full wave code. The RF diffusion coefficients used in CQL3D are obtained with the DC Lorentz gyro-orbit code for perturbed particle trajectories in the combined equilibrium and TCRF electromagnetic fields. Prior results with a zero-banana-width simulation using the CQL3D/AORSA/DC time-cycles showed a pronounced enhancement of the H distribution in the perpendicular velocity direction compared to results obtained from Stix's quasilinear theory, and this substantially increased the ramp-up rate of the observed vertically-viewed neutral particle analyzer (NPA) flux, in general agreement with experiment. However, the ramp-down of the NPA flux after the pulse remained long compared to the experiment. The present study compares the new FOW results, including relevant gyro-radius effects, to determine the importance of these new effects on the NPA time-dependence.

  9. 3D field harmonics

    SciTech Connect

    Caspi, S.; Helm, M.; Laslett, L.J.

    1991-03-30

    We have developed a harmonic representation for the three-dimensional field components within the windings of accelerator magnets. The form by which the field is presented is suitable for interfacing with other codes that make use of the 3D field components (particle tracking and stability). The field components can be calculated with high precision and reduced CPU time at any location (r, θ, z) inside the magnet bore. The same conductor geometry which is used to simulate line currents is also used in CAD with modifications more readily available. It is our hope that the format used here for magnetic fields can be used not only as a means of delivering fields but also as a way by which beam dynamics can suggest corrections to the conductor geometry. 5 refs., 70 figs.

  10. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    PubMed

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet. This is probably due to the fact that RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, due to employing the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, which is a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method so that there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one instead of n coefficients entries into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations. Peers therefore sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulations using OMNeT++ show that it substantially outperforms RNC with the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay.

  11. MATIN: A Random Network Coding Based Framework for High Quality Peer-to-Peer Live Video Streaming

    PubMed Central

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet. This is probably due to the fact that RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, due to employing the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, which is a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method so that there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one instead of n coefficients entries into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations. Peers therefore sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulations using OMNeT++ show that it substantially outperforms RNC with the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay. PMID:23940530

  12. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  13. An Assessment of Some Design Constraints on Heat Production of a 3D Conceptual EGS Model Using an Open-Source Geothermal Reservoir Simulation Code

    SciTech Connect

    Yidong Xia; Mitch Plummer; Robert Podgorney; Ahmad Ghassemi

    2016-02-01

    The performance of the heat production process over a 30-year period is assessed in a conceptual EGS model with a geothermal gradient of 65 K per km depth in the reservoir. Water is circulated through a pair of parallel wells connected by a set of single large wing fractures. The results indicate that the desirable output electric power rate and lifespan could be obtained under suitable material properties and system parameters. A sensitivity analysis on some design constraints and operation parameters indicates that 1) the horizontal fracture spacing has a profound effect on the long-term performance of heat production, 2) the downward deviation angle for the parallel doublet wells may help overcome the difficulty of vertical drilling to reach a favorable production temperature, and 3) the thermal energy production rate and lifespan depend closely on the water mass flow rate. The results also indicate that heat production can be improved when the horizontal fracture spacing, well deviation angle, and production flow rate are under reasonable conditions. To conduct the reservoir modeling and simulations, an open-source, finite element based, fully implicit, fully coupled hydrothermal code, namely FALCON, has been developed and used in this work. Compared with most other existing codes in this area that are either closed-source or commercially available, this new open-source code has demonstrated a code development strategy that aims to provide unparalleled ease of user customization and multi-physics coupling. Test results have shown that the FALCON code is able to complete the long-term tests efficiently and accurately, thanks to the state-of-the-art nonlinear and linear solver algorithms implemented in the code.

  14. GPM 3D Flyby Video of Lester

    NASA Video Gallery

    On Aug. 25, GPM found rain was falling at a rate of over 54 mm (2.1 inches) per hour in rain bands east of Lester's center. Cloud top heights were reaching about 12km (7.4 miles) in the tallest sto...

  15. Comparison of the 3-D Deterministic Neutron Transport Code Attila® To Measure Data, MCNP And MCNPX For The Advanced Test Reactor

    SciTech Connect

    D. Scott Lucas; D. S. Lucas

    2005-09-01

    An LDRD (Laboratory Directed Research and Development) project is underway at the Idaho National Laboratory (INL) to apply the three-dimensional multi-group deterministic neutron transport code (Attila®) to criticality, flux and depletion calculations of the Advanced Test Reactor (ATR). This paper discusses the development of Attila models for ATR, capabilities of Attila, the generation and use of different cross-section libraries, and comparisons to ATR data, MCNP, MCNPX and future applications.

  16. An overview of new video coding tools under consideration for VP10: the successor to VP9

    NASA Astrophysics Data System (ADS)

    Mukherjee, Debargha; Su, Hui; Bankoski, James; Converse, Alex; Han, Jingning; Liu, Zoe; Xu, Yaowu

    2015-09-01

    Google started an open-source project, entitled the WebM Project, in 2010 to develop royalty-free video codecs for the web. The present-generation codec developed in the WebM project, called VP9, was finalized in mid-2013 and is currently being served extensively by YouTube, resulting in billions of views per day. Even though adoption of VP9 outside Google is still in its infancy, the WebM project has already embarked on an ambitious project to develop a next-edition codec, VP10, that achieves at least a generational bitrate reduction over the current-generation codec VP9. Although the project is still in early stages, a set of new experimental coding tools have already been added to baseline VP9 to achieve modest coding gains over a large enough test set. This paper provides a technical overview of these coding tools.

  17. 3d-3d correspondence revisited

    DOE PAGES

    Chung, Hee -Joong; Dimofte, Tudor; Gukov, Sergei; ...

    2016-04-21

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. As a result, we also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  18. Free viewpoint video generation based on coding information of H.264/AVC

    NASA Astrophysics Data System (ADS)

    Lin, Chi-Kun; Hung, Yu-Chen; Tang, Chia-Tong; Hwang, Jenq-Neng; Yang, Jar-Ferr

    2010-07-01

    Free viewpoint television (FTV) is a new technology that allows viewers to change view angles freely while watching TV programs. FTV requires strong support from a multi-view video codec (MVC), such as H.264/MVC defined by the Joint Video Team (JVT). In this paper, we propose an FTV system which can produce videos as perceived from any view angle based on a limited number of viewpoint videos decoded from H.264/MVC bitstreams. In this system, the decoded disparity vectors and motion vectors are diffused to produce smooth disparity fields for virtual view reconstruction. Decoded residue data under motion compensation are used as a match criterion. The proposed system not only greatly reduces the computation burden in creating FTV, but also improves the synthesized viewing quality thanks to the quarter-pixel precision of H.264.

  19. Immersive video

    NASA Astrophysics Data System (ADS)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  20. Validation of two 3-D numerical computation codes for the flows in an annular cascade of high turning angle turbine blades

    NASA Astrophysics Data System (ADS)

    Wensheng, Wang; Fengxian, Zhang; Yanji, Xu; Naixing, Chen

    This paper describes and validates two improved three-dimensional numerical methods employed for calculating the flows in an annular cascade of high turning angle turbine blades tested by the authors in the annular cascade wind tunnel of the Institute of Engineering Thermophysics. Comparisons between the predictions and measurements were made on the static pressure contours of the blade pressure and suction surfaces, and the spanwise distributions of pitchwise area-averaged static pressure coefficient and flow angle downstream of the cascade. The agreement between the calculated results and the experimental data is good, validating the reliability and applicability of the computation codes.

  1. SHAPEMOL: a 3D code for calculating CO line emission in planetary and protoplanetary nebulae. Detailed model-fitting of the complex nebula NGC 6302

    NASA Astrophysics Data System (ADS)

    Santander-García, M.; Bujarrabal, V.; Koning, N.; Steffen, W.

    2015-01-01

    Context. Modern instrumentation in radioastronomy constitutes a valuable tool for studying the Universe: ALMA has reached unprecedented sensitivities and spatial resolution, while Herschel/HIFI has opened a new window (most of the sub-mm and far-infrared ranges are only accessible from space) for probing molecular warm gas (~50-1000 K). On the other hand, the software SHAPE has emerged in the past few years as a standard tool for determining the morphology and velocity field of different kinds of gaseous emission nebulae via spatio-kinematical modelling. Standard SHAPE implements radiative transfer solving, but it is only available for atomic species and not for molecules. Aims: Aware of the growing importance of developing tools that simplify the analysis of molecular data from new-era observatories, we introduce the computer code shapemol, a complement to SHAPE, with which we intend to fill the so-far under-developed molecular niche. Methods: shapemol enables user-friendly, spatio-kinematic modelling with accurate non-LTE calculations of excitation and radiative transfer in CO lines. Currently, it allows radiative transfer solving in the 12CO and 13CO J = 1-0 to J = 17-16 lines, but its implementation permits easily extending the code to different transitions and other molecular species, either by the code developers or by the user. Used alongside SHAPE, shapemol allows synthetic maps to be easily generated for testing against interferometric observations, as well as synthetic line profiles to match single-dish observations. Results: We give a full description of how shapemol works, and we discuss its limitations and the sources of uncertainty to be expected in the final synthetic profiles or maps. As an example of the power and versatility of shapemol, we build a model of the molecular envelope of the planetary nebula NGC 6302 and compare it with 12CO and 13CO J = 2-1 interferometric maps from SMA and high-J transitions from Herschel/HIFI. We find the

  2. Perceptual quality-regulable video coding system with region-based rate control scheme.

    PubMed

    Wu, Guan-Lin; Fu, Yu-Jie; Huang, Sheng-Chieh; Chien, Shao-Yi

    2013-06-01

    In this paper, we discuss a region-based perceptual quality-regulable H.264 video encoder system that we developed. The ability to adjust the quality of specific regions of a source video to a predefined level of quality is an essential technique for region-based video applications. We use the structural similarity index as the quality metric for distortion-quantization modeling and develop a bit allocation and rate control scheme for enhancing regional perceptual quality. Exploiting the relationship between the reconstructed macroblock and the best predicted macroblock from mode decision, a novel quantization parameter prediction method is built and used to achieve the target video quality of the processed macroblock. Experimental results show that the system model has an average quality error of only 0.013. Moreover, the proposed region-based rate control system can encode video well under a bitrate constraint, with an average bitrate error of 0.1%. Under a low bitrate constraint, the proposed system can encode video with a 0.5% bitrate error on average while enhancing the quality of the target regions.
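
    As an illustration of the quality metric referenced above, the structural similarity (SSIM) index between a source macroblock and its reconstruction can be computed with the standard single-window formula. The sketch below is not the authors' implementation; the block size, constants, and variable names are illustrative assumptions.

      import numpy as np

      def ssim_block(x, y, data_range=255.0):
          """Single-window SSIM between two equal-sized grayscale blocks."""
          c1 = (0.01 * data_range) ** 2
          c2 = (0.03 * data_range) ** 2
          x = x.astype(np.float64)
          y = y.astype(np.float64)
          mu_x, mu_y = x.mean(), y.mean()
          cov_xy = ((x - mu_x) * (y - mu_y)).mean()
          num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
          den = (mu_x ** 2 + mu_y ** 2 + c1) * (x.var() + y.var() + c2)
          return num / den

      # Hypothetical 16x16 macroblock and a lightly distorted reconstruction.
      src = np.random.randint(0, 256, (16, 16))
      rec = np.clip(src + np.random.randint(-5, 6, (16, 16)), 0, 255)
      print(ssim_block(src, rec))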

  3. Fast luminance and chrominance correction based on motion compensated linear regression for multi-view video coding

    NASA Astrophysics Data System (ADS)

    Chen, Wei-Yin; Ding, Li-Fu; Chen, Liang-Gee

    2007-01-01

    Luminance and chrominance correction (LCC) is important in multi-view video coding (MVC) because it provides better rate-distortion performance when encoding video sequences captured by ill-calibrated multi-view cameras. This paper presents a robust and fast LCC algorithm based on motion-compensated linear regression, which reuses the motion information from the encoder. We adopt the linear weighted prediction model in H.264/AVC as our LCC model. In our experiments, the proposed LCC algorithm outperforms the basic histogram matching method by up to 0.4 dB with only a small computational overhead and zero external memory bandwidth. The dataflow of this method is therefore suitable for low-bandwidth/low-power VLSI design for future multi-view applications.
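
    To make the linear weighted-prediction model concrete, the gain w and offset o in cur ≈ w·ref + o can be fitted by least squares over motion-compensated sample pairs. This is a minimal sketch under that assumption, not the paper's code; the names and toy data are hypothetical.

      import numpy as np

      def fit_weighted_prediction(ref_samples, cur_samples):
          """Least-squares fit of cur ~ w * ref + o over motion-compensated sample pairs."""
          a = np.column_stack([ref_samples.ravel(), np.ones(ref_samples.size)])
          (w, o), *_ = np.linalg.lstsq(a, cur_samples.ravel().astype(np.float64), rcond=None)
          return w, o

      # Toy data standing in for co-located luma samples from two views.
      ref = np.random.randint(0, 256, 1024).astype(np.float64)
      cur = np.clip(1.05 * ref - 3.0 + np.random.randn(1024), 0, 255)
      w, o = fit_weighted_prediction(ref, cur)
      print(f"gain = {w:.3f}, offset = {o:.2f}")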

  4. RT3D tutorials for GMS users

    SciTech Connect

    Clement, T.P.; Jones, N.L.

    1998-02-01

    RT3D (Reactive Transport in 3-Dimensions) is a computer code that solves coupled partial differential equations that describe reactive flow and transport of multiple mobile and/or immobile species in three-dimensional saturated porous media. RT3D was developed from the single-species transport code, MT3D (DoD-1.5, 1997 version). As with MT3D, RT3D also uses the USGS groundwater flow model MODFLOW for computing spatial and temporal variations in groundwater head distribution. This report presents a set of tutorial problems that are designed to illustrate how RT3D simulations can be performed within the Department of Defense Groundwater Modeling System (GMS). GMS serves as a pre- and post-processing interface for RT3D. GMS can be used to define all the input files needed by the RT3D code, and later the code can be launched from within GMS and run as a separate application. Once the RT3D simulation is completed, the solution can be imported to GMS for graphical post-processing. RT3D v1.0 supports several reaction packages that can be used for simulating different types of reactive contaminants. Each of the tutorials, described below, provides training on a different RT3D reaction package. Each reaction package has different input requirements, and the tutorials are designed to describe these differences. Furthermore, the tutorials illustrate the various options available in GMS for graphical post-processing of RT3D results. Users are strongly encouraged to complete the tutorials before attempting to use RT3D and GMS on a routine basis.

  5. HST3D; a computer code for simulation of heat and solute transport in three-dimensional ground-water flow systems

    USGS Publications Warehouse

    Kipp, K.L.

    1987-01-01

    The Heat- and Solute-Transport Program (HST3D) simulates groundwater flow and associated heat and solute transport in three dimensions. The three governing equations are coupled through the interstitial pore velocity, the dependence of the fluid density on pressure, temperature, and solute-mass fraction, and the dependence of the fluid viscosity on temperature and solute-mass fraction. The solute-transport equation is for only a single solute species with possible linear equilibrium sorption and linear decay. Finite difference techniques are used to discretize the governing equations using a point-distributed grid. The flow-, heat- and solute-transport equations are solved, in turn, after a partial Gauss-reduction scheme is used to modify them. The modified equations are more tightly coupled and have better stability for the numerical solutions. The basic source-sink term represents wells. A complex well flow model may be used to simulate specified flow rate and pressure conditions at the land surface or within the aquifer, with or without pressure and flow rate constraints. Boundary condition types offered include specified value, specified flux, leakage, heat conduction, an approximate free surface, and two types of aquifer influence functions. All boundary conditions can be functions of time. Two techniques are available for solution of the finite difference matrix equations. One technique is a direct-elimination solver, using equations reordered by alternating diagonal planes. The other technique is an iterative solver, using two-line successive over-relaxation. A restart option is available for storing intermediate results and restarting the simulation at an intermediate time with modified boundary conditions. This feature also can be used as protection against computer system failure. Data input and output may be in metric (SI) units or inch-pound units. Output may include tables of dependent variables and parameters, zoned-contour maps, and plots of the
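
    As a generic illustration of the iterative solver family mentioned above (plain point successive over-relaxation, not HST3D's two-line variant; the small test system below is made up):

      import numpy as np

      def sor(a, b, omega=1.5, tol=1e-10, max_iter=10_000):
          """Generic point SOR iteration for a x = b (a must have a nonzero diagonal)."""
          n = len(b)
          x = np.zeros(n)
          for _ in range(max_iter):
              x_old = x.copy()
              for i in range(n):
                  # Use already-updated values for j < i and old values for j > i.
                  sigma = a[i, :i] @ x[:i] + a[i, i + 1:] @ x_old[i + 1:]
                  x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / a[i, i]
              if np.linalg.norm(x - x_old, np.inf) < tol:
                  break
          return x

      # Small diagonally dominant test system.
      a = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
      b = np.array([1.0, 2.0, 3.0])
      print(sor(a, b))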

  6. Real-Time Motion Capture Toolbox (RTMocap): an open-source code for recording 3-D motion kinematics to study action-effect anticipations during motor and social interactions.

    PubMed

    Lewkowicz, Daniel; Delevoye-Turrell, Yvonne

    2016-03-01

    We present here a toolbox for the real-time motion capture of biological movements that runs in the cross-platform MATLAB environment (The MathWorks, Inc., Natick, MA). It provides instantaneous processing of the 3-D movement coordinates of up to 20 markers at a single instant. Available functions include (1) the setting of reference positions, areas, and trajectories of interest; (2) recording of the 3-D coordinates for each marker over the trial duration; and (3) the detection of events to use as triggers for external reinforcers (e.g., lights, sounds, or odors). Through fast online communication between the hardware controller and RTMocap, automatic trial selection is possible by means of either a preset or an adaptive criterion. Rapid preprocessing of signals is also provided, which includes artifact rejection, filtering, spline interpolation, and averaging. A key example is detailed, and three typical variations are developed (1) to provide a clear understanding of the importance of real-time control for 3-D motion in cognitive sciences and (2) to present users with simple lines of code that can be used as starting points for customizing experiments using the simple MATLAB syntax. RTMocap is freely available (http://sites.google.com/site/RTMocap/) under the GNU public license for noncommercial use and open-source development, together with sample data and extensive documentation.
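
    To illustrate the kind of event detection used to trigger external reinforcers (a hypothetical sketch in Python; the toolbox itself runs in MATLAB, and the region parameters below are invented):

      import numpy as np

      def entered_region(marker_xyz, center, radius):
          """True when a marker's 3-D position falls within a spherical region of interest."""
          return float(np.linalg.norm(marker_xyz - center)) <= radius

      # Hypothetical stream of marker positions (one 3-D sample per frame, in metres).
      target = np.array([0.30, 0.10, 0.05])
      for frame, pos in enumerate(np.random.uniform(-0.5, 0.5, size=(100, 3))):
          if entered_region(pos, target, radius=0.05):
              print(f"trigger external reinforcer at frame {frame}")
              break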

  7. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
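
    As a sketch of the PCA feature-projection step on normalized 3D faces (illustrative only; the distributed code is MATLAB/C++ and also covers FLDA, ICP alignment, and the other encoding options, and the data shapes below are assumptions):

      import numpy as np

      def pca_fit(faces, n_components):
          """faces: (n_samples, n_points * 3) matrix of flattened, normalized XYZ coordinates."""
          mean = faces.mean(axis=0)
          _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
          return mean, vt[:n_components]          # principal axes as rows

      def pca_project(face, mean, axes):
          """Project one flattened face onto the learned principal axes."""
          return axes @ (face - mean)

      # Hypothetical data: 50 normalized faces, 500 surface points each.
      faces = np.random.randn(50, 500 * 3)
      mean, axes = pca_fit(faces, n_components=20)
      features = pca_project(faces[0], mean, axes)
      print(features.shape)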

  8. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has developed a personal method of introduction to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present some of the different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared with that of the 1990s, the holographic concept is spreading into all scientific, social, and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How should the non-visible relations of moving objects with subjects be handled? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? To what end? At what level? In which subject matter? For whom?

  9. Watermarking 3D Objects for Verification

    DTIC Science & Technology

    1999-01-01

    signal (audio/image/video) processing and steganography fields, and even newer to the computer graphics community. Inherently, digital watermarking of... Many view digital watermarking as a potential solution for copyright protection of valuable digital materials like CD-quality audio, publication... watermark. The object can be an image, an audio clip, a video clip, or a 3D model. Some papers discuss watermarking other forms of multimedia data

  10. Static & Dynamic Response of 3D Solids

    SciTech Connect

    Lin, Jerry

    1996-07-15

    NIKE3D is a large deformations 3D finite element code used to obtain the resulting displacements and stresses from multi-body static and dynamic structural thermo-mechanics problems with sliding interfaces. Many nonlinear and temperature dependent constitutive models are available.

  11. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  12. Structured Set Intra Prediction With Discriminative Learning in a Max-Margin Markov Network for High Efficiency Video Coding

    PubMed Central

    Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen

    2014-01-01

    This paper proposes a novel model on intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829

  13. Molecular evolution of VP3, VP1, 3C(pro) and 3D(pol) coding regions in coxsackievirus group A type 24 variant isolates from acute hemorrhagic conjunctivitis in 2011 in Okinawa, Japan.

    PubMed

    Nidaira, Minoru; Kuba, Yumani; Saitoh, Mika; Taira, Katsuya; Maeshiro, Noriyuki; Mahoe, Yoko; Kyan, Hisako; Takara, Taketoshi; Okano, Sho; Kudaka, Jun; Yoshida, Hiromu; Oishi, Kazunori; Kimura, Hirokazu

    2014-04-01

    A large acute hemorrhagic conjunctivitis (AHC) outbreak occurred in 2011 in Okinawa Prefecture in Japan. Ten strains of coxsackievirus group A type 24 variant (CA24v) were isolated from patients with AHC, and full sequence analysis of the VP3, VP1, 3C(pro) and 3D(pol) coding regions was performed. To assess time-scale evolution, phylogenetic analysis was performed using the Bayesian Markov chain Monte Carlo method. In addition, similarity plots were constructed, and pairwise distance (p-distance) and positive selection pressure analyses were performed. A phylogenetic tree based on the VP1 coding region showed that the present strains belong to genotype 4 (G4). In addition, the present strains may have diverged around 2010 from the same lineages detected in other countries such as China, India and Australia. The mean rates of molecular evolution of the four coding regions were estimated at about 6.15 to 7.86 × 10(-3) substitutions/site/year. Similarity plot analyses suggested that nucleotide similarities between the present strains and a prototype strain (EH24/70 strain) were 0.77-0.94. The p-distance of the present strains was relatively short (<0.01). Only one positively selected site (L25H) was identified in the VP1 protein. These findings suggest that the present CA24v strains causing AHC are genetically related to other AHC strains, evolved rapidly, and emerged around 2010.
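
    For reference, the p-distance reported above is simply the proportion of aligned nucleotide sites that differ between two sequences; a minimal sketch (toy sequences, not CA24v data):

      def p_distance(seq1, seq2):
          """Proportion of nucleotide sites that differ between two aligned, equal-length sequences."""
          if len(seq1) != len(seq2):
              raise ValueError("sequences must be aligned to the same length")
          diffs = sum(a != b for a, b in zip(seq1, seq2))
          return diffs / len(seq1)

      # Toy example: 1 difference over 9 sites, p-distance ~ 0.111.
      print(p_distance("ATGGCTAAC", "ATGGCAAAC"))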

  14. An Improved Version of TOPAZ 3D

    SciTech Connect

    Krasnykh, Anatoly

    2003-07-29

    An improved version of the TOPAZ 3D gun code is presented as a powerful tool for beam optics simulation. In contrast to the previous version of TOPAZ 3D, the geometry of the device under test is introduced into TOPAZ 3D directly from a CAD program, such as Solid Edge or AutoCAD. In order to have this new feature, an interface was developed, using the GiD software package as a meshing code. The article describes this method with two models to illustrate the results.

  15. Explicit 3-D Hydrodynamic FEM Program

    SciTech Connect

    2000-11-07

    DYNA3D is a nonlinear explicit finite element code for analyzing 3-D structures and solid continuum. The code is vectorized and available on several computer platforms. The element library includes continuum, shell, beam, truss and spring/damper elements to allow maximum flexibility in modeling physical problems. Many materials are available to represent a wide range of material behavior, including elasticity, plasticity, composites, thermal effects and rate dependence. In addition, DYNA3D has a sophisticated contact interface capability, including frictional sliding, single surface contact and automatic contact generation.

  16. 3-D transient analysis of pebble-bed HTGR by TORT-TD/ATTICA3D

    SciTech Connect

    Seubert, A.; Sureda, A.; Lapins, J.; Buck, M.; Bader, J.; Laurien, E.

    2012-07-01

    As most of the acceptance criteria are local core parameters, application of transient 3-D fine mesh neutron transport and thermal hydraulics coupled codes is mandatory for best estimate evaluations of safety margins. This also applies to high-temperature gas cooled reactors (HTGR). Application of 3-D fine-mesh transient transport codes using few energy groups coupled with 3-D thermal hydraulics codes becomes feasible in view of increasing computing power. This paper describes the discrete ordinates based coupled code system TORT-TD/ATTICA3D that has recently been extended by a fine-mesh diffusion solver. Based on transient analyses for the PBMR-400 design, the transport/diffusion capabilities are demonstrated and 3-D local flux and power redistribution effects during a partial control rod withdrawal are shown. (authors)

  17. BEAMS3D Neutral Beam Injection Model

    SciTech Connect

    Lazerson, Samuel

    2014-04-14

    With the advent of applied 3D fields in tokamaks and modern high performance stellarators, a need has arisen to address non-axisymmetric effects on neutral beam heating and fueling. We report on the development of a fully 3D neutral beam injection (NBI) model, BEAMS3D, which addresses this need by coupling 3D equilibria to a guiding center code capable of modeling neutral and charged particle trajectories across the separatrix and into the plasma core. Ionization, neutralization, charge-exchange, viscous velocity reduction, and pitch angle scattering are modeled with the ADAS atomic physics database [1]. Benchmark calculations are presented to validate the collisionless particle orbits, neutral beam injection model, frictional drag, and pitch angle scattering effects. A calculation of neutral beam heating in the NCSX device is performed, highlighting the capability of the code to handle 3D magnetic fields.

  18. CASTOR3D: linear stability studies for 2D and 3D tokamak equilibria

    NASA Astrophysics Data System (ADS)

    Strumberger, E.; Günter, S.

    2017-01-01

    The CASTOR3D code, which is currently under development, is able to perform linear stability studies for 2D and 3D, ideal and resistive tokamak equilibria in the presence of ideal and resistive wall structures and coils. For these computations, ideal equilibria represented by concentric nested flux surfaces serve as input (e.g. computed with the NEMEC code). Solving an extended eigenvalue problem, the CASTOR3D code takes plasma inertia and wall resistivity into account simultaneously. The code is a hybrid of the CASTOR_3DW stability code and the STARWALL code. The former is an extended version of the CASTOR and CASTOR_FLOW codes. The latter is a linear 3D code computing the growth rates of resistive wall modes in the presence of multiply-connected wall structures. The CASTOR_3DW code and some parts of the STARWALL code have been reformulated in a general 3D flux coordinate representation that allows a choice between various types of flux coordinates. Furthermore, the many-valued current potentials implemented in the STARWALL part allow a correct treatment of the m = 0, n = 0 perturbation. In this paper, we outline the theoretical concept and present some numerical results which illustrate the present status of the code and demonstrate its numerous application possibilities.
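
    Schematically, and only as background rather than the code's exact formulation, linearised stability codes of this kind reduce the perturbed plasma-wall system to a generalised eigenvalue problem of the form

        A\,\xi = \lambda\,B\,\xi ,

    where the vector \xi collects the plasma and wall degrees of freedom and the eigenvalue \lambda yields the growth rate and frequency of the mode.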

  19. Prediction accuracy in estimating joint angle trajectories using a video posture coding method for sagittal lifting tasks.

    PubMed

    Chang, Chien-Chi; McGorry, Raymond W; Lin, Jia-Hua; Xu, Xu; Hsiang, Simon M

    2010-08-01

    This study investigated the prediction accuracy of a video posture coding method for lifting joint trajectory estimation. From three filming angles, the coder selected four key snapshots and identified joint angles, and then a prediction program estimated the joint trajectories over the course of a lift. Results revealed a limited range of differences of joint angles (elbow, shoulder, hip, knee, ankle) between the manual coding method and the electromagnetic motion tracking system approach. Lifting range significantly affected estimate accuracy for all joints, and camcorder filming angle had a significant effect on all joints but the hip. Joint trajectory predictions were more accurate for knuckle-to-shoulder lifts than for floor-to-shoulder or floor-to-knuckle lifts, with average root mean square errors (RMSE) of 8.65 degrees, 11.15 degrees and 11.93 degrees, respectively. Accuracy was also greater for the filming angles orthogonal to the participant's sagittal plane (RMSE = 9.97 degrees) as compared to filming angles of 45 degrees (RMSE = 11.01 degrees) or 135 degrees (10.71 degrees). The effects of lifting speed and loading conditions were minimal. To further increase prediction accuracy, improved prediction algorithms and/or better posture matching methods should be investigated. STATEMENT OF RELEVANCE: Observation and classification of postures are common steps in risk assessment of manual materials handling tasks. The ability to accurately predict lifting patterns through video coding can provide ergonomists with greater resolution in characterising or assessing lifting tasks than evaluation based solely on sampling with a single lifting posture event.
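
    The RMSE values quoted above compare coded joint-angle trajectories against the motion-tracking reference; schematically (toy data and hypothetical names, not the study's dataset):

      import numpy as np

      def trajectory_rmse(predicted_deg, measured_deg):
          """Root mean square error (degrees) between predicted and measured joint-angle time series."""
          diff = np.asarray(predicted_deg, dtype=float) - np.asarray(measured_deg, dtype=float)
          return float(np.sqrt(np.mean(diff ** 2)))

      # Toy trajectories for a single joint over one lift.
      t = np.linspace(0.0, 1.0, 100)
      measured = 90.0 * np.sin(np.pi * t)                      # degrees
      predicted = measured + np.random.normal(0.0, 8.0, t.size)
      print(f"RMSE = {trajectory_rmse(predicted, measured):.2f} degrees")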

  20. Ex-vessel neutron dosimetry analysis for a Westinghouse 4-loop XL pressurized water reactor plant using the RadTrack(TM) Code System with the 3D parallel discrete ordinates code RAPTOR-M3G

    SciTech Connect

    Chen, J.; Alpan, F. A.; Fischer, G.A.; Fero, A.H.

    2011-07-01

    Traditional two-dimensional (2D)/one-dimensional (1D) SYNTHESIS methodology has been widely used to calculate fast neutron (>1.0 MeV) fluence exposure to reactor pressure vessel in the belt-line region. However, it is expected that this methodology cannot provide accurate fast neutron fluence calculation at elevations far above or below the active core region. A three-dimensional (3D) parallel discrete ordinates calculation for ex-vessel neutron dosimetry on a Westinghouse 4-Loop XL Pressurized Water Reactor has been done. It shows good agreement between the calculated results and measured results. Furthermore, the results show very different fast neutron flux values at some of the former plate locations and elevations above and below an active core than those calculated by a 2D/1D SYNTHESIS method. This indicates that for certain irregular reactor internal structures, where the fast neutron flux has a very strong local effect, it is required to use a 3D transport method to calculate accurate fast neutron exposure. (authors)
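
    For background (not part of the abstract), discrete ordinates codes of this kind solve the steady-state linear Boltzmann transport equation on a fixed set of quadrature directions; in LaTeX notation,

        \hat{\Omega}\cdot\nabla\psi(\vec{r},\hat{\Omega},E) + \sigma_t(\vec{r},E)\,\psi(\vec{r},\hat{\Omega},E) = \int_0^\infty\!\mathrm{d}E' \int_{4\pi}\!\mathrm{d}\Omega'\,\sigma_s(\vec{r},E'\to E,\hat{\Omega}'\cdot\hat{\Omega})\,\psi(\vec{r},\hat{\Omega}',E') + q(\vec{r},\hat{\Omega},E),

    evaluated at a discrete set of directions \hat{\Omega}_n with associated quadrature weights.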

  1. Uniform Local Binary Pattern Based Texture-Edge Feature for 3D Human Behavior Recognition.

    PubMed

    Ming, Yue; Wang, Guangchao; Fan, Chunxiao

    2015-01-01

    With the rapid development of 3D somatosensory technology, human behavior recognition has become an important research field. Human behavior feature analysis has evolved from traditional 2D features to 3D features. In order to improve the performance of human activity recognition, a human behavior recognition method is proposed that is based on hybrid texture-edge local pattern coding for feature extraction and on the integration of RGB and depth video information. The paper mainly focuses on background subtraction on RGB and depth video sequences of behaviors, extraction and integration of historical images of the behavior outlines, feature extraction, and classification. The new method of 3D human behavior recognition achieves rapid and efficient recognition of behavior videos. A large number of experiments show that the proposed method is faster and has a higher recognition rate. The recognition method is robust to different environmental colors, lighting conditions and other factors. Meanwhile, the mixed texture-edge uniform local binary pattern feature can be used in most 3D behavior recognition tasks.
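
    For reference, a sketch of the basic local binary pattern under the usual 8-neighbour definition, with the "uniform" test (at most two 0/1 transitions around the circle); this illustrates the building block only, not the authors' hybrid texture-edge descriptor:

      import numpy as np

      def lbp_code(patch3x3):
          """8-bit LBP code of the centre pixel of a 3x3 patch (neighbours taken clockwise)."""
          c = patch3x3[1, 1]
          order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
          bits = [1 if patch3x3[r, col] >= c else 0 for r, col in order]
          return sum(b << i for i, b in enumerate(bits))

      def is_uniform(code, bits=8):
          """A pattern is 'uniform' if its circular bit string has at most two 0/1 transitions."""
          s = [(code >> i) & 1 for i in range(bits)]
          transitions = sum(s[i] != s[(i + 1) % bits] for i in range(bits))
          return transitions <= 2

      patch = np.array([[5, 9, 1], [4, 6, 7], [2, 8, 3]])
      code = lbp_code(patch)
      print(code, is_uniform(code))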

  2. AE3D

    SciTech Connect

    Spong, Donald A

    2016-06-20

    AE3D solves for the shear Alfvén eigenmodes and eigenfrequencies in a toroidal magnetic fusion confinement device. The configuration can be either 2D (e.g. tokamak, reversed field pinch) or 3D (e.g. stellarator,