Multirate 3-D subband coding of video.
Taubman, D; Zakhor, A
1994-01-01
We propose a full color video compression strategy, based on 3-D subband coding with camera pan compensation, to generate a single embedded bit stream supporting multiple decoder display formats and a wide, finely gradated range of bit rates. An experimental implementation of our algorithm produces a single bit stream, from which suitable subsets are extracted to be compatible with many decoder frame sizes and frame rates and to satisfy transmission bandwidth constraints ranging from several tens of kilobits per second to several megabits per second. Reconstructed video quality from any of these bit stream subsets is often found to exceed that obtained from an MPEG-1 implementation, operated with equivalent bit rate constraints, in both perceptual quality and mean squared error. In addition, when restricted to 2-D, the algorithm produces some of the best results available in still image compression. PMID:18291953
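The temporal subband decomposition underlying such multirate schemes can be illustrated with a one-level Haar filter bank (a minimal Python sketch, not the paper's actual filters): keeping only the low-pass band yields a half-rate approximation of the sequence, while transmitting both bands permits exact reconstruction, which is the mechanism behind an embedded bit stream supporting multiple frame rates.

```python
def haar_1d(x):
    # One level of the Haar analysis filter bank: pairwise averages
    # (low-pass band) and pairwise differences (high-pass band).
    lo = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    hi = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return lo, hi

def haar_1d_inverse(lo, hi):
    # Synthesis: recombine the two bands to recover the original samples.
    x = []
    for a, d in zip(lo, hi):
        x += [a + d, a - d]
    return x

# A "video" of 4 frames (each frame reduced to a scalar for brevity).
# The low band alone is a half-frame-rate version; adding the high
# band restores the full-rate sequence losslessly.
frames = [10.0, 12.0, 20.0, 16.0]
lo, hi = haar_1d(frames)
print(lo)                        # half-rate approximation: [11.0, 18.0]
print(haar_1d_inverse(lo, hi))   # exact reconstruction: [10.0, 12.0, 20.0, 16.0]
```

In a full 3-D subband coder this split would be applied separably along the temporal and both spatial axes, and the bands entropy-coded in importance order to obtain the finely gradated rate scalability the abstract describes.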
Video coding and transmission standards for 3D television — a survey
NASA Astrophysics Data System (ADS)
Buchowicz, A.
2013-03-01
The emerging 3D television systems require effective techniques for transmission and storage of data representing a 3-D scene. The 3-D scene representations based on multiple video sequences or multiple views plus depth maps are especially important since they can be processed with existing video technologies. The review of the video coding and transmission techniques is presented in this paper.
Object-adaptive depth compensated inter prediction for depth video coding in 3D video system
NASA Astrophysics Data System (ADS)
Kang, Min-Koo; Lee, Jaejoon; Lim, Ilsoon; Ho, Yo-Sung
2011-01-01
Nowadays, the 3D video system using the MVD (multi-view video plus depth) data format is being actively studied. The system has many advantages with respect to virtual view synthesis, such as auto-stereoscopic functionality, but compression of the huge input data remains a problem. Therefore, efficient 3D data compression is extremely important in the system, and the problems of low temporal consistency and viewpoint correlation should be resolved for efficient depth video coding. In this paper, we propose an object-adaptive depth compensated inter prediction method to resolve these problems, in which the object-adaptive mean-depth difference between the current block to be coded and a reference block is compensated during inter prediction. In addition, unique properties of depth video are exploited to reduce the side information required to signal the decoder to conduct the same process. To evaluate the coding performance, we have implemented the proposed method in the MVC (multiview video coding) reference software, JMVC 8.2. Experimental results have demonstrated that our proposed method is especially efficient for depth videos estimated by DERS (depth estimation reference software) discussed in the MPEG 3DV coding group. The coding gain was up to 11.69% bit-saving, and it increased further when evaluated on synthesized views of virtual viewpoints.
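The mean-depth compensation idea can be sketched as follows (a hypothetical toy in Python; the function name and the flat-list block representation are illustrative, not the paper's implementation): removing the DC depth offset between the current and reference blocks before computing the residual leaves only structural differences to be coded.

```python
def mean_depth_compensated_residual(cur_block, ref_block):
    # Compensate the mean-depth (DC) difference between the current depth
    # block and its reference, then form the inter-prediction residual.
    mean_diff = sum(cur_block) / len(cur_block) - sum(ref_block) / len(ref_block)
    compensated_ref = [r + mean_diff for r in ref_block]
    residual = [c - r for c, r in zip(cur_block, compensated_ref)]
    return mean_diff, residual

# A depth block whose object has moved closer by a uniform 10 units:
# after compensation the residual vanishes entirely.
md, res = mean_depth_compensated_residual([100, 102, 104, 106], [90, 92, 94, 96])
print(md, res)  # 10.0 [0.0, 0.0, 0.0, 0.0]
```

Only `mean_diff` (or, as the abstract notes, even less, by exploiting depth-video properties) would need to be signaled so the decoder can apply the same compensation.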
3D video coding: an overview of present and upcoming standards
NASA Astrophysics Data System (ADS)
Merkle, Philipp; Müller, Karsten; Wiegand, Thomas
2010-07-01
An overview of existing and upcoming 3D video coding standards is given. Various different 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.
Standards-based approaches to 3D and multiview video coding
NASA Astrophysics Data System (ADS)
Sullivan, Gary J.
2009-08-01
The extension of video applications to enable 3D perception, which typically is considered to include a stereo viewing experience, is emerging as a mass market phenomenon, as is evident from the recent prevalence of 3D major cinema title releases. For high quality 3D video to become a commonplace user experience beyond limited cinema distribution, adoption of an interoperable coded 3D digital video format will be needed. Stereo-view video can also be studied as a special case of the more general technologies of multiview and "free-viewpoint" video systems. The history of standardization work on this topic is actually richer than people may typically realize. The ISO/IEC Moving Picture Experts Group (MPEG), in particular, has been developing interoperability standards to specify various such coding schemes since the advent of digital video as we know it. More recently, the ITU-T Visual Coding Experts Group (VCEG) has been involved as well in the Joint Video Team (JVT) work on development of 3D features for H.264/14496-10 Advanced Video Coding, including Multiview Video Coding (MVC) extensions. This paper surveys the prior, ongoing, and anticipated future standardization efforts on this subject to provide an overview and historical perspective on feasible approaches to 3D and multiview video coding.
Depth-based coding of MVD data for 3D video extension of H.264/AVC
NASA Astrophysics Data System (ADS)
Rusanovskyy, Dmytro; Hannuksela, Miska M.; Su, Wenyi
2013-06-01
This paper describes a novel approach of using depth information for advanced coding of associated video data in Multiview Video plus Depth (MVD)-based 3D video systems. As a possible implementation of this concept, we describe two coding tools that have been developed for an H.264/AVC based 3D video codec in response to the Moving Picture Experts Group (MPEG) Call for Proposals (CfP). These tools are Depth-based Motion Vector Prediction (DMVP) and Backward View Synthesis Prediction (BVSP). Simulation results conducted under JCT-3V/MPEG 3DV Common Test Conditions show that the tools proposed in this paper reduce the bit rate of coded video data by 15% average delta bit rate reduction, which results in 13% total bit rate savings for the MVD data over state-of-the-art MVC+D coding. Moreover, the concept of depth-based video coding presented in this paper has been further developed by MPEG 3DV and JCT-3V, resulting in even higher compression efficiency, with about 20% total delta bit rate reduction for coded MVD data over the reference MVC+D coding. Considering these significant gains, the coding approach proposed in this paper can be beneficial for the development of new 3D video coding standards.
3D high-efficiency video coding for multi-view video and depth data.
Muller, Karsten; Schwarz, Heiko; Marpe, Detlev; Bartnik, Christian; Bosse, Sebastian; Brust, Heribert; Hinz, Tobias; Lakshman, Haricharan; Merkle, Philipp; Rhee, Franz Hunn; Tech, Gerhard; Winken, Martin; Wiegand, Thomas
2013-09-01
This paper describes an extension of the high efficiency video coding (HEVC) standard for coding of multi-view video and depth data. In addition to the known concept of disparity-compensated prediction, inter-view motion parameter prediction and inter-view residual prediction for coding of the dependent video views are developed and integrated. Furthermore, for depth coding, new intra coding modes, a modified motion compensation and motion vector coding, as well as the concept of motion parameter inheritance are part of the HEVC extension. A novel encoder control uses view synthesis optimization, which guarantees that high quality intermediate views can be generated based on the decoded data. The bitstream format supports the extraction of partial bitstreams, so that conventional 2D video, stereo video, and the full multi-view video plus depth format can be decoded from a single bitstream. Objective and subjective results are presented, demonstrating that the proposed approach provides 50% bit rate savings in comparison with HEVC simulcast and 20% in comparison with a straightforward multi-view extension of HEVC without the newly developed coding tools. PMID:23715605
Impact of packet losses in scalable 3D holoscopic video coding
NASA Astrophysics Data System (ADS)
Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.
2014-05-01
Holoscopic imaging has become a prospective glassless 3D technology to provide more natural 3D viewing experiences to the end user. Additionally, holoscopic systems also allow new post-production degrees of freedom, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments on the user's perceived quality. Therefore, it is essential to deeply understand the impact of packet losses on decoded video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a previously proposed three-layer display scalable 3D holoscopic video coding architecture, where each layer represents a different level of display scalability (i.e., L0 - 2D, L1 - stereo or multiview, and L2 - full 3D holoscopic). For this, a simple error concealment algorithm is used, which makes use of inter-layer redundancy between multiview and 3D holoscopic content and the inherent correlation of the 3D holoscopic content to estimate lost data. Furthermore, a study of the influence of the 2D view generation parameters used in lower layers on the performance of the error concealment algorithm is also presented.
The future of 3D and video coding in mobile and the internet
NASA Astrophysics Data System (ADS)
Bivolarski, Lazar
2013-09-01
The success of the Internet has already changed our social and economic world and continues to revolutionize information exchange. The exponential increase in the amount and types of data exchanged on the Internet represents a significant challenge for the design of future architectures and solutions. This paper reviews the current status of and trends in the design of solutions and research activities for the future Internet, from the point of view of managing the growth of bandwidth requirements and the complexity of the multimedia being created and shared. It outlines the challenges facing video coding and approaches to the design of standardized media formats and protocols, considering the expected convergence of multimedia formats and exchange interfaces. The rapid growth of connected mobile devices adds to the current and future challenges, in combination with the expected near-term arrival of a multitude of connected devices. The new Internet technologies connecting the Internet of Things with wireless visual sensor networks and 3D virtual worlds require conceptually new approaches to media content handling, from acquisition to presentation, in the 3D Media Internet. Accounting for the properties of the entire transmission system and enabling real-time adaptation to context and content throughout the media processing path will be paramount in enabling the new media architectures as well as the new applications and services. The common video coding formats will need to be conceptually redesigned to allow for the implementation of the necessary 3D Media Internet features.
3-D model-based frame interpolation for distributed video coding of static scenes.
Maitre, Matthieu; Guillemot, Christine; Morin, Luce
2007-05-01
This paper addresses the problem of side information extraction for distributed coding of videos captured by a camera moving in a 3-D static environment. Examples of targeted applications are augmented reality, remote-controlled robots operating in hazardous environments, or remote exploration by drones. It explores the benefits of the structure-from-motion paradigm for distributed coding of this type of video content. Two interpolation methods constrained by the scene geometry, based either on block matching along epipolar lines or on 3-D mesh fitting, are first developed. These techniques are based on a robust algorithm for sub-pel matching of feature points, which leads to semi-dense correspondences between key frames. However, their rate-distortion (RD) performances are limited by misalignments between the side information and the actual Wyner-Ziv (WZ) frames due to the assumption of linear motion between key frames. To cope with this problem, two feature point tracking techniques are introduced, which recover the camera parameters of the WZ frames. A first technique, in which the frames remain encoded separately, performs tracking at the decoder and leads to significant RD performance gains. A second technique further improves the RD performances by allowing a limited tracking at the encoder. As an additional benefit, statistics on tracks allow the encoder to adapt the key frame frequency to the video motion content. PMID:17491456
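The epipolar constraint that the paper exploits can be caricatured with a toy 1-D matcher in Python (illustrative only; names, the SAD cost, and the 1-D representation are assumptions): instead of searching a 2-D window for the corresponding block, the search runs only along candidate positions on a single line.

```python
def sad(a, b):
    # Sum of absolute differences: a standard block-matching cost.
    return sum(abs(x - y) for x, y in zip(a, b))

def epipolar_block_match(block, line, block_len):
    # The epipolar constraint reduces the 2-D correspondence search to a
    # 1-D search along one line of candidate positions.
    best_pos, best_cost = 0, float("inf")
    for pos in range(len(line) - block_len + 1):
        cost = sad(block, line[pos:pos + block_len])
        if cost < best_cost:
            best_pos, best_cost = pos, cost
    return best_pos, best_cost

# The block [1, 2, 3] is found exactly at offset 1 along the epipolar line.
print(epipolar_block_match([1, 2, 3], [5, 1, 2, 3, 9, 9], 3))  # (1, 0)
```

A real implementation would work on 2-D pixel blocks, derive each epipolar line from the estimated camera geometry, and refine matches to sub-pel precision as the abstract describes.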
Depth-based representations: Which coding format for 3D video broadcast applications?
NASA Astrophysics Data System (ADS)
Kerbiriou, Paul; Boisson, Guillaume; Sidibé, Korian; Huynh-Thu, Quan
2011-03-01
3D Video (3DV) delivery standardization is currently ongoing in MPEG. It is now time to choose a 3DV data representation format. What is at stake is the final quality for end-users, i.e., the visual quality of synthesized views. We focus on two major rival depth-based formats, namely Multiview Video plus Depth (MVD) and Layered Depth Video (LDV). MVD can be considered the basic depth-based 3DV format, generated by disparity estimation from multiview sequences. LDV is more sophisticated, compacting multiview data into color and depth occlusion layers. We compare final view quality using MVD2 and LDV (both containing two color channels plus two depth components) coded with MVC at various compression ratios. Depending on the format, the appropriate synthesis process is performed to generate the final stereoscopic pairs. Comparisons are provided in terms of SSIM and PSNR with respect to original views and to synthesized references (obtained without compression). Ultimately, LDV significantly outperforms MVD when using state-of-the-art reference synthesis algorithms. Managing occlusions before encoding is advantageous compared with handling redundant signals at the decoder side. Besides, we observe that depth quantization does not induce much loss in final view quality until a significant degradation level is reached. Improvements in disparity estimation and view synthesis algorithms are therefore still expected during the remaining standardization steps.
NASA Astrophysics Data System (ADS)
Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.
2013-09-01
Holoscopic imaging, also known as integral imaging, has recently been attracting the attention of the research community as a promising glassless 3D technology, due to its ability to create a more realistic depth illusion than current stereoscopic or multiview solutions. However, in order to gradually introduce this technology into the consumer market and to efficiently deliver 3D holoscopic content to end-users, backward compatibility with legacy displays is essential. Consequently, to enable 3D holoscopic content to be delivered and presented on legacy displays, a display scalable 3D holoscopic coding approach is required. Hence, this paper presents a display scalable architecture for 3D holoscopic video coding with a three-layer approach, where each layer represents a different level of display scalability: Layer 0 - a single 2D view; Layer 1 - 3D stereo or multiview; and Layer 2 - the full 3D holoscopic content. In this context, a prediction method is proposed, which combines inter-layer prediction, aiming to exploit the existing redundancy between the multiview and the 3D holoscopic layers, with self-similarity compensated prediction (previously proposed by the authors for non-scalable 3D holoscopic video coding), aiming to exploit the spatial redundancy inherent to the 3D holoscopic enhancement layer. Experimental results show that the proposed combined prediction can significantly improve the rate-distortion performance of scalable 3D holoscopic video coding with respect to the authors' previously proposed solutions, where only inter-layer or only self-similarity prediction is used.
Topology dictionary for 3D video understanding.
Tung, Tony; Matsuyama, Takashi
2012-08-01
This paper presents a novel approach that achieves 3D video understanding. 3D video consists of a stream of 3D models of subjects in motion. The acquisition of long sequences requires large storage space (2 GB for 1 min). Moreover, it is tedious to browse data sets and extract meaningful information. We propose the topology dictionary to encode and describe 3D video content. The model consists of a topology-based shape descriptor dictionary which can be generated from either extracted patterns or training sequences. The model relies on 1) topology description and classification using Reeb graphs, and 2) a Markov motion graph to represent topology change states. We show that the use of Reeb graphs as the high-level topology descriptor is relevant. It allows the dictionary to automatically model complex sequences, whereas other strategies would require prior knowledge on the shape and topology of the captured subjects. Our approach serves to encode 3D video sequences, and can be applied for content-based description and summarization of 3D video sequences. Furthermore, topology class labeling during a learning process enables the system to perform content-based event recognition. Experiments were carried out on various 3D videos. We showcase an application for 3D video progressive summarization using the topology dictionary. PMID:22745004
GPU-based 3D lower tree wavelet video encoder
NASA Astrophysics Data System (ADS)
Galiano, Vicente; López-Granado, Otoniel; Malumbres, Manuel P.; Drummond, Leroy Anthony; Migallón, Hector
2013-12-01
The 3D-DWT is a mathematical tool of increasing importance in applications that require efficient processing of huge amounts of volumetric information. Applications such as professional video editing, video surveillance, multi-spectral satellite imaging, and HQ video delivery benefit from 3D-DWT encoders that can reconstruct a frame as fast as possible. In this article, we introduce a fast GPU-based encoder which uses the 3D-DWT transform and lower trees. We also present an exhaustive analysis of the use of GPU memory. Our proposal shows a good trade-off between rate-distortion performance, coding delay (as fast as MPEG-2 for high definition), and memory requirements (up to 6 times less memory than x264).
The Emerging MVC Standard for 3D Video Services
NASA Astrophysics Data System (ADS)
Chen, Ying; Wang, Ye-Kui; Ugur, Kemal; Hannuksela, Miska M.; Lainema, Jani; Gabbouj, Moncef
2008-12-01
Multiview video has recently gained wide interest. The huge amount of data that must be processed by multiview applications is a heavy burden for both transmission and decoding. The Joint Video Team has recently devoted part of its effort to extending the widely deployed H.264/AVC standard to handle multiview video coding (MVC). The MVC extension of H.264/AVC includes a number of new techniques for improved coding efficiency, reduced decoding complexity, and new functionalities for multiview operations. MVC takes advantage of some of the interfaces and transport mechanisms introduced for the scalable video coding (SVC) extension of H.264/AVC, but the system level integration of MVC is conceptually more challenging, as the decoder output may contain more than one view and can consist of any combination of the views with any temporal level. The generation of all the output views also requires careful consideration and control of the available decoder resources. In this paper, multiview applications and solutions to support generic multiview as well as 3D services are introduced. The proposed solutions, which have been adopted into the draft MVC specification, cover a wide range of requirements for 3D video related to the interface, transport of MVC bitstreams, and MVC decoder resource management. The features that have been introduced in MVC to support these solutions include marking of reference pictures, support for efficient view switching, structuring of the bitstream, signalling of view scalability supplemental enhancement information (SEI), and parallel decoding SEI.
A new video codec based on 3D-DTCWT and vector SPIHT
NASA Astrophysics Data System (ADS)
Xu, Ruiping; Li, Huifang; Xie, Sunyun
2011-10-01
In this paper, a new video coding system combining the 3-D complex dual-tree discrete wavelet transform with vector SPIHT and arithmetic coding is proposed and tested on standard video sequences. First, the 3-D DTCWT of each color component is performed on the video sequences. Then the wavelet coefficients are grouped to form vectors, and successive refinement vector quantization techniques are used to quantize the groups. Finally, experimental results are given, showing that the proposed video codec provides better performance than the 3D-DTCWT and 3D-SPIHT codecs, and that the superior performance of the proposed scheme is achieved without performing motion compensation.
TACO3D. 3-D Finite Element Heat Transfer Code
Mason, W.E.
1992-03-04
TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.
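TACO3D's restriction to implicit time integration for transient solutions can be motivated with a scalar backward-Euler example (a minimal sketch of the scheme, not TACO3D's actual finite-element solver): the implicit update remains stable and monotone even for time steps far beyond an explicit scheme's stability limit.

```python
def implicit_euler_cooling(T0, T_env, k, dt, steps):
    # Backward-Euler update for Newton cooling, dT/dt = -k (T - T_env):
    #   T_{n+1} = (T_n + dt * k * T_env) / (1 + dt * k)
    # Unconditionally stable: valid for any dt > 0, which is the usual
    # reason transient heat-conduction codes favour implicit integration.
    T = T0
    for _ in range(steps):
        T = (T + dt * k * T_env) / (1 + dt * k)
    return T

# One very large step (dt = 10 with k = 1) still moves the temperature
# smoothly toward the 20-degree environment instead of oscillating.
print(implicit_euler_cooling(100.0, 20.0, 1.0, 10.0, 1))   # 300/11 ≈ 27.27
print(implicit_euler_cooling(100.0, 20.0, 1.0, 10.0, 50))  # ≈ 20.0
```

In the finite-element setting the same idea yields a linear system per time step instead of a scalar division, but the stability property carries over.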
NASA Astrophysics Data System (ADS)
Hsu, Kung-Chuan; Brun, Todd
Transversal circuits are important components of fault-tolerant quantum computation. Several classes of quantum error-correcting codes are known to have transversal implementations of any logical Clifford operation. However, to achieve universal quantum computation, it would be helpful to have high-performance error-correcting codes that have a transversal implementation of some logical non-Clifford operation. The 3-D color codes are a class of topological codes that permit transversal implementation of the logical π/8 gate. The decoding problem of a 3-D color code can be understood as a graph-matching problem on a three-dimensional lattice. Whether this class of codes will be useful in terms of performance is still an open question. We investigate the decoding problem of 3-D color codes and analyze the performance of some possible decoders.
NASA Astrophysics Data System (ADS)
Rodríguez-Sánchez, Rafael; Martínez, José Luis; Cock, Jan De; Fernández-Escribano, Gerardo; Pieters, Bart; Sánchez, José L.; Claver, José M.; de Walle, Rik Van
2013-12-01
The H.264/AVC video coding standard introduces some improved tools in order to increase compression efficiency. Moreover, the multi-view extension of H.264/AVC, called H.264/MVC, adopts many of them. Among the new features, variable block-size motion estimation is one which contributes to high coding efficiency. Furthermore, it defines a different prediction structure that includes hierarchical bidirectional pictures, outperforming traditional Group of Pictures patterns in both scenarios: single-view and multi-view. However, these video coding techniques have high computational complexity. Several techniques have been proposed in the literature over the last few years which are aimed at accelerating the inter prediction process, but there are no works focusing on bidirectional prediction or hierarchical prediction. In this article, with the emergence of many-core processors or accelerators, a step forward is taken towards an implementation of an H.264/AVC and H.264/MVC inter prediction algorithm on a graphics processing unit. The results show a negligible rate distortion drop with a time reduction of up to 98% for the complete H.264/AVC encoder.
Compact 3D flash lidar video cameras and applications
NASA Astrophysics Data System (ADS)
Stettner, Roger
2010-04-01
The theory and operation of Advanced Scientific Concepts, Inc.'s (ASC) latest compact 3D Flash LIDAR Video Cameras (3D FLVCs) and a growing number of technical problems and solutions are discussed. The solutions range from space shuttle docking, planetary entry, descent and landing, surveillance, and autonomous and manned ground vehicle navigation to 3D imaging through particle obscurants.
FARGO3D: Hydrodynamics/magnetohydrodynamics code
NASA Astrophysics Data System (ADS)
Benítez Llambay, Pablo; Masset, Frédéric
2015-09-01
A successor of FARGO (ascl:1102.017), FARGO3D is a versatile HD/MHD code that runs on clusters of CPUs or GPUs, with special emphasis on protoplanetary disks. FARGO3D offers Cartesian, cylindrical or spherical geometry; 1-, 2- or 3-dimensional calculations; and orbital advection (aka FARGO) for HD and MHD calculations. As in FARGO, a simple Runge-Kutta N-body solver may be used to describe the orbital evolution of embedded point-like objects. There is no need to know CUDA; users can develop new functions in C and have them translated to CUDA automatically to run on GPUs.
NASA Astrophysics Data System (ADS)
Ding, Cong; Sang, Xinzhu; Zhao, Tianqi; Yan, Binbin; Leng, Junmin; Yuan, Jinhui; Zhang, Ying
2012-11-01
Multiview video coding (MVC) is essential for applications of auto-stereoscopic three-dimensional displays. However, the computational complexity of MVC encoders is extremely high, so fast algorithms are very desirable for practical MVC applications. Based on joint early termination, the selection of inter-view prediction, and the optimization of the Inter8×8 mode decision process, a fast macroblock (MB) mode selection algorithm is presented. Compared with the full mode decision in MVC, experimental results show that the proposed algorithm reduces encoding time by 78.13% on average and by up to 90.21%, with only a slight increase in bit rate and loss in PSNR.
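The early-termination principle behind such fast mode decision can be sketched in Python (hypothetical mode names, costs, and threshold; not the paper's actual algorithm): candidate modes are evaluated in a fixed priority order, and the search stops as soon as a mode's rate-distortion cost is already below a threshold, skipping the remaining, more expensive candidates.

```python
def fast_mode_decision(mode_costs, threshold):
    # mode_costs: list of (mode_name, rd_cost) in evaluation priority order.
    # Returns the chosen mode and how many modes were actually evaluated.
    best_mode, best_cost, evaluated = None, float("inf"), 0
    for mode, cost in mode_costs:
        evaluated += 1
        if cost < best_cost:
            best_mode, best_cost = mode, cost
        if best_cost <= threshold:
            break  # early termination: good enough, skip remaining modes
    return best_mode, evaluated

# Inter16x16 already meets the threshold, so Inter8x8 is never evaluated.
modes = [("SKIP", 50), ("Inter16x16", 30), ("Inter8x8", 10)]
print(fast_mode_decision(modes, 40))  # ('Inter16x16', 2)
```

The time savings come from the skipped evaluations; the cost is occasionally missing a slightly better mode, which matches the small bit rate and PSNR penalty the abstract reports.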
View synthesis techniques for 3D video
NASA Astrophysics Data System (ADS)
Tian, Dong; Lai, Po-Lin; Lopez, Patrick; Gomila, Cristina
2009-08-01
To facilitate new video applications such as three-dimensional video (3DV) and free-viewpoint video (FVV), the multiview plus depth format (MVD), which consists of both video views and the corresponding per-pixel depth images, is being investigated. Virtual views can be generated using depth image based rendering (DIBR), which takes video and the corresponding depth images as input. This paper discusses view synthesis techniques based on DIBR, which include forward warping, blending, and hole filling. In particular, we emphasize the techniques contributed to the MPEG view synthesis reference software (VSRS). Unlike the case in the field of computer graphics, ground truth depth images for natural content are very difficult to obtain. The estimated depth images used for view synthesis typically contain different types of noise. Some robust synthesis modes to combat depth errors are also presented in this paper. In addition, we briefly discuss how to use synthesis techniques with minor modifications to generate the occlusion layer information for layered depth video (LDV) data, which is another potential format for 3DV applications.
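The forward-warping step of DIBR can be illustrated with a 1-D toy in Python (illustrative only; real DIBR derives per-pixel disparity from depth and camera parameters): each pixel shifts horizontally by a depth-proportional disparity, a z-buffer resolves collisions in favor of the nearer pixel, and unfilled positions remain as holes for the subsequent hole-filling stage.

```python
def forward_warp_row(colors, depths, baseline_scale):
    # Toy 1-D forward warp for one image row. Larger depth value = nearer
    # object = larger disparity shift. None marks a disocclusion hole.
    width = len(colors)
    out = [None] * width          # warped row, holes as None
    zbuf = [-1.0] * width         # z-buffer of depths already written
    for x in range(width):
        d = depths[x]
        tx = x + int(round(baseline_scale * d))
        if 0 <= tx < width and d > zbuf[tx]:
            out[tx], zbuf[tx] = colors[x], d
    return out

# Pixel 'c' (near, depth 1) shifts right by one and wins the z-test over
# 'd' (far, depth 0); its old position becomes a hole.
print(forward_warp_row(['a', 'b', 'c', 'd'], [0, 0, 1, 0], 1))
# ['a', 'b', None, 'c']
```

Blending from a second reference view and inpainting-style hole filling, as described in the abstract, would then fill the `None` positions.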
3D holoscopic video imaging system
NASA Astrophysics Data System (ADS)
Steurer, Johannes H.; Pesch, Matthias; Hahne, Christopher
2012-03-01
For many years, integral imaging has been discussed as a technique to overcome the limitations of standard still photography imaging systems, in which a three-dimensional scene is irrevocably projected onto two dimensions. With the success of 3D stereoscopic movies, a huge interest in capturing three-dimensional motion picture scenes has been generated. In this paper, we present a test bench integral imaging camera system aiming to tailor the methods of light field imaging towards capturing integral 3D motion picture content. We estimate the hardware requirements needed to generate high quality 3D holoscopic images and show a prototype camera setup that allows us to study these requirements using existing technology. The necessary steps involved in the calibration of the system, as well as the technique of generating human readable holoscopic images from the recorded data, are discussed.
Efficient and high speed depth-based 2D to 3D video conversion
NASA Astrophysics Data System (ADS)
Somaiya, Amisha Himanshu; Kulkarni, Ramesh K.
2013-09-01
Stereoscopic video is the new era in video viewing and has wide applications such as medicine, satellite imaging, and 3D television. Such stereo content can be generated directly using S3D cameras. However, this approach requires an expensive setup, and hence converting monoscopic content to S3D becomes a viable approach. This paper proposes a depth-based algorithm for monoscopic to stereoscopic video conversion that uses the y-axis coordinates of the bottom-most pixels of foreground objects. The algorithm can be used for arbitrary videos without prior database training. It does not face the limitations of single monocular depth cues, nor does it combine depth cues, thus consuming less processing time without affecting the quality of the 3D video output. The algorithm, though not real-time, is faster than other available 2D to 3D video conversion techniques by an average ratio of 1:8 to 1:20, essentially qualifying as high-speed. It is an automatic conversion scheme and hence produces the 3D video output directly, without human intervention; together with the features above, this makes it an ideal choice for efficient monoscopic to stereoscopic video conversion.
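The bottom-pixel heuristic described above can be sketched in Python (a hypothetical simplification; the mask format, function name, and normalization are assumptions, not the paper's implementation): an object whose lowest pixel sits further down the frame is assumed to rest on nearer ground, so it receives a larger (nearer) depth value.

```python
def depth_from_bottom_y(object_masks, frame_height):
    # object_masks: {name: set of (y, x) pixel coordinates}, y grows downward.
    # Ground-contact heuristic: depth is proportional to the y coordinate of
    # the object's bottom-most pixel, normalized to [0, 1] (0 = far, 1 = near).
    depths = {}
    for name, pixels in object_masks.items():
        bottom_y = max(y for (y, x) in pixels)
        depths[name] = bottom_y / (frame_height - 1)
    return depths

# The person's bottom pixel is on the last row, so it is assigned the
# nearest depth; the tree, higher in the frame, is farther away.
masks = {"tree": {(2, 1), (3, 1)}, "person": {(5, 4), (7, 4)}}
print(depth_from_bottom_y(masks, 8))
```

The resulting per-object depth map would then drive the disparity shift used to synthesize the second stereo view.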
Examination of 3D visual attention in stereoscopic video content
NASA Astrophysics Data System (ADS)
Huynh-Thu, Quan; Schiatti, Luca
2011-03-01
Recent advances in video technology and digital cinema have made it possible to produce entertaining 3D stereoscopic content that can be viewed for an extended duration without necessarily causing extreme fatigue, visual strain, and discomfort. Viewers naturally focus their attention on specific areas of interest in their visual field. Visual attention is an important aspect of perception, and understanding it is therefore important for the creation of 3D stereoscopic content. Most studies of visual attention have focused on still images or 2D video; only very few have investigated eye-movement patterns in 3D stereoscopic moving sequences and how these may differ from viewing 2D video content. In this paper, we present and discuss the results of a subjective experiment in which we used an eye-tracking apparatus to record observers' gaze patterns. Participants were asked to watch the same set of video clips in a free-viewing task; each clip was shown in a 3D stereoscopic version and a 2D version. Our results indicate that areas of interest are not necessarily wider in 3D. We found a very strong content dependency in the difference in density and location of fixations between 2D and 3D stereoscopic content. However, we found that saccades were overall faster and fixation durations overall shorter when observers viewed the 3D stereoscopic version.
3D Multigroup Sn Neutron Transport Code
McGee, John; Wareing, Todd; Pautz, Shawn
2001-02-14
ATTILA is a 3D multigroup transport code with arbitrary-order anisotropic scatter. The transport equation is solved in first-order form using tri-linear discontinuous spatial differencing on an arbitrary tetrahedral mesh. The overall solution technique is source iteration with DSA acceleration of the scattering source. Anisotropic boundary and internal sources may be entered in the form of spherical harmonics moments. Alpha and k eigenvalue problems are allowed, as well as fixed source problems. Forward and adjoint solutions are available. Reflective, vacuum, and source boundary conditions are available. ATTILA can perform charged particle transport calculations using continuous slowing down (CSD) terms. ATTILA can also be used to perform infra-red steady-state calculations for radiative transfer purposes.
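The outer iteration that ATTILA applies to the discretized transport operator can be illustrated on the simplest possible case. The sketch below runs unaccelerated source iteration on a one-group infinite-medium balance (ATTILA additionally accelerates this iteration with DSA); the function name and scalar setting are illustrative assumptions:

```python
def source_iteration(sigma_t, sigma_s, q, tol=1e-10):
    """Source iteration for the one-group infinite-medium balance
    sigma_t * phi = sigma_s * phi + q. Each pass 'transports' the
    current scattering source; converges to q / (sigma_t - sigma_s)."""
    phi = 0.0
    while True:
        phi_new = (sigma_s * phi + q) / sigma_t
        if abs(phi_new - phi) < tol:
            return phi_new
        phi = phi_new
```

The iteration converges geometrically with ratio sigma_s/sigma_t, which is exactly why DSA-style acceleration matters for scattering-dominated problems.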
Stereoscopic 3D video games and their effects on engagement
NASA Astrophysics Data System (ADS)
Hogue, Andrew; Kapralos, Bill; Zerebecki, Chris; Tawadrous, Mina; Stanfield, Brodie; Hogue, Urszula
2012-03-01
With television manufacturers developing low-cost stereoscopic 3D displays, a large number of consumers will undoubtedly have access to 3D-capable televisions at home. The availability of 3D technology places the onus on content creators to develop interesting and engaging content. While the technology of stereoscopic displays and content generation is well understood, there are many questions yet to be answered surrounding its effects on the viewer. The effects of stereoscopic display on passive viewers of film are known; video games, however, are fundamentally different, since the viewer/player is actively (rather than passively) engaged with the content. Questions of how stereoscopic viewing affects interaction mechanics have previously been studied in the context of player performance, but very few studies have attempted to quantify the player experience to determine whether stereoscopic 3D has a positive or negative influence on overall engagement. In this paper we present a preliminary study of the effects of stereoscopic 3D on player engagement in video games. Participants played a video game in two conditions, traditional 2D and stereoscopic 3D, and their engagement was quantified using a previously validated self-reporting tool. The results suggest that S3D has a positive effect on immersion, presence, flow, and absorption.
3D video sequence reconstruction algorithms implemented on a DSP
NASA Astrophysics Data System (ADS)
Ponomaryov, V. I.; Ramos-Diaz, E.
2011-03-01
A novel approach to 3D image and video reconstruction is proposed and implemented. It is based on wavelet atomic functions (WAF), which have demonstrated better approximation properties than classical wavelets in various processing problems. Disparity maps are formed using WAF and then employed to present 3D visualizations as color anaglyphs; additionally, Pth-law compression is applied to improve disparity map quality. Other approaches, such as optical flow and a stereo matching algorithm, are implemented as comparative baselines. Numerous simulation results justify the efficiency of the novel framework. An implementation on the Texas Instruments TMS320DM642 DSP demonstrates that near-real-time processing is possible during 3D reconstruction of images and video sequences.
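The anaglyph-rendering and power-law steps mentioned above are standard enough to sketch: assuming a rectified stereo pair, the red channel of the left view is combined with the green/blue channels of the right, and a Pth-law (power-law) mapping reshapes the dynamic range of a disparity map. Function names are assumptions, not the paper's code:

```python
import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    """Red-cyan color anaglyph: red channel from the left view,
    green and blue channels from the right view."""
    anaglyph = right_rgb.copy()
    anaglyph[..., 0] = left_rgb[..., 0]
    return anaglyph

def pth_law(disparity, p=0.5):
    """Pth-law compression of a nonnegative disparity map: normalize
    to [0, 1] and raise to the power p < 1, expanding small
    disparities and compressing large ones."""
    d = disparity.astype(float)
    return (d / d.max()) ** p
```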
3D Elastic Seismic Wave Propagation Code
1998-09-23
E3D is capable of simulating seismic wave propagation in a 3D heterogeneous earth. Seismic waves are initiated by earthquake, explosive, and/or other sources. These waves propagate through a 3D geologic model, and are simulated as synthetic seismograms or other graphical output.
Virtual view adaptation for 3D multiview video streaming
NASA Astrophysics Data System (ADS)
Petrovic, Goran; Do, Luat; Zinger, Sveta; de With, Peter H. N.
2010-02-01
Virtual views in 3D-TV and multi-view video systems are reconstructed images of the scene, generated synthetically from the original views. In this paper, we analyze the performance of streaming virtual views over IP networks with limited and time-varying available bandwidth. We show that the average video quality perceived by the user can be improved with an adaptive streaming strategy that aims at maximizing average video quality. Our adaptive 3D multi-view streaming provides a quality improvement of 2 dB on average over non-adaptive streaming. We further demonstrate that an optimized virtual-view adaptation algorithm needs to be view-dependent, which yields an additional improvement of up to 0.7 dB. We analyze our adaptation strategies under dynamically varying available network bandwidth.
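An adaptation strategy of this kind reduces, at each bandwidth update, to choosing one operating point per view so that the total rate fits the budget while average quality is maximized. The greedy sketch below (the function name and the per-view (rate, quality) tables are assumptions, not the paper's algorithm) upgrades whichever view offers the best quality gain per extra bit:

```python
def allocate_rates(points, budget):
    """Greedy rate adaptation. points[v] is an ascending list of
    (rate, quality) operating points for view v. Start every view at
    its cheapest point, then repeatedly apply the upgrade with the
    largest quality-per-bit gain that still fits the budget. Returns
    the chosen operating-point index for each view."""
    choice = [0] * len(points)
    spent = sum(p[0][0] for p in points)
    while True:
        best_view, best_gain = None, 0.0
        for v, pts in enumerate(points):
            i = choice[v]
            if i + 1 < len(pts):
                dr = pts[i + 1][0] - pts[i][0]   # extra rate of upgrade
                dq = pts[i + 1][1] - pts[i][1]   # quality gain of upgrade
                if spent + dr <= budget and dq / dr > best_gain:
                    best_view, best_gain = v, dq / dr
        if best_view is None:
            return choice
        i = choice[best_view]
        spent += points[best_view][i + 1][0] - points[best_view][i][0]
        choice[best_view] += 1
```

View dependency enters through the tables themselves: views that contribute more to synthesized-view quality get steeper quality-rate curves and are upgraded first.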
Real-time 3D video compression for tele-immersive environments
NASA Astrophysics Data System (ADS)
Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William
2006-01-01
Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly, because 3D video data contain depth as well as color information. Our goal is to explore a different region of the 3D compression design space, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. The first uses color reduction to compress the color information, which is then compressed along with the depth information using zlib. The second uses motion JPEG to compress the color information, and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that (1) the compressed data preserve enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient for real-time communication (avg. ~ 13 ms per 3D video frame).
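The first scheme (color reduction followed by zlib over color plus depth) is simple enough to sketch. The quantization depth, packing order, and function names below are assumptions; the point is that discarding low-order color bits makes the deflate stage much more effective:

```python
import zlib
import numpy as np

def compress_frame(color, depth, color_bits=4):
    """Scheme 1, sketched: keep only the top `color_bits` bits of each
    color sample (color reduction), pack the reduced color plane with
    the depth plane, and deflate the result with zlib."""
    reduced = (color >> (8 - color_bits)).astype(np.uint8)
    payload = np.concatenate([reduced.ravel(), depth.ravel()])
    return zlib.compress(payload.tobytes(), 6)

def compression_ratio(color, depth, blob):
    """Raw size of the color + depth planes over compressed size."""
    return (color.nbytes + depth.nbytes) / len(blob)
```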
[Evaluation of Motion Sickness Induced by 3D Video Clips].
Matsuura, Yasuyuki; Takada, Hiroki
2016-01-01
The use of stereoscopic images has been spreading rapidly, and stereoscopic movies are nothing new to audiences. Stereoscopic systems date back to about 280 B.C., when Euclid first recognized the concept of depth perception by humans. Despite the increase in the production of three-dimensional (3D) display products and many studies on stereoscopic vision, the effect of stereoscopic vision on the human body is insufficiently understood. Symptoms such as eye fatigue and 3D sickness have been raised as concerns when viewing 3D films for a prolonged period of time; it is therefore important to consider the safety of viewing virtual 3D content as a contribution to society. It is generally explained to the public that accommodation and convergence are mismatched during stereoscopic vision and that this is the main reason for visual fatigue and visually induced motion sickness (VIMS) during 3D viewing. We have devised a method to simultaneously measure lens accommodation and convergence, and we used this device to characterize 3D vision. Fixation distance was compared between accommodation and convergence during repeated viewing of 3D films, and the time courses and distributions of these fixation distances were compared between subjects viewing 2D and 3D video clips. The results indicate that after 90 s of continuously viewing 3D images, the accommodative power no longer corresponds to the distance of convergence. Remarks on methods to measure the severity of motion sickness induced by viewing 3D films are also given. From an epidemiological viewpoint, such knowledge is useful for reducing and/or preventing VIMS; accumulating empirical data on motion sickness may contribute to the development of the relevant fields of science and technology. PMID:26832611
Visual fatigue evaluation based on depth in 3D videos
NASA Astrophysics Data System (ADS)
Wang, Feng-jiao; Sang, Xin-zhu; Liu, Yangdong; Shi, Guo-zhong; Xu, Da-xiong
2013-08-01
In recent years, 3D technology has become an emerging industry, but visual fatigue continues to impede its development. In this paper we propose several factors affecting human depth perception as new quality metrics, drawn from three aspects of 3D video: spatial characteristics, temporal characteristics, and scene-movement characteristics. These factors play important roles in the viewer's visual perception: if many objects move with a certain velocity and the scene changes quickly, viewers feel uncomfortable. We propose a new algorithm to calculate the weight values of these factors and analyze their effect on visual fatigue. The MSE (mean square error) of different blocks is computed within and between frames of the stereoscopic video. Each depth frame is divided into a number of blocks that overlap and share pixels (by half a block) in the horizontal and vertical directions, which avoids ignoring edge information of objects in the image. The distribution of these block errors is then summarized by its kurtosis over the regions the human eye mainly gazes at, and weight values are obtained from the normalized kurtosis. Applied to an individual depth frame, the method yields the spatial variation; applied between the current and previous frames, it yields the temporal and scene-movement variations. The three factors are combined linearly to give an objective assessment value for the 3D video directly, with the coefficients estimated by linear regression. Experimental results show that the proposed method exhibits high correlation with subjective quality assessment results.
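The block-MSE and kurtosis machinery above can be sketched directly. The half-block overlap and the kurtosis-based weight follow the description; the logistic squashing of the kurtosis is an assumption standing in for the paper's unspecified normalization:

```python
import numpy as np

def block_mses(prev, cur, block=8):
    """MSE of half-overlapping blocks between two depth frames; blocks
    slide by block // 2 so neighbours share half their pixels, which
    avoids ignoring object edges at block boundaries."""
    step = block // 2
    h, w = cur.shape
    vals = []
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            d = cur[y:y + block, x:x + block].astype(float) \
                - prev[y:y + block, x:x + block]
            vals.append((d * d).mean())
    return np.array(vals)

def kurtosis_weight(vals):
    """Weight from the excess kurtosis of the block-MSE distribution,
    squashed to [0, 1] with a logistic function (an assumption)."""
    s = vals.std()
    if s == 0:
        return 0.0
    k = ((vals - vals.mean()) ** 4).mean() / s ** 4 - 3.0
    return float(1.0 / (1.0 + np.exp(-k)))
```

Running `block_mses` on a single depth frame against a flat reference gives the spatial weight; running it between consecutive frames gives the temporal and scene-movement weights.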
Saliency detection for videos using 3D FFT local spectra
NASA Astrophysics Data System (ADS)
Long, Zhiling; AlRegib, Ghassan
2015-03-01
Bottom-up spatio-temporal saliency detection identifies perceptually important regions of interest in video sequences. The center-surround model proves to be useful for visual saliency detection. In this work, we explore using 3D FFT local spectra as features for saliency detection within the center-surround framework. We develop a spectral location based decomposition scheme to divide a 3D FFT cube into two components, one related to temporal changes and the other related to spatial changes. Temporal saliency and spatial saliency are detected separately using features derived from each spectral component through a simple center-surround comparison method. The two detection results are then combined to yield a saliency map. We apply the same detection algorithm to different color channels (YIQ) and incorporate the results into the final saliency determination. The proposed technique is tested with the public CRCNS database. Both visual and numerical evaluations verify the promising performance of our technique.
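The spectral decomposition can be illustrated on a small spatio-temporal cube: energy in the zero temporal-frequency plane of the 3D FFT reflects purely spatial structure, while energy off that plane reflects temporal change. This is a sketch of the idea, not the paper's exact feature; function names are assumptions:

```python
import numpy as np

def spectral_components(cube):
    """Split the 3D FFT magnitude of a (time, height, width) cube into
    a temporal-change component (energy at nonzero temporal
    frequencies) and a spatial-change component (the zero
    temporal-frequency plane, DC term excluded)."""
    spec = np.abs(np.fft.fftn(cube))
    spec[0, 0, 0] = 0.0                 # discard the DC term
    temporal = spec[1:, :, :].sum()     # nonzero temporal frequencies
    spatial = spec[0, :, :].sum()       # zero temporal-frequency plane
    return temporal, spatial

def center_surround_saliency(center_cube, surround_cube):
    """Center-surround comparison: saliency as the difference of the
    two feature energies between a center cube and its surround."""
    ct, cs = spectral_components(center_cube)
    st, ss = spectral_components(surround_cube)
    return abs(ct - st) + abs(cs - ss)
```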
Geometric prediction structure for multiview video coding
NASA Astrophysics Data System (ADS)
Lee, Seok; Wey, Ho-Cheon; Park, Du-Sik
2010-02-01
One of the critical issues for successful 3D video services is how to compress the huge amount of multi-view video data efficiently. In this paper, we describe a geometric prediction structure for multi-view video coding. By exploiting the geometric relations between camera poses, we can form prediction pairs that maximize the spatial correlation between views. To analyze the relationship between camera poses, we define a mathematical view center and view distance in 3D space, and calculate a virtual center pose from the mean rotation matrix and mean translation vector. We propose an algorithm for establishing the geometric prediction structure based on view center and view distance; using this structure, inter-view prediction is performed between camera pairs of maximum spatial correlation. The prediction structure also considers scalability in coding and transmitting the multi-view videos. Experiments are conducted using the JMVC (Joint Multiview Video Coding) software on MPEG-FTV test sequences. The overall performance of the proposed prediction structure is measured in PSNR and with subjective image quality measures such as PSPNR.
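The view-center and nearest-view pairing can be sketched from camera translations alone (the paper also averages rotation matrices; translations suffice to show the idea, and the function names are assumptions):

```python
import numpy as np

def view_center(translations):
    """Virtual view center as the mean of the camera translation
    vectors."""
    return np.asarray(translations, float).mean(axis=0)

def prediction_pairs(translations):
    """Pair every view with its spatially nearest neighbour, i.e. the
    reference view expected to have maximum spatial correlation."""
    t = np.asarray(translations, float)
    pairs = []
    for i in range(len(t)):
        dist = np.linalg.norm(t - t[i], axis=1)
        dist[i] = np.inf                 # a view cannot reference itself
        pairs.append((i, int(dist.argmin())))
    return pairs
```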
Interface requirements to couple thermal-hydraulic codes to 3D neutronic codes
Langenbuch, S.; Austregesilo, H.; Velkov, K.
1997-07-01
The present situation of thermal-hydraulic codes and 3D neutronics codes is briefly described, and general considerations for coupling these codes are discussed. Two different basic approaches to coupling are identified and their relative advantages and disadvantages are discussed. The implementation of the coupling for 3D neutronics codes in the system ATHLET is presented. Meanwhile, this interface is used for coupling three different 3D neutronics codes.
Holovideo: Real-time 3D range video encoding and decoding on GPU
NASA Astrophysics Data System (ADS)
Karpinsky, Nikolaus; Zhang, Song
2012-02-01
We present a 3D video-encoding technique called Holovideo that is capable of encoding high-resolution 3D videos into standard 2D videos and then rapidly decoding the 2D videos back into 3D without significant loss of quality. Due to the nature of the algorithm, 2D video compression such as JPEG encoding with QuickTime Run Length Encoding (QTRLE) can be applied with little quality loss, resulting in an effective way to store 3D video at very small file sizes. We found that at a compression ratio of 134:1 (Holovideo to OBJ file format), the 3D geometry quality drops only negligibly. Several sets of 3D videos were captured using a structured light scanner, compressed using the Holovideo codec, and then uncompressed and displayed to demonstrate the effectiveness of the codec. With the use of OpenGL Shading Language (GLSL) shaders, the 3D video codec can encode and decode in real time. We demonstrated that for a video size of 512×512, the decoding speed is 28 frames per second (FPS) on a laptop computer using an embedded NVIDIA GeForce 9400M graphics processing unit (GPU). Encoding can be done with the same setup at 18 FPS, making this technology suitable for applications such as interactive 3D video games and 3D video conferencing.
Video coding with dynamic background
NASA Astrophysics Data System (ADS)
Paul, Manoranjan; Lin, Weisi; Lau, Chiew Tong; Lee, Bu-Sung
2013-12-01
Motion estimation (ME) and motion compensation (MC) using variable block size, sub-pixel search, and multiple reference frames (MRFs) are the major reasons for the improved coding performance of the H.264 video coding standard over other contemporary coding standards. The concept of MRFs is suitable for repetitive motion, uncovered background, non-integer pixel displacement, lighting change, etc. The index codes required for the reference frames, the computational time of ME & MC, and the memory buffer for coded frames limit the number of reference frames used in practical applications. In typical video sequences, the previous frame is used as the reference frame in 68-92% of cases. In this article, we propose a new video coding method using a reference frame, the most common frame in scene (McFIS), generated by dynamic background modeling. McFIS is more effective in terms of rate-distortion and computation time than MRF techniques, and it has an inherent capability for scene change detection (SCD) and thus adaptive group-of-pictures (GOP) size determination. We therefore integrate SCD (for GOP determination) with reference frame generation. Experimental results show that the proposed coding scheme outperforms H.264 coding with five reference frames, as well as two relevant state-of-the-art algorithms, by 0.5-2.0 dB with less computational time.
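The interplay of background modeling and SCD can be illustrated with a single-Gaussian running mean standing in for McFIS's per-pixel mixture model (a simplification; the function names and the MSE threshold are assumptions):

```python
import numpy as np

def build_background(frames, alpha=0.1):
    """Recursive (running-average) background model: a simplified
    stand-in for McFIS's per-pixel Gaussian mixture modeling."""
    bg = frames[0].astype(float)
    for f in frames[1:]:
        bg = (1 - alpha) * bg + alpha * f
    return bg

def scene_change(frame, bg, thresh=500.0):
    """Declare a scene change (start a new GOP and rebuild the
    reference) when the incoming frame no longer resembles the
    modelled background."""
    d = frame.astype(float) - bg
    return bool((d * d).mean() > thresh)
```

The same background frame serves double duty, exactly as in the abstract: it is the long-term reference for prediction, and a large residual against it flags a scene change for GOP adaptation.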
Multiview-video-plus-depth coding based on the advanced video coding standard.
Hannuksela, Miska M; Rusanovskyy, Dmytro; Su, Wenyi; Chen, Lulu; Li, Ri; Aflaki, Payman; Lan, Deyan; Joachimiak, Michal; Li, Houqiang; Gabbouj, Moncef
2013-09-01
This paper presents a multiview-video-plus-depth coding scheme, which is compatible with the advanced video coding (H.264/AVC) standard and its multiview video coding (MVC) extension. This scheme introduces several encoding and in-loop coding tools for depth and texture video coding, such as depth-based texture motion vector prediction, depth-range-based weighted prediction, joint inter-view depth filtering, and gradual view refresh. The presented coding scheme is submitted to the 3D video coding (3DV) call for proposals (CfP) of the Moving Picture Experts Group standardization committee. When measured with commonly used objective metrics against the MVC anchor, the proposed scheme provides an average bitrate reduction of 26% and 35% for the 3DV CfP test scenarios with two and three views, respectively. The observed bitrate reduction is similar according to an analysis of the results obtained for the subjective tests on the 3DV CfP submissions. PMID:23797252
A modular cross-platform GPU-based approach for flexible 3D video playback
NASA Astrophysics Data System (ADS)
Olsson, Roger; Andersson, Håkan; Sjöström, Mårten
2011-03-01
Different compression formats for stereo and multiview-based 3D video are being standardized, and software players capable of decoding and presenting these formats on different display types are a vital part of the commercialization and evolution of 3D video. However, the number of publicly available software players capable of decoding and playing multiview 3D video is still quite limited. This paper describes the design and implementation of a GPU-based real-time 3D video playback solution, built on top of cross-platform, open-source libraries for video decoding and hardware-accelerated graphics. A software architecture is presented that processes and presents high-definition 3D video efficiently in real time and, in a flexible manner, supports both current 3D video formats and emerging standards. Moreover, a set of bottlenecks in the processing of 3D video content in such a GPU-based playback solution is identified and discussed.
Recent update of the RPLUS2D/3D codes
NASA Technical Reports Server (NTRS)
Tsai, Y.-L. Peter
1991-01-01
The development of the RPLUS2D/3D codes is summarized. These codes utilize LU algorithms to solve chemical non-equilibrium flows in a body-fitted coordinate system. The motivation behind the development of these codes is the need to numerically predict chemical non-equilibrium flows for the National AeroSpace Plane Program. Recent improvements include vectorization method, blocking algorithms for geometric flexibility, out-of-core storage for large-size problems, and an LU-SW/UP combination for CPU-time efficiency and solution quality.
RELAP5-3D code validation for RBMK phenomena
Fisher, J.E.
1999-09-01
The RELAP5-3D thermal-hydraulic code was assessed against Japanese Safety Experiment Loop (SEL) and Heat Transfer Loop (HTL) tests. These tests were chosen because the phenomena present are applicable to analyses of Russian RBMK reactor designs. The assessment cases included parallel channel flow fluctuation tests at reduced and normal water levels, a channel inlet pipe rupture test, and a high power, density wave oscillation test. The results showed that RELAP5-3D has the capability to adequately represent these RBMK-related phenomena.
NASA Astrophysics Data System (ADS)
Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos
2014-05-01
This paper describes a comprehensive empirical performance evaluation of 3D video processing employing a physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmark tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, along with other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.
VISRAD, 3-D Target Design and Radiation Simulation Code
NASA Astrophysics Data System (ADS)
Li, Yingjie; Macfarlane, Joseph; Golovkin, Igor
2015-11-01
The 3-D view factor code VISRAD is widely used in designing HEDP experiments at major laser and pulsed-power facilities, including NIF, OMEGA, OMEGA-EP, ORION, LMJ, Z, and PLX. It simulates target designs by generating a 3-D grid of surface elements, utilizing a variety of 3-D primitives and surface removal algorithms, and can be used to compute the radiation flux throughout the surface-element grid by computing element-to-element view factors and solving power balance equations. Target set-up and beam pointing are facilitated by allowing users to specify positions and angular orientations in a variety of coordinate systems (e.g., that of any laser beam, target component, or diagnostic port). Analytic modeling of laser beam spatial profiles for OMEGA DPPs and NIF CPPs is used to compute laser intensity profiles throughout the grid of surface elements. We will discuss recent improvements to the software package and plans for future developments.
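The quantity accumulated per element pair in such a view-factor code is the standard differential form F12 ≈ cosθ1 cosθ2 A2 / (π r²). The sketch below evaluates it for one pair of surface elements; this is textbook radiative transfer, not VISRAD's implementation, and the function name is an assumption:

```python
import numpy as np

def element_view_factor(p1, n1, p2, n2, a2):
    """Differential element-to-element view factor
    F12 = cos(theta1) * cos(theta2) * A2 / (pi * r^2), zero when the
    elements do not face each other. n1, n2 are unit normals; p1, p2
    element centers; a2 the receiving element's area."""
    r = np.asarray(p2, float) - np.asarray(p1, float)
    d = np.linalg.norm(r)
    cos1 = np.dot(n1, r) / d      # angle at the emitting element
    cos2 = np.dot(n2, -r) / d     # angle at the receiving element
    if cos1 <= 0 or cos2 <= 0:
        return 0.0                # elements face away from each other
    return cos1 * cos2 * a2 / (np.pi * d ** 2)
```

A view-factor solver sums this quantity over all element pairs (with surface-removal tests for occlusion) and then solves the resulting power-balance system.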
Rate-constrained 3D surface estimation from noise-corrupted multiview depth videos.
Sun, Wenxiu; Cheung, Gene; Chou, Philip A; Florencio, Dinei; Zhang, Cha; Au, Oscar C
2014-07-01
Transmitting compactly represented geometry of a dynamic 3D scene from a sender can enable a multitude of imaging functionalities at a receiver, such as synthesis of virtual images at freely chosen viewpoints via depth-image-based rendering. While depth maps (projections of 3D geometry onto 2D image planes at chosen camera viewpoints) can nowadays be readily captured by inexpensive depth sensors, they are often corrupted by non-negligible acquisition noise. Given that depth maps need to be denoised and compressed at the encoder for efficient network transmission to the decoder, in this paper we consider the denoising and compression problems jointly, arguing that doing so results in better overall performance than solving the two problems separately in two stages. Specifically, we formulate a rate-constrained estimation problem in which, given a set of observed noise-corrupted depth maps, the most probable (maximum a posteriori, MAP) 3D surface is sought within a search space of surfaces whose representation size is no larger than a prespecified rate constraint. Our rate-constrained MAP solution reduces to the conventional unconstrained MAP 3D surface reconstruction when the rate constraint is loose. To solve the posed rate-constrained estimation problem, we propose an iterative algorithm in which the structure (object boundaries) and the texture (surfaces within the object boundaries) of the depth maps are optimized alternately. Using the MVC codec for compression of multiview depth video and MPEG free-viewpoint video sequences as input, experimental results show that rate-constrained estimated 3D surfaces computed by our algorithm can reduce the coding rate of depth maps by up to 32% compared with unconstrained estimated surfaces, for the same quality of synthesized virtual views at the decoder. PMID:24876124
Beam Optics Analysis - An Advanced 3D Trajectory Code
Ives, R. Lawrence; Bui, Thuc; Vogler, William; Neilson, Jeff; Read, Mike; Shephard, Mark; Bauer, Andrew; Datta, Dibyendu; Beal, Mark
2006-01-03
Calabazas Creek Research, Inc. has completed initial development of an advanced 3D program for modeling electron trajectories in electromagnetic fields. The code is being used to design complex guns and collectors. Beam Optics Analysis (BOA) is a fully relativistic, charged-particle code using adaptive finite element meshing. Geometrical input is imported from CAD programs generating ACIS-formatted files. Parametric data is entered using an intuitive graphical user interface (GUI), which also provides control of convergence, accuracy, and post-processing. The program includes a magnetic field solver, and magnetic information can be imported from Maxwell 2D/3D and other programs. The program supports thermionic emission and injected beams. Secondary electron emission is also supported, including multiple generations. Work on field emission is in progress, as well as implementation of computer optimization of both the geometry and operating parameters. The principal features of the program and its capabilities are presented.
Streamlining of the RELAP5-3D Code
Mesina, George L; Hykes, Joshua; Guillen, Donna Post
2007-11-01
RELAP5-3D is widely used by the nuclear community to simulate general thermal-hydraulic systems and has proven to be so versatile that the spectrum of transient two-phase problems that can be analyzed has increased substantially over time. To accommodate the many new types of problems that are analyzed by RELAP5-3D, both the physics and numerical methods of the code have been continuously improved. In the area of computational methods and mathematical techniques, many upgrades and improvements have been made to decrease code run time and increase solution accuracy. These include vectorization, parallelization, use of improved equation solvers for thermal hydraulics and neutron kinetics, and incorporation of improved library utilities. In the area of applied nuclear engineering, expanded capabilities include boron and level tracking models, a radiation/conduction enclosure model, feedwater heater and compressor components, fluids and corresponding correlations for modeling Generation IV reactor designs, and coupling to computational fluid dynamics solvers. Ongoing and proposed future developments include improvements to the two-phase pump model, conversion to FORTRAN 90, and coupling to more computer programs. This paper summarizes the general improvements made to RELAP5-3D, with an emphasis on streamlining the code infrastructure for improved maintenance and development. With all these past, present and planned developments, it is necessary to modify the code infrastructure to incorporate modifications in a consistent and maintainable manner. Modifying a complex code such as RELAP5-3D to incorporate new models, upgrade numerics, and optimize existing code becomes more difficult as the code grows larger. The difficulty of this, as well as the chance of introducing errors, is significantly reduced when the code is structured. To streamline the code into a structured program, a commercial restructuring tool, FOR_STRUCT, was applied to the RELAP5-3D source files.
Towards a 3D Space Radiation Transport Code
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Tripathi, R. K.; Cucinotta, F. A.; Heinbockel, J. H.; Tweed, J.
2002-01-01
High-speed computational procedures for space radiation shielding have relied on asymptotic expansions in terms of the off-axis scatter and replacement of the general geometry problem by a collection of flat plates. This type of solution was derived for application to human rated systems in which the radius of the shielded volume is large compared to the off-axis diffusion limiting leakage at lateral boundaries. Over the decades these computational codes have become relatively complete, and lateral diffusion effects are now being added. The analysis for developing a practical full 3D space shielding code is presented.
CALTRANS: A parallel, deterministic, 3D neutronics code
Carson, L.; Ferguson, J.; Rogers, J.
1994-04-01
Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementations of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.
Research and Technology Development for Construction of 3d Video Scenes
NASA Astrophysics Data System (ADS)
Khlebnikova, Tatyana A.
2016-06-01
For the last two decades, surface information in the form of conventional digital and analogue topographic maps has been supplemented by new digital geospatial products, also known as 3D models of real objects. It is shown that currently there are no defined standards for 3D scene construction technologies that could be used by Russian surveying and cartographic enterprises. The issues regarding source data requirements, their capture and transferring to create 3D scenes have not been defined yet. The accuracy issues for 3D video scenes used for measuring purposes can hardly ever be found in publications. The practicability of developing, researching, and implementing a technology for constructing 3D video scenes is substantiated by their capability to expand the field of data analysis application for environmental monitoring, urban planning, and managerial decision problems. A technology for construction of 3D video scenes with regard to specified metric requirements is offered. Techniques and methodological background are recommended for this technology, used to construct 3D video scenes based on DTMs created from satellite and aerial survey data. The results of accuracy estimation of 3D video scenes are presented.
Axisymmetric Implementation for 3D-Based DSMC Codes
NASA Technical Reports Server (NTRS)
Stewart, Benedicte; Lumpkin, F. E.; LeBeau, G. J.
2011-01-01
The primary objective in developing NASA's DSMC Analysis Code (DAC) was to provide a high fidelity modeling tool for 3D rarefied flows such as vacuum plume impingement and hypersonic re-entry flows [1]. The initial implementation has been expanded over time to offer other capabilities including a novel axisymmetric implementation. Because of the inherently 3D nature of DAC, this axisymmetric implementation uses a 3D Cartesian domain and 3D surfaces. Molecules are moved in all three dimensions but their movements are limited by physical walls to a small wedge centered on the plane of symmetry (Figure 1). Unfortunately, far from the axis of symmetry, the cell size in the direction perpendicular to the plane of symmetry (the Z-direction) may become large compared to the flow mean free path. This frequently results in inaccuracies in these regions of the domain. A new axisymmetric implementation is presented which aims to solve this issue by using Bird's approach for the molecular movement while preserving the 3D nature of the DAC software [2]. First, the computational domain is similar to that previously used, such that a wedge must still be used to define the inflow surface and solid walls within the domain. As before, molecules are created inside the inflow wedge triangles, but they are now rotated back to the symmetry plane. During the move step, molecules are moved in 3D, but instead of interacting with the wedge walls, the molecules are rotated back to the plane of symmetry at the end of the move step. This new implementation was tested for multiple flows over axisymmetric shapes, including a sphere, a cone, a double cone and a hollow cylinder. Comparisons to previous DSMC solutions and experiments, when available, are made.
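The rotation step described in this abstract (moving a molecule in full 3D, then rotating it back to the symmetry plane while preserving its radius and axial position) can be sketched as follows. This is a minimal illustration of the geometric idea only, not code from DAC; the function name and interface are assumptions.

```python
import numpy as np

def rotate_to_symmetry_plane(pos, vel):
    """Rotate a molecule about the x-axis (the symmetry axis) back to the
    x-y plane (z = 0), preserving its radius and axial coordinate.
    The velocity is rotated by the same angle, so its radial and
    azimuthal components in the molecule's local frame are unchanged."""
    x, y, z = pos
    r = np.hypot(y, z)
    if r == 0.0:
        return np.array([x, 0.0, 0.0]), np.asarray(vel, dtype=float)
    c, s = y / r, z / r               # cos/sin of the azimuthal angle
    vx, vy, vz = vel
    # Rotation by -theta about x maps (y, z) -> (r, 0)
    new_vel = np.array([vx, c * vy + s * vz, -s * vy + c * vz])
    return np.array([x, r, 0.0]), new_vel
```

Because the rotation is a rigid motion about the symmetry axis, it changes neither the molecule's speed nor its distance from the axis, which is what makes the scheme consistent with an axisymmetric flow.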
Transferring of speech movements from video to 3D face space.
Pei, Yuru; Zha, Hongbin
2007-01-01
We present a novel method for transferring speech animation recorded in low quality videos to high resolution 3D face models. The basic idea is to synthesize the animated faces by an interpolation based on a small set of 3D key face shapes which span a 3D face space. The 3D key shapes are extracted by an unsupervised learning process in 2D video space to form a set of 2D visemes which are then mapped to the 3D face space. The learning process consists of two main phases: 1) Isomap-based nonlinear dimensionality reduction to embed the video speech movements into a low-dimensional manifold and 2) K-means clustering in the low-dimensional space to extract 2D key viseme frames. Our main contribution is that we use the Isomap-based learning method to extract intrinsic geometry of the speech video space and thus to make it possible to define the 3D key viseme shapes. To do so, we need only to capture a limited number of 3D key face models by using a general 3D scanner. Moreover, we also develop a skull movement recovery method based on simple anatomical structures to enhance 3D realism in local mouth movements. Experimental results show that our method can achieve realistic 3D animation effects with a small number of 3D key face models. PMID:17093336
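The second phase of the learning process above (K-means clustering in the low-dimensional space to pick key viseme frames) can be sketched in plain NumPy. The Isomap embedding is assumed to be given as input; the function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def kmeans_key_frames(embedding, k=8, iters=50, seed=0):
    """Given a low-dimensional embedding of video frames (e.g. produced by
    Isomap, as in the paper), run plain K-means and return the index of
    the frame nearest each cluster centre -- the 2D key viseme frames."""
    rng = np.random.default_rng(seed)
    Y = np.asarray(embedding, dtype=float)
    centres = Y[rng.choice(len(Y), size=k, replace=False)]
    for _ in range(iters):
        # Assign each frame to its nearest centre
        d = np.linalg.norm(Y[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its members (keep it if empty)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = Y[labels == j].mean(axis=0)
    d = np.linalg.norm(Y[:, None, :] - centres[None, :, :], axis=2)
    near = d.argmin(axis=0)           # nearest frame to each centre
    return sorted(set(int(i) for i in near))
```

Returning actual frame indices (rather than synthetic centroids) matters here: each key viseme must correspond to a real captured frame that can be mapped to a scanned 3D face model.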
3D Finite Element Trajectory Code with Adaptive Meshing
NASA Astrophysics Data System (ADS)
Ives, Lawrence; Bui, Thuc; Vogler, William; Bauer, Andy; Shephard, Mark; Beal, Mark; Tran, Hien
2004-11-01
Beam Optics Analysis, a new 3D charged particle program, is available and in use for the design of complex 3D electron guns and charged particle devices. The code reads files directly from most CAD and solid modeling programs, includes an intuitive Graphical User Interface (GUI), and a robust mesh generator that is fully automatic. Complex problems can be set up, and analysis initiated, in minutes. The program includes a user-friendly post processor for displaying field and trajectory data using 3D plots and images. The electrostatic solver is based on the standard nodal finite element method. The magnetostatic field solver is based on the vector finite element method and is also called during the trajectory simulation process to solve for self magnetic fields. The user imports the geometry from essentially any commercial CAD program and uses the GUI to assign parameters (voltages, currents, dielectric constant) and designate emitters (including work function, emitter temperature, and number of trajectories). Then the mesh is generated automatically and analysis is performed, including mesh adaptation to improve accuracy and optimize computational resources. This presentation will provide information on the basic structure of the code, its operation, and its capabilities.
Depth-controlled 3D TV image coding
NASA Astrophysics Data System (ADS)
Chiari, Armando; Ciciani, Bruno; Romero, Milton; Rossi, Ricardo
1998-04-01
Conventional 3D-TV codecs processing one down-compatible (either left, or right) channel may optionally include the extraction of the disparity field associated with the stereo-pairs to support the coding of the complementary channel. A two-fold improvement over such approaches is proposed in this paper by exploiting 3D features retained in the stereo-pairs to reduce the redundancies in both channels, according to their visual sensitivity. Through an a priori disparity field analysis, our coding scheme separates a region of interest from the foreground/background in the reproduced volume space in order to code them selectively based on their visual relevance. Such a region of interest is here identified as the one which is focused by the shooting device. By suitably scaling the DCT coefficients in such a way that precision is reduced for the image blocks lying on less relevant areas, our approach aims at reducing the signal energy in the background/foreground patterns, while retaining finer details on the more relevant image portions. From an implementation point of view, it is worth noticing that the system proposed keeps its surplus processing power on the encoder side only. Simulation results show such improvements as a better image quality for a given transmission bit rate, or a graceful quality degradation of the reconstructed images with decreasing data-rates.
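The selective-precision idea above (coarser quantisation of DCT coefficients in blocks outside the focused region of interest) can be sketched in a few lines. The function and the `coarse_factor` knob are illustrative assumptions, not values from the paper.

```python
import numpy as np

def quantise_block(dct_block, base_step, in_focus, coarse_factor=4.0):
    """Quantise one 8x8 block of DCT coefficients. Blocks outside the
    region of interest (not in focus) get a coarser quantisation step,
    reducing precision -- and hence signal energy and bits -- in
    background/foreground areas while retaining detail where the
    shooting device was focused."""
    step = base_step * (coarse_factor if not in_focus else 1.0)
    return np.round(dct_block / step) * step
```

The maximum reconstruction error per coefficient is half the step, so a `coarse_factor` of 4 trades a four-fold precision loss in irrelevant blocks for the corresponding bit-rate saving.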
Code portability and data management considerations in the SAS3D LMFBR accident-analysis code
Dunn, F.E.
1981-01-01
The SAS3D code was produced from a predecessor in order to reduce or eliminate interrelated problems in the areas of code portability, the large size of the code, inflexibility in the use of memory and the size of cases that can be run, code maintenance, and running speed. Many conventional solutions, such as variable dimensioning, disk storage, virtual memory, and existing code-maintenance utilities, were not feasible or did not help in this case. A new data management scheme was developed, coding standards and procedures were adopted, special machine-dependent routines were written, and a portable source code processing code was written. The resulting code is quite portable, quite flexible in the use of memory and the size of cases that can be run, much easier to maintain, and faster running. SAS3D is still a large, long running code that only runs well if sufficient main memory is available.
Segmentation-based video coding
Lades, M.; Wong, Yiu-fai; Li, Qi
1995-10-01
Low bit rate video coding is gaining attention through a current wave of consumer oriented multimedia applications which aim at, e.g., video conferencing over telephone lines or wireless communication. In this work we describe a new segmentation-based approach to video coding which belongs to a class of paradigms appearing very promising among the various proposed methods. Our method uses a nonlinear measure of local variance to identify the smooth areas in an image in a more indicative and robust fashion: First, the local minima in the variance image are identified. These minima then serve as seeds for the segmentation of the image with a watershed algorithm. Regions and their contours are extracted. Motion compensation is used to predict the change of regions between previous frames and the current frame. The error signal is then quantized. To reduce the number of regions and contours, we use the motion information to assist the segmentation process and to merge regions, resulting in a further reduction in bit rate. Our scheme has been tested and good results have been obtained.
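The seeding step described above (local variance, then its local minima as watershed seeds) can be sketched with NumPy alone. This is a plain windowed variance rather than the paper's nonlinear measure, and the function names are illustrative.

```python
import numpy as np

def local_variance(img, radius=1):
    """Per-pixel variance over a (2r+1)x(2r+1) neighbourhood (edge-padded).
    Smooth areas give values near zero."""
    img = np.asarray(img, dtype=float)
    p = np.pad(img, radius, mode='edge')
    win = np.stack([
        p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        for dy in range(2 * radius + 1)
        for dx in range(2 * radius + 1)
    ])
    return win.var(axis=0)

def seed_points(var_img):
    """Strict local minima of the variance image (4-neighbourhood); these
    would seed the watershed segmentation step."""
    v = np.pad(var_img, 1, mode='constant', constant_values=np.inf)
    c = v[1:-1, 1:-1]
    is_min = ((c < v[:-2, 1:-1]) & (c < v[2:, 1:-1]) &
              (c < v[1:-1, :-2]) & (c < v[1:-1, 2:]))
    return np.argwhere(is_min)
```

Each seed grows into one region under the watershed transform, so the number of variance minima directly controls the initial region count, and hence the contour-coding cost that the motion-assisted merging later reduces.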
FARGO3D: A New GPU-oriented MHD Code
NASA Astrophysics Data System (ADS)
Benítez-Llambay, Pablo; Masset, Frédéric S.
2016-03-01
We present the FARGO3D code, recently publicly released. It is a magnetohydrodynamics code developed with special emphasis on the physics of protoplanetary disks and planet-disk interactions, and parallelized with MPI. The hydrodynamics algorithms are based on finite-difference upwind, dimensionally split methods. The magnetohydrodynamics algorithms consist of the constrained transport method to preserve the divergence-free property of the magnetic field to machine accuracy, coupled to a method of characteristics for the evaluation of electromotive forces and Lorentz forces. Orbital advection is implemented, and an N-body solver is included to simulate planets or stars interacting with the gas. We present our implementation in detail and present a number of widely known tests for comparison purposes. One strength of FARGO3D is that it can run on either graphical processing units (GPUs) or central processing units (CPUs), achieving large speed-up with respect to CPU cores. We describe our implementation choices, which allow a user with no prior knowledge of GPU programming to develop new routines for CPUs, and have them translated automatically for GPUs.
MOM3D/EM-ANIMATE - MOM3D WITH ANIMATION CODE
NASA Technical Reports Server (NTRS)
Shaeffer, J. F.
1994-01-01
compare surface-current distribution due to various initial excitation directions or electric field orientations. The program can accept up to 50 planes of field data consisting of a grid of 100 by 100 field points. These planes of data are user selectable and can be viewed individually or concurrently. With these preset limits, the program requires 55 megabytes of core memory to run. These limits can be changed in the header files to accommodate the available core memory of an individual workstation. An estimate of memory required can be made as follows: approximate memory in bytes equals (number of nodes times number of surfaces times 14 variables times bytes per word, typically 4 bytes per floating point) plus (number of field planes times number of nodes per plane times 21 variables times bytes per word). This gives the approximate memory size required to store the field and surface-current data. The total memory size is approximately 400,000 bytes plus the data memory size. The animation calculations are performed in real time at any user set time step. For Silicon Graphics Workstations that have multiple processors, this program has been optimized to perform these calculations on multiple processors to increase animation rates. The optimized program uses the SGI PFA (Power FORTRAN Accelerator) library. On single processor machines, the parallelization directives are seen as comments to the program and will have no effect on compilation or execution. MOM3D and EM-ANIMATE are written in FORTRAN 77 for interactive or batch execution on SGI series computers running IRIX 3.0 or later. The RAM requirements for these programs vary with the size of the problem being solved. A minimum of 30Mb of RAM is required for execution of EM-ANIMATE; however, the code may be modified to accommodate the available memory of an individual workstation. For EM-ANIMATE, twenty-four bit, double-buffered color capability is suggested, but not required. 
Sample executables and sample input and
The CONV-3D code for DNS CFD calculation
NASA Astrophysics Data System (ADS)
Chudanov, Vladimir; ALCF ThermHydraX Team
2014-03-01
The CONV-3D code for DNS CFD calculation of thermal hydraulics in fast reactors using supercomputers has been developed. The code is highly scalable on high performance computers such as ``Chebyshev'' and ``Lomonosov'' (Moscow State University, Russia) and Blue Gene/Q (ALCF MIRA, ANL); scalability has been reached up to 10^6 processors. The code was validated on a series of well known tests in a wide range of Rayleigh (10^6-10^16) and Reynolds (10^3-10^5) numbers. The code was also validated on the OECD/NEA blind tests of turbulent intermixing in horizontal subchannels of a fuel assembly at normal pressure and temperature (Matis-H) and of flows in a T-junction, and the IBRAE/ANL report was published. Good agreement of numerical predictions with experimental data was reached, which indicates the applicability of the developed approach for prediction of thermal hydraulics in a boundary layer at small Prandtl numbers, characteristic of liquid metal reactors. Project Name: ThermHydraX. Project Title: U.S.-Russia Collaboration on Cross-Verification and Validation in Thermal Hydraulics.
RHALE: A 3-D MMALE code for unstructured grids
Peery, J.S.; Budge, K.G.; Wong, M.K.W.; Trucano, T.G.
1993-08-01
This paper describes RHALE, a multi-material arbitrary Lagrangian-Eulerian (MMALE) shock physics code. RHALE is the successor to CTH, Sandia's 3-D Eulerian shock physics code, and will be capable of solving problems that CTH cannot adequately address. We discuss the Lagrangian solid mechanics capabilities of RHALE, which include arbitrary mesh connectivity, superior artificial viscosity, and improved material models. We discuss the MMALE algorithms that have been extended for arbitrary grids in both two- and three-dimensions. The MMALE addition to RHALE provides the accuracy of a Lagrangian code while allowing a calculation to proceed under very large material distortions. Coupling an arbitrary quadrilateral or hexahedral grid to the MMALE solution facilitates modeling of complex shapes with a greatly reduced number of computational cells. RHALE allows regions of a problem to be modeled with Lagrangian, Eulerian or ALE meshes. In addition, regions can switch from Lagrangian to ALE to Eulerian based on user input or mesh distortion. For ALE meshes, new node locations are determined with a variety of element based equipotential schemes. Element quantities are advected with donor, van Leer, or Super-B algorithms. Nodal quantities are advected with the second order SHALE or HIS algorithms. Material interfaces are determined with a modified Young's high resolution interface tracker or the SLIC algorithm. RHALE has been used to model many problems of interest to the mechanics, hypervelocity impact, and shock physics communities. Results of a sampling of these problems are presented in this paper.
3D reconstruction of rotational video microscope based on patches
NASA Astrophysics Data System (ADS)
Ma, Shijie; Qu, Yufu
2015-11-01
Due to its small field of view and shallow depth of field, a microscope can only capture 2D images of an object. In order to observe the three-dimensional structure of a micro object, a microscopy image reconstruction algorithm based on an improved patch-based multi-view stereo (PMVS) algorithm is proposed. The new algorithm improves PMVS in two aspects: first, it increases the number of propagation directions; second, during expansion, different expansion radii and expansion counts are set according to the angle between the normal vector of the seed patch and the direction vector of the line passing through the seed patch center and the camera center. Compared with PMVS, the number of 3D points produced by the new algorithm is three times that of PMVS, and the holes in the vertical side are also eliminated.
Development of 3D mobile receiver for stereoscopic video and data service in T-DMB
NASA Astrophysics Data System (ADS)
Lee, Gwangsoon; Lee, Hyun; Yun, Kugjin; Hur, Namho; Lee, Soo In
2011-02-01
In this paper, we present the development of a 3D T-DMB (three-dimensional digital multimedia broadcasting) receiver for providing 3D video and data services. First, for the 3D video service, the developed receiver is capable of decoding and playing 3D AV content that is encoded by the simulcast encoding method and transmitted via the T-DMB network. Second, the developed receiver can render stereoscopic multimedia objects delivered using the MPEG-4 BIFS technology that is also employed in T-DMB. Specifically, this paper introduces the hardware and software architecture of the 3D T-DMB receiver and its implementation. The developed 3D T-DMB receiver is capable of generating stereoscopic viewing on a glasses-free 3D mobile display; we therefore propose parameters for designing the 3D display, together with evaluating the viewing angle and distance through both computer simulation and actual measurement. Finally, the availability of the 3D video and data service is verified using an experimental system including the implemented receiver and a variety of service examples.
Code System to Simulate 3D Tracer Dispersion in Atmosphere.
2002-01-25
Version 00 SHREDI is a shielding code system which executes removal-diffusion computations for bi-dimensional shields in r-z or x-y geometries. It may also deal with monodimensional problems (infinitely high cylinders or slabs). MESYST can simulate 3D tracer dispersion in the atmosphere. Three programs are part of this system: CRE_TOPO prepares the terrain data for MESYST. NOABL calculates three-dimensional free divergence windfields over complex terrain. PAS computes tracer concentrations and depositions on a given domain. The purpose of this work is to develop a reliable simulation tool for pollutant atmospheric dispersion, which gives a realistic approach and allows one to compute the pollutant concentrations over complex terrains with good accuracy. The fractional Brownian model, which furnishes more accurate concentration values, is introduced to calculate pollutant atmospheric dispersion. The model was validated on SIESTA international experiments.
Does training with 3D videos improve decision-making in team invasion sports?
Hohmann, Tanja; Obelöer, Hilke; Schlapkohl, Nele; Raab, Markus
2016-04-01
We examined the effectiveness of video-based decision training in national youth handball teams. Extending previous research, we tested in Study 1 whether a three-dimensional (3D) video training group would outperform a two-dimensional (2D) group. In Study 2, a 3D training group was compared to a control group and a group trained with a traditional tactic board. In both studies, training duration was 6 weeks. Performance was measured in a pre- to post-retention design. The tests consisted of a decision-making task measuring quality of decisions (first and best option) and decision time (time for first and best option). The results of Study 1 showed learning effects and revealed that the 3D video group made faster first-option choices than the 2D group, but differences in the quality of options were not pronounced. The results of Study 2 revealed learning effects for both training groups compared to the control group, and faster choices in the 3D group compared to both other groups. Together, the results show that 3D video training is the most useful tool for improving choices in handball, but only in reference to decision time and not decision quality. We discuss the usefulness of a 3D video tool for training of decision-making skills outside the laboratory or gym. PMID:26207956
Scalable video transmission over Rayleigh fading channels using LDPC codes
NASA Astrophysics Data System (ADS)
Bansal, Manu; Kondi, Lisimachos P.
2005-03-01
In this paper, we investigate the important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining good decoded video quality and resilience to channel impairments. Our system consists of a video codec based on the 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of a constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. A cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use a product code structure consisting of a constant-rate LDPC/CRC code across the rows of the `blocks' of source data and an erasure-correcting systematic Reed-Solomon (RS) code as the column code. In both schemes, we use fixed-length source packets protected with unequal forward error correction coding, ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. A rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions, and both of the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform more conventional schemes such as those employing RCPC/CRC.
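The Lagrangian rate selection mentioned above can be sketched as a bisection on the multiplier: raise lambda until the cost-minimising operating point fits the rate budget. The option structure and numbers are illustrative placeholders, not values from the paper.

```python
def lagrangian_rate_allocation(options, budget, iters=60):
    """Rate-distortion optimisation sketch: each option is a candidate
    (source rate, channel code rate) pair summarised by its total
    transmitted 'rate' and expected decoded 'distortion'. Bisect on the
    Lagrange multiplier until the point minimising D + lambda*R meets
    the rate budget. Points on the convex hull of the R-D set are
    reachable this way."""
    lo, hi = 0.0, 1e9
    pick = min(options, key=lambda o: o['rate'])   # always-feasible fallback
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        cand = min(options, key=lambda o: o['distortion'] + lam * o['rate'])
        if cand['rate'] <= budget:
            pick, hi = cand, lam    # feasible: try a smaller lambda
        else:
            lo = lam                # too many bits: penalise rate harder
    return pick
```

In the unequal-protection setting, the same idea runs per packet class, with the distortion term reflecting the channel-induced loss probability under the chosen LDPC/RCPC or RS rates.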
Efficient Use of Video for 3d Modelling of Cultural Heritage Objects
NASA Astrophysics Data System (ADS)
Alsadik, B.; Gerke, M.; Vosselman, G.
2015-03-01
Currently, there is rapid development in the techniques of automated image based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality based models of cultural heritage architectures and monuments. Practically, video imaging is much easier to apply when compared to still image shooting in IBM techniques, because the latter needs thorough planning and proficiency. However, one is faced with mainly three problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects. These problems are: the low resolution of video images, the need to process a large number of short baseline video images, and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images is convenient to decrease the processing time and to create a reliable textured 3D model compared with models produced by still imaging. Two experiments for modelling a building and a monument are tested using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find out the final predicted accuracy and the model level of detail. Related to the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums and low detailed documentation.
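Frame thinning with blur rejection, as described above, can be sketched as: subsample the sequence to break up short baselines, score each candidate with a cheap sharpness measure, and drop the most blurred. The variance-of-Laplacian score is a standard blur proxy; the function names and the `step`/`blur_quantile` knobs are illustrative assumptions, not the paper's method.

```python
import numpy as np

def sharpness(gray):
    """Variance of a discrete Laplacian: low values indicate a blurred
    frame (e.g. from camera shake)."""
    g = np.asarray(gray, dtype=float)
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
           np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4.0 * g)
    return lap[1:-1, 1:-1].var()      # crop wrap-around edges

def pick_frames(frames, step, blur_quantile=0.3):
    """Thin a dense video sequence: keep every `step`-th frame, then drop
    the most blurred fraction of those."""
    idx = list(range(0, len(frames), step))
    scores = np.array([sharpness(frames[i]) for i in idx])
    keep = scores >= np.quantile(scores, blur_quantile)
    return [i for i, k in zip(idx, keep) if k]
```

The `step` would be chosen from the desired baseline/overlap between consecutive kept frames, so that coverage of the object is preserved while redundant short-baseline images are discarded.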
Current status of the WHAMS-3D code
Belytschko, T.; Kennedy, J.M.
1987-03-01
The program WHAMS-3D is an explicit time integration program which can be used for frames, shells, plates and continua in three dimensions. Both material nonlinearities due to elasto-plastic behavior and geometric nonlinearities due to large displacements can be treated. The program has been developed to serve as a test-bed for research into methods for nonlinear structural dynamics, but it can also be used for production calculations. The program is quite compact, so it can be coupled with other codes. The program employs a finite element format, so that it possesses considerable versatility in modeling complex shapes and boundary conditions. The element library consists of the following: quadrilateral and triangular plate-shell elements, a beam element, a spring element and a hexahedral continuum element. In addition, a rigid linkage is included which permits the efficient modeling of very stiff portions of a structure, such as the bottom ring of a core barrel. In a rigid linkage, the motion of a master node defines the motion of all slave nodes linked to the master node. This option is also useful for eccentrically connected elements where the midlines of the connected elements do not coincide, as for example, in stiffeners. Time integration is performed by the central difference method. The mass matrix is diagonal (lumped), so no equations need be solved. Different time steps can be used in different parts of the mesh.
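The last point above (central difference integration with a lumped mass matrix, so no linear system is ever solved) can be illustrated with a one-step update. This is a generic textbook sketch, not code from WHAMS-3D; the force-callback interface is an assumption.

```python
import numpy as np

def central_difference_step(u, v, a, dt, mass, internal_force, external_force):
    """One explicit central-difference step. Because the mass matrix is
    diagonal (lumped), the only 'solve' is an elementwise divide by the
    nodal masses."""
    v_half = v + 0.5 * dt * a                  # velocity at t + dt/2
    u_new = u + dt * v_half                    # displacement at t + dt
    a_new = (external_force(u_new) - internal_force(u_new)) / mass
    v_new = v_half + 0.5 * dt * a_new          # velocity at t + dt
    return u_new, v_new, a_new
```

The scheme is conditionally stable: the time step must stay below a limit set by the highest element frequency, which is why codes like WHAMS-3D allow different time steps in different parts of the mesh.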
A Novel 2D-to-3D Video Conversion Method Using Time-Coherent Depth Maps
Yin, Shouyi; Dong, Hao; Jiang, Guangli; Liu, Leibo; Wei, Shaojun
2015-01-01
In this paper, we propose a novel 2D-to-3D video conversion method for 3D entertainment applications. 3D entertainment is getting more and more popular and can be found in many contexts, such as TV and home gaming equipment. 3D image sensors are a new method to produce stereoscopic video content conveniently and at a low cost, and can thus meet the urgent demand for 3D videos in the 3D entertainment market. Generally, a 2D image sensor and a 2D-to-3D conversion chip can compose a 3D image sensor. Our study presents a novel 2D-to-3D video conversion algorithm which can be adopted in a 3D image sensor. In our algorithm, a depth map is generated by combining a global depth gradient and a local depth refinement for each frame of 2D video input. The global depth gradient is computed according to image type, while the local depth refinement is related to color information. As input 2D video content consists of a number of video shots, the proposed algorithm reuses the global depth gradient of frames within the same video shot to generate time-coherent depth maps. The experimental results prove that this novel method can adapt to different image types, reduce computational complexity and improve the temporal smoothness of generated 3D video. PMID:26131674
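The depth-map construction above (a global depth gradient chosen per image type, reused across a shot, blended with a colour-driven local refinement) can be sketched as follows. The specific refinement rule, the `beta` weight, and the function names are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def vertical_gradient(h, w):
    """A common global depth prior: top of the frame far, bottom near."""
    return np.tile(np.linspace(0.0, 1.0, h)[:, None], (1, w))

def depth_map(frame, global_gradient, beta=0.1):
    """Blend the shot-level global gradient with a local refinement
    driven by the frame's colour/luminance, producing a per-frame depth
    map in [0, 1]. Reusing one `global_gradient` for every frame of a
    shot is what gives the maps their temporal coherence."""
    lum = np.asarray(frame, dtype=float)
    lum = (lum - lum.min()) / (np.ptp(lum) + 1e-9)   # normalise to [0, 1]
    d = (1.0 - beta) * global_gradient + beta * lum
    return np.clip(d, 0.0, 1.0)
```

Because the expensive part (selecting the global gradient from the image type) runs once per shot rather than once per frame, the per-frame cost reduces to the cheap local blend.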
3D surface reconstruction based on image stitching from gastric endoscopic video sequence
NASA Astrophysics Data System (ADS)
Duan, Mengyao; Xu, Rong; Ohya, Jun
2013-09-01
This paper proposes a method for reconstructing 3D detailed structures of internal organs, such as the gastric wall, from endoscopic video sequences. The proposed method consists of four major steps: feature-point-based 3D reconstruction, 3D point cloud stitching, dense point cloud creation and Poisson surface reconstruction. Before the first step, we partition one video sequence into groups, where each group consists of two successive frames (image pairs), and each pair in each group contains one overlapping part, which is used as a stitching region. First, the 3D point cloud of each group is reconstructed by utilizing structure from motion (SFM). Secondly, a scheme based on SIFT features registers and stitches the obtained 3D point clouds, by estimating the transformation matrix of the overlapping part between different groups with high accuracy and efficiency. Thirdly, we select the most robust SIFT feature points as the seed points, and then obtain the dense point cloud from the sparse point cloud via a depth testing method presented by Furukawa. Finally, by utilizing Poisson surface reconstruction, polygonal patches for the internal organs are obtained. Experimental results demonstrate that the proposed method achieves high accuracy and efficiency for 3D reconstruction of the gastric surface from an endoscopic video sequence.
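The stitching step above needs the transform aligning two groups' point clouds from their matched features. Assuming the matches are given and the transform is rigid, the classical least-squares solution is the Kabsch algorithm; this generic sketch stands in for the paper's estimation scheme.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping matched 3D points
    src -> dst (Kabsch algorithm via SVD). In a pipeline like the one
    described, the correspondences would come from SIFT features in the
    overlapping region of two groups; here they are just paired arrays."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

In practice the estimate would be wrapped in RANSAC to reject SIFT mismatches, and a similarity transform (with scale) is needed instead when the per-group SFM reconstructions have independent scales.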
Moving Human Path Tracking Based on Video Surveillance in 3d Indoor Scenarios
NASA Astrophysics Data System (ADS)
Zhou, Yan; Zlatanova, Sisi; Wang, Zhe; Zhang, Yeting; Liu, Liu
2016-06-01
Video surveillance systems are increasingly used for a variety of 3D indoor applications. We can analyse human behaviour, discover and avoid crowded areas, monitor human traffic, and so forth. In this paper we concentrate on the use of surveillance cameras to track and reconstruct the path a person has followed. For this purpose we integrated video surveillance data with a 3D indoor model of the building and developed a single-human moving-path tracking method. We process the surveillance videos to detect single-human moving traces; then we match the depth information of the 3D scenes to the constructed 3D indoor network model and define the human traces in 3D indoor space. Finally, the single-human traces extracted from multiple cameras are connected with the help of the connectivity provided by the 3D network model. Using this approach, we can reconstruct the entire walking path. The experiments conducted with a single person have verified the effectiveness and robustness of the method.
Qi, Jin; Yang, Zhiyong
2014-01-01
Real-time human activity recognition is essential for human-robot interaction in assisted healthy independent living. Most previous work in this area has been performed on traditional two-dimensional (2D) videos, using both global and local methods. Since 2D videos are sensitive to changes in lighting conditions, view angle, and scale, researchers have begun to explore applications of 3D information to human activity understanding in recent years. Unfortunately, features that work well on 2D videos usually do not perform well on 3D videos, and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling, and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected onto the dictionaries and a set of sparse histograms of the projection coefficients is constructed as the feature representation of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms the state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications. PMID:25473850
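A minimal sketch of the projection-and-histogram stage (the second step of the pipeline) is shown below; the ICA-learned dictionary is replaced by a random one and a least-squares projection stands in for true sparse coding, so all sizes and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.normal(size=(30, 10))            # stand-in for an ICA-learned dictionary
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms

def histogram_feature(volumes, D, bins=8):
    """Project space-time volumes onto the dictionary and histogram the
    projection coefficients (least squares replaces the paper's sparse
    projection in this sketch)."""
    coeffs, *_ = np.linalg.lstsq(D, volumes.T, rcond=None)
    hist, _ = np.histogram(coeffs, bins=bins, range=(-3, 3), density=True)
    return hist

volumes = rng.normal(size=(20, 30))      # 20 space-time volumes of dimension 30
feat = histogram_feature(volumes, D)     # one fixed-length feature vector
```

In the full method, one such histogram per dictionary would be concatenated and fed to the SVM classifier.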
A 3D-Video-Based Computerized Analysis of Social and Sexual Interactions in Rats
Matsumoto, Jumpei; Urakawa, Susumu; Takamura, Yusaku; Malcher-Lopes, Renato; Hori, Etsuro; Tomaz, Carlos; Ono, Taketoshi; Nishijo, Hisao
2013-01-01
A large number of studies have analyzed social and sexual interactions between rodents in relation to neural activity. Computerized video analysis has been successfully used to detect numerous behaviors quickly and objectively; however, to date only 2D video recording has been used, which cannot determine the 3D locations of animals and encounters difficulties in tracking animals when they are overlapping, e.g., when mounting. To overcome these limitations, we developed a novel 3D video analysis system for examining social and sexual interactions in rats. A 3D image was reconstructed by integrating images captured by multiple depth cameras at different viewpoints. The 3D positions of body parts of the rats were then estimated by fitting skeleton models of the rats to the 3D images using a physics-based fitting algorithm, and various behaviors were recognized based on the spatio-temporal patterns of the 3D movements of the body parts. Comparisons between the data collected by the 3D system and those by visual inspection indicated that this system could precisely estimate the 3D positions of body parts for 2 rats during social and sexual interactions with few manual interventions, and could compute the traces of the 2 animals even during mounting. We then analyzed the effects of AM-251 (a cannabinoid CB1 receptor antagonist) on male rat sexual behavior, and found that AM-251 decreased movements and trunk height before sexual behavior, but increased the duration of head-head contact during sexual behavior. These results demonstrate that the use of this 3D system in behavioral studies could open the door to new approaches for investigating the neuroscience of social and sexual behavior. PMID:24205238
3D video-based deformation measurement of the pelvis bone under dynamic cyclic loading
2011-01-01
Background Dynamic three-dimensional (3D) deformation of the pelvic bones is a crucial factor in the successful design and longevity of complex orthopaedic oncological implants. The current solutions are often not very promising for the patient; it would therefore be valuable to measure the dynamic 3D deformation of the whole pelvic bone in order to obtain a more realistic dataset for better implant design. We therefore hypothesized that it would be possible to combine a material testing machine with a 3D video motion capturing system, as used in clinical gait analysis, to measure the sub-millimetre deformation of a whole pelvis specimen. Method A pelvis specimen was placed in a standing position on a material testing machine. Passive reflective markers, traceable by the 3D video motion capturing system, were fixed to the bony surface of the pelvis specimen. While a dynamic sinusoidal load was applied, the 3D movement of the markers was recorded by the cameras, and the 3D deformation of the pelvis specimen was then computed. The accuracy of the 3D movement of the markers was verified against a 3D displacement curve with a step function generated by a manually driven 3D micro-motion stage. Results The resulting accuracy of the measurement system depended on the number of cameras tracking a marker. The noise level for a marker seen by two cameras during the stationary phase of the calibration procedure was ± 0.036 mm, and ± 0.022 mm if tracked by 6 cameras. The detectable 3D movement performed by the 3D micro-motion stage was smaller than the noise level of the 3D video motion capturing system. The limiting factor of the setup was therefore the noise level, which resulted in a measurement accuracy for the dynamic test setup of ± 0.036 mm. Conclusion This 3D test setup opens new possibilities in dynamic testing of a wide range of materials, like anatomical specimens, biomaterials, and their combinations. The resulting 3D-deformation dataset can be used for a better estimation of material
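The kind of computation involved can be sketched with synthetic marker traces: two reflective markers with Gaussian noise at roughly the reported level, a sinusoidal 0.2 mm deformation, and the peak-to-peak inter-marker distance as the deformation estimate (all values here are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 2, 200)                      # 2 s of capture at 100 Hz
noise = 0.036                                   # mm, reported noise level

# two markers on the specimen, 100 mm apart; a sinusoidal load cyclically
# changes their separation by 0.2 mm (made-up amplitude)
m1 = np.zeros((200, 3)) + rng.normal(0, noise / 3, (200, 3))
m2 = np.tile([100.0, 0.0, 0.0], (200, 1)) + rng.normal(0, noise / 3, (200, 3))
m2[:, 0] += 0.2 * np.sin(2 * np.pi * 1.0 * t)   # 1 Hz cyclic deformation

dist = np.linalg.norm(m2 - m1, axis=1)          # inter-marker distance trace
deformation = dist.max() - dist.min()           # peak-to-peak deformation, mm
```

With a ± 0.036 mm noise floor, sub-millimetre deformations of this size sit comfortably above the system's resolution, which is the point the validation makes.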
Dhulipalla, Ravindranath; Marella, Yamuna; Katuri, Kishore Kumar; Nagamani, Penupothu; Talada, Kishore; Kakarlapudi, Anusha
2015-01-01
Background: There is limited evidence about the distinct effect of 3D oral health education videos over conventional 2D projections in improving oral health knowledge. This randomized controlled trial was done to test the effect of 3D oral health educational videos among first-year dental students. Materials and Methods: 80 first-year dental students were enrolled and divided into two groups (test and control). The test group was shown 3D animations, and the control group regular 2D video projections, pertaining to periodontal anatomy, etiology, presenting conditions, preventive measures, and treatment of periodontal problems. The effect of 3D animation was evaluated using a questionnaire consisting of 10 multiple-choice questions given to all participants at baseline, immediately after, and 1 month after the intervention. Clinical parameters like Plaque Index (PI), Gingival Bleeding Index (GBI), and Oral Hygiene Index Simplified (OHI-S) were measured at baseline and at the 1-month follow-up. Results: A significant difference in post-intervention knowledge scores was found between the groups, as assessed by unpaired t-test (p<0.001), at baseline, immediately after, and after 1 month. At baseline, all the clinical parameters in both groups were similar and showed a significant reduction (p<0.001) after 1 month, whereas no significant difference was noticed post-intervention between the groups. Conclusion: 3D animation videos are more effective than 2D videos in periodontal disease education and knowledge recall. The results also demonstrate that 3D animation provides better visual comprehension for students and greater health care outcomes. PMID:26759805
3D neutronic codes coupled with thermal-hydraulic system codes for PWR, BWR and VVER reactors
Langenbuch, S.; Velkov, K.; Lizorkin, M.
1997-07-01
This paper describes the objectives of code development for coupling 3D neutronics codes with thermal-hydraulic system codes. The present status of coupling ATHLET with three 3D neutronics codes for VVER and LWR reactors is presented. After describing the basic features of the 3D neutronics codes BIPR-8 from the Kurchatov Institute, DYN3D from the Research Center Rossendorf, and QUABOX/CUBBOX from GRS, first applications of the coupled codes to different transient and accident scenarios are presented. The need for further investigation is discussed.
NASA Astrophysics Data System (ADS)
Altschuler, Bruce R.; Oliver, William R.; Altschuler, Martin D.
1996-02-01
We describe a system for rapid and convenient video data acquisition and 3-D numerical coordinate data calculation able to provide precise 3-D topographical maps and 3-D archival data sufficient to reconstruct a 3-D virtual reality display of a crime scene or mass disaster area. Under a joint U.S. Army/U.S. Air Force project with collateral U.S. Navy support to create a 3-D surgical robotic inspection device -- a mobile, multi-sensor robotic surgical assistant to aid the surgeon in diagnosis, continual surveillance of patient condition, and robotic surgical telemedicine of combat casualties -- the technology is being perfected for remote, non-destructive, quantitative 3-D mapping of objects of varied sizes. This technology is being advanced with hyper-speed parallel video technology and compact, very fast laser electro-optics, such that 3-D surface map data will shortly be acquired within the time frame of conventional 2-D video. With simple field-capable calibration and mobile or portable platforms, the crime scene investigator could set up and survey the entire crime scene, or portions of it at high resolution, with almost the simplicity and speed of video or still photography. The survey apparatus would record relative position and location, and instantly archive thousands of artifacts at the site with 3-D data points capable of creating unbiased virtual reality reconstructions, or actual physical replicas, for the investigators, prosecutors, and jury.
Error resiliency of distributed video coding in wireless video communication
NASA Astrophysics Data System (ADS)
Ye, Shuiming; Ouaret, Mourad; Dufaux, Frederic; Ansorge, Michael; Ebrahimi, Touradj
2008-08-01
Distributed Video Coding (DVC) is a new paradigm in video coding, based on the Slepian-Wolf and Wyner-Ziv theorems. DVC offers a number of potential advantages: flexible partitioning of the complexity between the encoder and decoder, robustness to channel errors due to intrinsic joint source-channel coding, codec independent scalability, and multi-view coding without communications between the cameras. In this paper, we evaluate the performance of DVC in an error-prone wireless communication environment. We also present a hybrid spatial and temporal error concealment approach for DVC. Finally, we perform a comparison with a state-of-the-art AVC/H.264 video coding scheme in the presence of transmission errors.
Layered Wyner-Ziv video coding.
Xu, Qian; Xiong, Zixiang
2006-12-01
Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks. PMID:17153952
3-D localization of gamma ray sources with coded apertures for medical applications
NASA Astrophysics Data System (ADS)
Kaissas, I.; Papadimitropoulos, C.; Karafasoulis, K.; Potiriadis, C.; Lambropoulos, C. P.
2015-09-01
Several small gamma cameras for radioguided surgery using CdTe or CdZnTe have parallel or pinhole collimators. Coded aperture imaging is a well-known method for gamma ray source directional identification, applied in astrophysics mainly. The increase in efficiency due to the substitution of the collimators by the coded masks renders the method attractive for gamma probes used in radioguided surgery. We have constructed and operationally verified a setup consisting of two CdTe gamma cameras with Modified Uniform Redundant Array (MURA) coded aperture masks of rank 7 and 19 and a video camera. The 3-D position of point-like radioactive sources is estimated via triangulation using decoded images acquired by the gamma cameras. We have also developed code for both fast and detailed simulations and we have verified the agreement between experimental results and simulations. In this paper we present a simulation study for the spatial localization of two point sources using coded aperture masks with rank 7 and 19.
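The triangulation step (3-D localization of a point source from the two decoded directions) reduces to finding the closest point between two viewing rays; the sketch below assumes the source directions have already been decoded from the MURA images:

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the common perpendicular of two viewing rays, given
    camera positions p1, p2 and unit direction vectors d1, d2 toward the
    source (the directions are assumed inputs here, not decoded)."""
    # solve [d1 -d2] [s, u]^T = p2 - p1 in the least-squares sense
    A = np.stack([d1, -d2], axis=1)
    s, u = np.linalg.lstsq(A, p2 - p1, rcond=None)[0]
    return 0.5 * ((p1 + s * d1) + (p2 + u * d2))

src = np.array([5.0, 3.0, 20.0])                 # hypothetical point source
p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])
d1 = (src - p1) / np.linalg.norm(src - p1)       # decoded direction, camera 1
d2 = (src - p2) / np.linalg.norm(src - p2)       # decoded direction, camera 2
est = triangulate(p1, d1, p2, d2)                # estimated 3-D position
```

With noisy decoded directions the two rays no longer intersect, and the midpoint of their common perpendicular is the usual least-squares position estimate.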
The 3D Human Motion Control Through Refined Video Gesture Annotation
NASA Astrophysics Data System (ADS)
Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.
In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces, such as remote controllers with motion-sensing technology on the Nintendo Wii [1]. In particular, video-based human-computer interaction (HCI) techniques have been applied to games; a representative example is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI has the great benefit of freeing players from the intractable game controller. Moreover, video-based HCI is crucial for communication between humans and computers, since it is intuitive, accessible, and inexpensive. On the other hand, extracting semantic low-level features from video human motion data is still a major challenge: the level of accuracy is highly dependent on each subject's characteristics and on environmental noise. Of late, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports, 'Angelina Jolie' in the Beowulf movie) and for analyzing motions for specific performances (e.g., 'golf swing' and 'walking'). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which a column corresponds to a human sub-body part and a row represents a time frame of the data capture. Thus, we can extract a sub-body part's motion simply by selecting specific columns. Unlike the low-level feature values of video human motion, the entries of the 3D motion-capture data matrix are not pixel values, but are closer to the human level of semantics.
International "Intercomparison of 3-Dimensional (3D) Radiation Codes" (I3RC)
NASA Technical Reports Server (NTRS)
Cahalan, Robert F.; Einaudi, Franco (Technical Monitor)
2000-01-01
An international "Intercomparison of 3-Dimensional (3D) Radiation Codes" (I3RC) has been initiated. It is endorsed by the GEWEX Radiation Panel and funded jointly by the United States Department of Energy ARM program and by the National Aeronautics and Space Administration Radiation Sciences program. It is a 3-phase effort that has as its goals to: (1) understand the errors and limits of 3D methods; (2) provide 'baseline' cases for future 3D code development; (3) promote sharing of 3D tools; (4) derive guidelines for 3D tool selection; and (5) improve atmospheric science education in 3D radiation.
A Magnetic Diagnostic Code for 3D Fusion Equilibria
Samuel Aaron Lazerson
2012-07-27
A synthetic magnetic diagnostics code for fusion equilibria is presented. This code calculates the response of various magnetic diagnostics to the equilibria produced by the VMEC and PIES codes. This allows for treatment of equilibria with both good nested flux surfaces and those with stochastic regions. DIAGNO v2.0 builds upon previous codes through the implementation of a virtual casing principle. The code is validated against a vacuum shot on the Large Helical Device where the vertical field was ramped. As an exercise of the code, the diagnostic response for various equilibria is calculated on the Large Helical Device (LHD).
A Magnetic Diagnostic Code for 3D Fusion Equilibria
Samuel A. Lazerson, S. Sakakibara and Y. Suzuki
2013-03-12
A synthetic magnetic diagnostics code for fusion equilibria is presented. This code calculates the response of various magnetic diagnostics to the equilibria produced by the VMEC and PIES codes. This allows for treatment of equilibria with both good nested flux surfaces and those with stochastic regions. DIAGNO v2.0 builds upon previous codes through the implementation of a virtual casing principle. The code is validated against a vacuum shot on the Large Helical Device (LHD) where the vertical field was ramped. As an exercise of the code, the diagnostic response for various equilibria is calculated on the LHD.
Gray coded trapezoidal fringes for 3-D surface-shape measurement
NASA Astrophysics Data System (ADS)
Pérez, Oscar G.; Flores, Jorge L.; García-Torales, G.; Muñoz-G, J. A.; Soto, Horacio; Balderas, Sandra E.
2014-09-01
We propose a two-step trapezoidal-pattern phase-shifting method for 3-D surface-shape measurements. Shape measurements by trapezoidal phase-shifting methods require high-quality trapezoidal patterns. Furthermore, most video projectors are nonlinear, making it difficult to generate a high-quality phase without nonlinearity calibration and correction. To overcome these limitations, we propose a method for synthesizing trapezoidal intensity fringes as a way to solve the problems caused by projector/camera gamma nonlinearity. The fringe generation technique consists of projecting and acquiring a temporal sequence of strictly binary color patterns (Gray code), whose (adequately weighted) average leads to trapezoidal fringe patterns with the required number of bits, allowing a reliable three-dimensional profile reconstruction using phase-shifting methods. Validation experiments are presented.
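The core idea, that a temporal average of strictly binary projected patterns yields a trapezoidal fringe unaffected by projector gamma, can be demonstrated with uniformly weighted shifted stripes (the paper's actual Gray-code sequence and weighting are not reproduced here):

```python
import numpy as np

def trapezoid_from_binary(width, period, n_patterns=4):
    """Temporal average of shifted, strictly binary stripe patterns; the
    average is trapezoidal along x. A simplified stand-in for the paper's
    weighted Gray-code averaging."""
    x = np.arange(width)
    patterns = [((x + k) % period < period // 2).astype(float)
                for k in range(n_patterns)]    # each frame is only 0 or 1
    return np.mean(patterns, axis=0)

fringe = trapezoid_from_binary(width=64, period=16, n_patterns=4)
```

Because every projected frame contains only the values 0 and 1, any monotonic projector nonlinearity leaves the individual patterns unchanged, so the averaged trapezoid is gamma-free by construction.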
3D filtering technique in presence of additive noise in color videos implemented on DSP
NASA Astrophysics Data System (ADS)
Ponomaryov, Volodymyr I.; Montenegro-Monroy, Hector; Palacios, Alfredo
2014-05-01
A filtering method for color videos contaminated by additive noise is presented. The proposed framework employs three filtering stages: spatial similarity filtering, neighboring-frame denoising, and spatial post-processing smoothing. The difference from other state-of-the-art filtering methods is that this approach, based on fuzzy logic, analyses basic and related gradient values between neighboring pixels in a 7 × 7 sliding window in the vicinity of a central pixel in each of the RGB channels. Then, the similarity measures between the analogous pixels in the color bands are taken into account during the denoising. Next, two neighboring video frames are analyzed together, estimating local motions between the frames using a block matching procedure. In the final stage, edges and smoothed areas are processed differently in the current frame during the post-processing filtering. Numerous simulation results confirm that this 3D fuzzy filter performs better than other state-of-the-art methods, such as 3D-LLMMSE, WMVCE, RFMDAF, FDARTF G, VBM3D and NLM, in terms of objective criteria (PSNR, MAE, NCD and SSIM) as well as subjective perception via the human vision system on different color videos. An efficiency analysis of the designed and the other mentioned filters has been performed on the DSPs TMS320DM642 and TMS320DM648 by Texas Instruments through MATLAB and the Simulink module, showing that the novel 3D fuzzy filter can be used in real-time processing applications.
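A toy version of the first (spatial similarity) stage is sketched below; a Gaussian membership function over intensity differences stands in for the paper's fuzzy gradient rules, and only one channel of one frame is processed:

```python
import numpy as np

def fuzzy_spatial_filter(channel, half=3, sigma=20.0):
    """Weight each neighbour in a 7x7 window by a fuzzy 'similar'
    membership of its intensity difference to the centre pixel, then take
    the weighted mean (the Gaussian membership is illustrative only)."""
    h, w = channel.shape
    out = channel.copy()
    for i in range(half, h - half):
        for j in range(half, w - half):
            win = channel[i - half:i + half + 1, j - half:j + half + 1]
            mu = np.exp(-((win - channel[i, j]) / sigma) ** 2)  # membership
            out[i, j] = (mu * win).sum() / mu.sum()
    return out

rng = np.random.default_rng(3)
noisy = np.full((16, 16), 100.0) + rng.normal(0, 10, (16, 16))
den = fuzzy_spatial_filter(noisy)     # interior pixels are smoothed
```

Pixels similar to the centre dominate the average while outliers receive low membership, which is how the fuzzy weighting denoises flat regions without averaging across edges.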
A robust coding scheme for packet video
NASA Technical Reports Server (NTRS)
Chen, Y. C.; Sayood, Khalid; Nelson, D. J.
1991-01-01
We present a layered packet video coding algorithm based on a progressive transmission scheme. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.
A robust coding scheme for packet video
NASA Technical Reports Server (NTRS)
Chen, Yun-Chung; Sayood, Khalid; Nelson, Don J.
1992-01-01
A layered packet video coding algorithm based on a progressive transmission scheme is presented. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.
Video reframing relying on panoramic estimation based on a 3D representation of the scene
NASA Astrophysics Data System (ADS)
de Simon, Agnes; Figue, Jean; Nicolas, Henri
2000-05-01
This paper describes a new method for creating mosaic images from an original video and for computing a new sequence in which some camera parameters, like image size, scale factor, and view angle, are modified. A mosaic image is a representation of the full scene observed by a moving camera during its displacement. It provides a wide-angle view of the scene from a sequence of images shot with a narrow-angle camera. This paper proposes a method to create a virtual sequence from a calibrated original video and a rough 3D model of the scene. A 3D relationship between original and virtual images gives the corresponding pixels in different images for the same 3D point in the scene model. To texture the model with natural textures obtained from the original sequence, a criterion based on constraints related to the temporal variations of the background and on 3D geometric considerations is used. Finally, in the presented method, the textured 3D model is used to recompute a new sequence of images with a possibly different point of view and camera aperture angle. The algorithm has been tested on virtual sequences, and the results obtained so far are encouraging.
3D MPEG-2 video transmission over broadband network and broadcast channels
NASA Astrophysics Data System (ADS)
Gagnon, Gilles; Subramaniam, Suganthan; Vincent, Andre
2001-06-01
This paper explores the transmission of MPEG-2 compressed stereoscopic (3-D) video over broadband networks and digital television (DTV) broadcast channels. A system has been developed to perform 3-D (stereoscopic) MPEG-2 video encoding, transmission, and decoding over broadband networks in real time. Such a system can benefit applications where a depiction of the relative positions of objects in 3-dimensional space is critical, by providing visual cues along the sight axis. Applications such as tele-medicine, remote surveillance, tele-education, entertainment, and others could benefit from such a system since it conveys an added viewing experience. For simplicity and cost efficiency the system is kept as simple as possible while offering a certain degree of control over the encoding and decoding platforms. Data exchange is done with TCP/IP for control between the server and client and with UDP/IP for the MPEG-2 transport streams delivered to the client. Parameters such as the encoding rate can be set independently for the left and right viewing channels to satisfy network bandwidth restrictions while maintaining satisfactory quality. Using this system, transmission of stereoscopic MPEG-2 transport streams (video and audio) has been performed over a 155 Mbps ATM network shared with other video transactions between server and clients. Preliminary results have shown that the system is reasonably robust to network impairments, making it usable in relatively loaded networks. An innovative technique for broadcasting Standard Definition Television 3-D video using an ATSC-compatible encoding and broadcasting system is also presented. This technique requires a simple video multiplexer before the ATSC encoding process and a slight modification at the receiver after ATSC decoding.
Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding
Wu, Yueying; Jia, Kebin; Gao, Guandong
2016-01-01
In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree brings extremely high computational complexity. Innovative work on improving the coding tree to further reduce encoding time is presented in this paper. A novel low-complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. First, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP), and content change (CC). Second, a CU coding tree probability model is proposed for modeling and predicting the CU distribution. Finally, a CU coding tree probability update is proposed, aiming to address the probabilistic model distortion caused by CC. Experimental results show that the proposed low-complexity CU coding tree mechanism significantly reduces encoding time, by 27% for lossy coding and 42% for visually lossless and lossless coding. The proposed mechanism improves coding performance under various application conditions. PMID:26999741
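The probability model can be caricatured as follows: track how often each CU depth wins, and prune the rate-distortion evaluation of depths whose empirical probability drops below a threshold (the counts, threshold, and update rule here are illustrative, not the paper's):

```python
import numpy as np

class CUSplitModel:
    """Toy CU coding-tree probability model: count how often each CU depth
    is chosen and skip rate-distortion checks for unlikely depths."""
    def __init__(self, n_depths=4):
        self.counts = np.ones(n_depths)          # Laplace-smoothed counts

    def update(self, chosen_depth):
        """Record the depth that won the RD competition for a CU."""
        self.counts[chosen_depth] += 1

    def depths_to_check(self, threshold=0.05):
        """Return only the depths whose probability exceeds the threshold."""
        p = self.counts / self.counts.sum()
        return [d for d in range(len(p)) if p[d] >= threshold]

model = CUSplitModel()
for _ in range(60):          # homogeneous content: large CUs keep winning,
    model.update(0)          # so deeper splits become improbable
candidates = model.depths_to_check()
```

A periodic reset or re-estimation of the counts would play the role of the paper's probability update, which guards against the model drifting when the content changes.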
A perceptual quality metric for high-definition stereoscopic 3D video
NASA Astrophysics Data System (ADS)
Battisti, F.; Carli, M.; Stramacci, A.; Boev, A.; Gotchev, A.
2015-03-01
The use of 3D video is growing in several fields, such as entertainment, military simulations, and medical applications. However, the process of recording, transmitting, and processing 3D video is prone to errors, producing artifacts that may affect the perceived quality. A challenging task nowadays is the definition of a new metric able to predict the perceived quality with low computational complexity, so that it can be used in real-time applications. Research in this field is very active due to the complexity of analyzing the influence of stereoscopic cues. In this paper we present a novel stereoscopic metric based on the combination of relevant features, able to predict the subjective quality rating more accurately.
3D video analysis of the novel object recognition test in rats.
Matsumoto, Jumpei; Uehara, Takashi; Urakawa, Susumu; Takamura, Yusaku; Sumiyoshi, Tomiki; Suzuki, Michio; Ono, Taketoshi; Nishijo, Hisao
2014-10-01
The novel object recognition (NOR) test has been widely used to test memory function. We developed a 3D computerized video analysis system that estimates nose contact with an object in Long Evans rats to analyze object exploration during NOR tests. The results indicate that the 3D system reproducibly and accurately scores the NOR test. Furthermore, the 3D system captures a 3D trajectory of the nose during object exploration, enabling detailed analyses of spatiotemporal patterns of object exploration. The 3D trajectory analysis revealed a specific pattern of object exploration in the sample phase of the NOR test: normal rats first explored the lower parts of objects and then gradually explored the upper parts. A systematic injection of MK-801 suppressed changes in these exploration patterns. The results, along with those of previous studies, suggest that the changes in the exploration patterns reflect neophobia to a novel object and/or changes from spatial learning to object learning. These results demonstrate that the 3D tracking system is useful not only for detailed scoring of animal behaviors but also for investigation of characteristic spatiotemporal patterns of object exploration. The system has the potential to facilitate future investigation of neural mechanisms underlying object exploration that result from dynamic and complex brain activity. PMID:24991752
Coarse integral holography approach for real 3D color video displays.
Chen, J S; Smithwick, Q Y J; Chu, D P
2016-03-21
A colour holographic display is considered the ultimate apparatus for providing the most natural 3D viewing experience. It encodes a 3D scene as holographic patterns that are then used to reproduce the optical wavefront. The main challenge at present is for existing technologies to cope with the full information bandwidth required for the computation and display of holographic video. We have developed a dynamic coarse integral holography approach, using opto-mechanical scanning, coarse integral optics and a low space-bandwidth-product, high-bandwidth spatial light modulator, to display dynamic holograms with a large space-bandwidth product at video rates, combined with an efficient rendering algorithm to reduce the information content. This makes it possible to realise a full-parallax, colour holographic video display with a bandwidth of 10 billion pixels per second, an adequate image size and viewing angle, and all relevant 3D cues. Our approach is scalable, and the prototype can achieve even better performance with continuing advances in hardware components. PMID:27136858
Verification and Validation of the k-kL Turbulence Model in FUN3D and CFL3D Codes
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Carlson, Jan-Renee; Rumsey, Christopher L.
2015-01-01
The implementation of the k-kL turbulence model using multiple computational fluid dynamics (CFD) codes is reported herein. The k-kL model is a two-equation turbulence model based on Abdol-Hamid's closure and Menter's modification to Rotta's two-equation model. Rotta shows that a reliable transport equation can be formed from the turbulent length scale L and the turbulent kinetic energy k. Rotta's equation is well suited for term-by-term modeling and displays useful features compared to other two-equation models. An important difference is that this formulation leads to the inclusion of higher-order velocity derivatives in the source terms of the scale equations. This can enhance the ability of Reynolds-averaged Navier-Stokes (RANS) solvers to simulate unsteady flows. The present report documents the formulation of the model as implemented in the CFD codes FUN3D and CFL3D. Methodology, verification, and validation examples are shown. Attached and separated flow cases are documented and compared with experimental data. The results show generally very good agreement with canonical and experimental data, as well as matching results code-to-code. The results from this formulation are similar to or better than results using the SST turbulence model.
Video lensfree microscopy of 2D and 3D culture of cells
NASA Astrophysics Data System (ADS)
Allier, C. P.; Vinjimore Kesavan, S.; Coutard, J.-G.; Cioni, O.; Momey, F.; Navarro, F.; Menneteau, M.; Chalmond, B.; Obeid, P.; Haguet, V.; David-Watine, B.; Dubrulle, N.; Shorte, S.; van der Sanden, B.; Di Natale, C.; Hamard, L.; Wion, D.; Dolega, M. E.; Picollet-D'hahan, N.; Gidrol, X.; Dinten, J.-M.
2014-03-01
Innovative imaging methods are continuously developed to investigate the function of biological systems at the microscopic scale. As an alternative to advanced cell microscopy techniques, we are developing lensfree video microscopy, which opens new ranges of capabilities, in particular at the mesoscopic level. Lensfree video microscopy allows the observation of a cell culture in an incubator over a very large field of view (24 mm²) for extended periods of time. As a result, a large set of comprehensive data can be gathered with strong statistics, both in space and time. Lensfree video microscopy can capture images of cells cultured in various physical environments. We focus on two case studies: the quantitative analysis of the spontaneous network formation of HUVEC endothelial cells, and the study of epithelial tissue morphogenesis by coupling lensfree microscopy with 3D cell culture. In summary, we demonstrate that lensfree video microscopy is a powerful tool to conduct cell assays in 2D and 3D culture experiments. The applications are in the realms of fundamental biology, tissue regeneration, drug development and toxicology studies.
3D Structured Grid Generation Codes for Turbomachinery
NASA Technical Reports Server (NTRS)
Loellbach, James; Tsung, Fu-Lin
1999-01-01
This report describes the research tasks performed during the past year. The research was mainly in the area of computational grid generation in support of CFD analyses of turbomachinery components. In addition to the grid generation work, a numerical simulation was obtained for the flow through a centrifugal gas compressor using an unstructured Navier-Stokes solver. Other tasks involved many different turbomachinery component analyses, performed for NASA projects or for industrial applications. The work includes both centrifugal and axial machines, single and multiple blade rows, and steady and unsteady analyses. Over the past five years, a set of structured grid generation codes was developed that allows grids to be obtained fairly quickly for the large majority of configurations we encounter. These codes do not comprise a generalized grid generation package; they are noninteractive codes specifically designed for turbomachinery blade row geometries. But because of this limited scope, the codes are small, fast, and portable, and they can be run in batch mode on small workstations. During the past year, these programs were used to generate computational grids for a wide variety of configurations. In particular, the codes were modified, and supplementary codes were written, to improve our grid generation capabilities for multiple blade row configurations. This involves generating separate grids for each blade row and then making them match and overlap by a few grid points at their common interface so that fluid properties are communicated across the interface. Unsteady rotor/stator analyses were performed for an axial turbine, a centrifugal compressor, and a centrifugal pump. Steady-state single-blade-row analyses were made for a study of blade sweep in transonic compressors. There was also cooperation on the application of an unstructured Navier-Stokes solver for turbomachinery flow simulations. In particular, the unstructured solver was used to analyze the
MOM3D method of moments code theory manual
NASA Astrophysics Data System (ADS)
Shaeffer, John F.
1992-03-01
MOM3D is a FORTRAN algorithm that solves Maxwell's equations as expressed via the electric field integral equation for the electromagnetic response of open or closed three dimensional surfaces modeled with triangle patches. Two joined triangles (couples) form the vector current unknowns for the surface. Boundary conditions are for perfectly conducting or resistive surfaces. The impedance matrix represents the fundamental electromagnetic interaction of the body with itself. A variety of electromagnetic analysis options are possible once the impedance matrix is computed including backscatter radar cross section (RCS), bistatic RCS, antenna pattern prediction for user specified body voltage excitation ports, RCS image projection showing RCS scattering center locations, surface currents excited on the body as induced by specified plane wave excitation, and near field computation for the electric field on or near the body.
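Once the impedance matrix is filled, the analysis options listed above all reduce to dense linear algebra on the system Z I = V (currents I induced by an excitation V). The sketch below uses a random, diagonally weighted toy matrix as a stand-in for the real EFIE impedance matrix; the sizes and variable names are ours, not MOM3D's.

```python
import numpy as np

# Toy stand-in for the EFIE impedance matrix: in MOM3D, Z would be filled
# from the triangle-couple basis functions described above.
rng = np.random.default_rng(1)
n = 8                                   # number of edge-current unknowns (toy size)
Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) + 10 * np.eye(n)

# Excitation vector V: plane-wave or voltage-port excitation.
V = rng.normal(size=n) + 0j

# Induced surface currents follow from the linear system Z I = V;
# RCS, antenna patterns, and near fields are then post-processed from I.
I = np.linalg.solve(Z, V)
residual = np.linalg.norm(Z @ I - V)
```

The point of the sketch is that a single factorization of Z serves every analysis option (backscatter RCS, bistatic RCS, antenna patterns), since only the right-hand side V changes.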
3D unstructured-mesh radiation transport codes
Morel, J.
1997-12-31
Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations, and is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation, including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: Sn (discrete-ordinates), Pn (spherical harmonics), and SPn (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard Sn discretization in angle in conjunction with trilinear-discontinuous spatial differencing and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.
FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression
Jacob, J. Augustin; Kumar, N. Senthil
2015-01-01
A novel optimal structure for implementing the 3D-integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. Integer sets with reduced mean squared error (MSE) and high coding efficiency are considered for FPGA implementation. The proposed method shows that the fewest resources are utilized by the integer set with the shortest bit values. The optimal 3D-integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of different integer sets. The experimental results reveal that the direct method of computing the 3D-integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better than other integer sets in terms of resource utilization and power dissipation. PMID:26601120
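The separable structure being optimized can be sketched as a 1-D integer transform applied along each of the three axes of a data cube in turn. For illustration only, the sketch uses the well-known 4-point H.264-style integer matrix rather than the paper's 8-point integer sets (such as [10, 9, 6, 2, 3, 1, 1]); the function and variable names are ours.

```python
import numpy as np

# 4-point H.264-style integer DCT matrix (illustrative stand-in for the
# 8-point integer approximations evaluated in the paper).
C = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]], dtype=np.int64)

def dct3d_int(cube):
    """Separable 3-D integer DCT: apply the 1-D transform along each axis.

    All arithmetic stays in integers, which is what makes the structure
    cheap to implement in FPGA logic.
    """
    out = np.einsum('ia,abc->ibc', C, cube)   # transform along axis 0
    out = np.einsum('jb,ibc->ijc', C, out)    # transform along axis 1
    out = np.einsum('kc,ijc->ijk', C, out)    # transform along axis 2
    return out
```

A quick sanity check of the separable structure: a constant 4x4x4 cube should compact all of its energy into the single DC coefficient.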
Numerical modelling of gravel unconstrained flow experiments with the DAN3D and RASH3D codes
NASA Astrophysics Data System (ADS)
Sauthier, Claire; Pirulli, Marina; Pisani, Gabriele; Scavia, Claudio; Labiouse, Vincent
2015-12-01
Landslide continuum dynamic models have improved considerably in recent years, but a consensus on the best method of calibrating the input resistance parameter values for predictive analyses has not yet emerged. In the present paper, numerical simulations of a series of laboratory experiments performed at the Laboratory for Rock Mechanics of the EPF Lausanne were undertaken with the RASH3D and DAN3D numerical codes. They aimed at analysing the possibility of using calibrated ranges of parameters (1) in a code different from the one with which they were obtained and (2) to simulate potential events made of a material with the same characteristics as back-analysed past events but involving a different volume and propagation path. For this purpose, one of the four benchmark laboratory tests was used as the past event to calibrate the dynamic basal friction angle, assuming a Coulomb-type behaviour of the sliding mass, and this back-analysed value was then used to simulate the three other experiments, treated as potential events. The computational findings show good correspondence with experimental results in terms of the characteristics of the final deposits (i.e., runout, length, and width). Furthermore, the best-fit values of the dynamic basal friction angle obtained for the two codes turn out to be close to each other and within the range of values measured with pseudo-dynamic tilting tests.
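The Coulomb-type basal resistance assumed in the back-analysis can be illustrated with the classical lumped-mass (energy-line) relation, in which the ratio of fall height to horizontal runout equals the tangent of the dynamic basal friction angle. This is a gross simplification of what DAN3D and RASH3D actually integrate over the sliding mass; the function name and interface are ours.

```python
import math

def runout_coulomb(drop_height_m, friction_angle_deg):
    """Lumped-mass energy-line estimate under pure Coulomb friction.

    Energy balance for a sliding block gives H / L = tan(phi_dyn),
    so the horizontal runout is L = H / tan(phi_dyn).
    """
    return drop_height_m / math.tan(math.radians(friction_angle_deg))
```

The relation makes the calibration logic visible: a lower back-analysed friction angle predicts a longer runout, which is why a single well-constrained past event can anchor predictions for other volumes and paths.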
3D visualization for the MARS14 Code
Rzepecki, Jaroslaw P.; Kostin, Mikhail A; Mokhov, Nikolai V.
2003-01-23
A new three-dimensional visualization engine has been developed for the MARS14 code system. It is based on the Open Inventor graphics library and integrated with the MARS built-in two-dimensional graphical user interface, MARS-GUI-SLICE. The integrated package allows thorough checking of complex geometry systems and their fragments, materials, magnetic fields, and particle tracks, along with visualization of calculated 2-D histograms. The algorithms and their optimization are described for two geometry classes, along with examples in accelerator and detector applications.
ROI-preserving 3D video compression method utilizing depth information
NASA Astrophysics Data System (ADS)
Ti, Chunli; Xu, Guodong; Guan, Yudong; Teng, Yidan
2015-09-01
Efficiently transmitting the extra information of three-dimensional (3D) video is becoming a key issue in the development of 3DTV. The 2D-plus-depth format not only occupies less bandwidth and remains compatible with transmission over existing channels, but can also, to some extent, provide technical support for advanced 3D video compression. This paper proposes an ROI-preserving compression scheme to further improve visual quality at a limited bit rate. Exploiting the connection between the focus of the Human Visual System (HVS) and depth information, regions of interest (ROI) can be selected automatically via depth map processing. The main improvement over the common method is that a mean-shift based segmentation is applied to the depth map before foreground ROI selection, to keep the integrity of the scene. In addition, the sensitive areas along the edges are also protected. Spatio-temporal filtering adapted to H.264 is applied to the non-ROI regions of both the 2D video and the depth map before compression. Experiments indicate that the ROI extracted by this method is more intact and accords better with subjective perception, and that the proposed method preserves the key high-frequency information more effectively while the bit rate is reduced.
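The idea of selecting foreground ROI from the depth map can be illustrated with a much simpler stand-in than the paper's mean-shift segmentation: treat the nearest fraction of pixels as foreground. The percentile criterion, the function name, and the depth-ordering convention (smaller value = closer) are our assumptions for illustration only.

```python
import numpy as np

def foreground_roi_mask(depth, percentile=30.0):
    """Illustrative depth-based ROI selection: mark the nearest
    `percentile` percent of pixels as foreground.

    (The paper itself applies mean-shift segmentation to the depth map
    before ROI selection, which preserves whole objects rather than
    thresholding pixel by pixel.)
    """
    thresh = np.percentile(depth, percentile)
    return depth <= thresh   # assumption: smaller depth value = closer to camera
```

The mask would then steer the spatio-temporal filter: pixels outside the mask can be smoothed aggressively before H.264 encoding, while ROI pixels keep their high-frequency detail.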
Current loop coalescence studied by 3-D electromagnetic particle code
NASA Technical Reports Server (NTRS)
Nishikawa, Ken-Ichi; Sakai, Jun-Ichi; Koide, Shinji; Buneman, O.; Neubert, T.
1993-01-01
Solar flare plasma data from the Yohkoh satellite are analyzed. Interactions of current loops were observed in active regions on the Sun; this observation highlights the idea that solar flares are generated by the coalescence of current loops. Three-dimensional electromagnetic particle simulations are intended to help in understanding the global interaction between two current loops, including the evolution of loop twist due to instabilities. The rapid dynamics associated with current loop coalescence, such as reconnection and shock waves, and the associated kinetic processes, such as energy transfer, particle acceleration, and electromagnetic emissions, are studied with the code to complement analytical theories and magnetohydrodynamic simulations of current loop coalescence. The simulation results show strong interactions between the two current loops, beam and whistler instabilities, and associated parallel and perpendicular particle heating.
Extending ALE3D, an Arbitrarily Connected hexahedral 3D Code, to Very Large Problem Size (U)
Nichols, A L
2010-12-15
As the number of compute units increases on the ASC computers, the prospect of running previously unimaginably large problems is becoming a reality. In an arbitrarily connected 3D finite element code, like ALE3D, one must provide a unique identification number for every node, element, face, and edge. This is required for a number of reasons, including defining the global connectivity array required for domain decomposition, identifying appropriate communication patterns after domain decomposition, and determining the appropriate load locations for implicit solvers. In most codes, the unique identification number is defined as a 32-bit integer. Thus the maximum value available is 2^31, or roughly 2.1 billion. For a 3D geometry consisting of arbitrarily connected hexahedral elements, there are approximately 3 faces for every element and 3 edges for every node. Since the nodes and faces need id numbers, using 32-bit integers puts a hard limit on the number of elements in a problem at roughly 700 million. The first solution to this problem would be to replace 32-bit signed integers with 32-bit unsigned integers. This would increase the maximum size of a problem by a factor of 2. This provides some head room, but almost certainly not one that will last long. Another solution would be to replace all 32-bit int declarations with 64-bit long long declarations (long is either a 32-bit or a 64-bit integer, depending on the OS). The problem with this approach is that only a few arrays actually need the extended size, and thus it would increase the memory footprint of the problem unnecessarily. In a future computing environment where CPUs are abundant but memory relatively scarce, this is probably the wrong approach. Based on these considerations, we have chosen to replace only the global identifiers with the appropriate 64-bit integer. The problem with this approach is finding all the places where data that is specified as a 32-bit integer needs to be
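The head-room arithmetic above can be checked directly. The constants below follow the ratios quoted in the abstract (about 3 faces per element on a hex mesh); the variable names are ours.

```python
# Back-of-the-envelope ID budget for an arbitrarily connected hex mesh.
INT32_MAX = 2**31 - 1           # largest signed 32-bit id
UINT32_MAX = 2**32 - 1          # unsigned: one extra bit of head room
FACES_PER_ELEM = 3              # approximate faces-per-element ratio (from the abstract)

# Faces dominate the id budget, so the element count is capped by how many
# face ids fit in the integer type.
max_elems_signed = INT32_MAX // FACES_PER_ELEM      # ~715 million elements
max_elems_unsigned = UINT32_MAX // FACES_PER_ELEM   # ~1.43 billion elements
```

This reproduces the "roughly 700 million" hard limit for signed 32-bit ids, and the factor-of-2 (but not more) relief offered by switching to unsigned integers.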
NASA Astrophysics Data System (ADS)
Sargent, Dusty; Chen, Chao-I.; Wang, Yuan-Fang
2010-02-01
The paper reports a fully-automated, cross-modality sensor data registration scheme between video and magnetic tracker data. This registration scheme is intended for use in computerized imaging systems to model the appearance, structure, and dimension of human anatomy in three dimensions (3D) from endoscopic videos, particularly colonoscopic videos, for cancer research and clinical practice. The proposed cross-modality calibration procedure operates as follows: before a colonoscopic procedure, the surgeon inserts a magnetic tracker into the working channel of the endoscope or otherwise fixes the tracker's position on the scope. The surgeon then maneuvers the scope-tracker assembly to view a checkerboard calibration pattern from a few different viewpoints for a few seconds. The calibration procedure is then completed, and the relative pose (translation and rotation) between the reference frames of the magnetic tracker and the scope is determined. During the colonoscopic procedure, the readings from the magnetic tracker are used to automatically deduce the pose (both position and orientation) of the scope's reference frame over time, without complicated image analysis. Knowing the scope movement over time then allows us to infer the 3D appearance and structure of the organs and tissues in the scene. While there are other well-established mechanisms for inferring the movement of the camera (scope) from images, they are often sensitive to mistakes in image analysis, error accumulation, and structure deformation. The proposed method of using a magnetic tracker to establish the camera motion parameters thus provides a robust and efficient alternative for 3D model construction. Furthermore, the calibration procedure requires neither special training nor expensive calibration equipment (except for a camera calibration pattern, a checkerboard, which can be printed on any laser or inkjet printer).
PEGASUS. 3D Direct Simulation Monte Carlo Code Which Solves for Geometrics
Bartel, T.J.
1998-12-01
Pegasus is a 3D Direct Simulation Monte Carlo Code which solves for geometries which can be represented by bodies of revolution. Included are all the surface chemistry enhancements in the 2D code Icarus as well as a real vacuum pump model. The code includes multiple species transport.
Wall-touching kink mode calculations with the M3D code
Breslau, J. A.; Bhattacharjee, A.
2015-06-15
This paper seeks to address a controversy regarding the applicability of the 3D nonlinear extended MHD code M3D [W. Park et al., Phys. Plasmas 6, 1796 (1999)] and similar codes to calculations of the electromagnetic interaction of a disrupting tokamak plasma with the surrounding vessel structures. M3D is applied to a simple test problem involving an external kink mode in an ideal cylindrical plasma, used also by the Disruption Simulation Code (DSC) as a model case for illustrating the nature of transient vessel currents during a major disruption. While comparison of the results with those of the DSC is complicated by effects arising from the higher dimensionality and complexity of M3D, we verify that M3D is capable of reproducing both the correct saturation behavior of the free boundary kink and the “Hiro” currents arising when the kink interacts with a conducting tile surface interior to the ideal wall.
EM modeling for GPIR using 3D FDTD modeling codes
Nelson, S.D.
1994-10-01
An analysis of the one-, two-, and three-dimensional electrical characteristics of structural cement and concrete is presented. This work connects experimental efforts in characterizing cement and concrete in the frequency and time domains with Finite Difference Time Domain (FDTD) modeling efforts for these substances. These efforts include electromagnetic (EM) modeling of simple lossless homogeneous materials with aggregate and targets, and the modeling of dispersive and lossy materials with aggregate and complex target geometries for Ground Penetrating Imaging Radar (GPIR). Two- and three-dimensional FDTD codes (developed at LLNL) were used for the modeling efforts. The purpose of the experimental and modeling efforts is to gain knowledge about the electrical properties of concrete typically used in the construction industry for bridges and other load-bearing structures. The goal is to optimize the performance of a high-sample-rate impulse radar and data acquisition system and to design an antenna system matched to the characteristics of this material. Results show agreement to within 2 dB between the amplitudes of the experimental and modeled data, while the frequency peaks correlate to within 10%, the differences being due to the unknown exact nature of the aggregate placement.
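The FDTD scheme behind such codes can be illustrated in one dimension with the classic Yee leapfrog update, here for free space in normalized units with a Courant number of 0.5. This is a generic textbook sketch, not the LLNL codes themselves; the grid size, source position, and function name are ours.

```python
import numpy as np

def fdtd_1d(steps, n=200, src=50):
    """Minimal 1-D Yee-grid FDTD update (free space, normalized units).

    E and H live on staggered grids and are updated in leapfrog fashion
    from each other's spatial differences; a soft Gaussian source injects
    a pulse, as an impulse-radar excitation would.
    """
    ez = np.zeros(n)        # electric field at integer grid points
    hy = np.zeros(n - 1)    # magnetic field at half grid points
    for t in range(steps):
        hy += 0.5 * (ez[1:] - ez[:-1])              # update H from curl E
        ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])        # update E from curl H
        ez[src] += np.exp(-((t - 30) / 10) ** 2)    # soft Gaussian source
    return ez
```

Dispersive and lossy media, as needed for concrete, add material-dependent coefficients (and auxiliary polarization terms) to these two update lines, but the leapfrog structure is unchanged.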
Coding visual features extracted from video sequences.
Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2014-05-01
Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics. PMID:24818244
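The rate-distortion optimized mode decision mentioned above follows the standard Lagrangian criterion: choose the coding mode minimizing J = D + lambda * R. The sketch below is a generic illustration of that criterion; the candidate format and function name are ours, not the paper's.

```python
def choose_mode(candidates, lam):
    """Rate-distortion mode decision: pick the mode minimizing
    the Lagrangian cost J = D + lambda * R.

    `candidates` is a list of dicts with a distortion "D" and a rate "R"
    (illustrative format); `lam` trades rate against distortion.
    """
    return min(candidates, key=lambda m: m["D"] + lam * m["R"])
```

With a small lambda the decision favors the low-distortion (here intra) mode; as lambda grows, rate dominates and the cheaper inter mode wins, which is exactly how the encoder adapts the intra/inter split to the bit budget.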
Tactical 3D model generation using structure-from-motion on video from unmanned systems
NASA Astrophysics Data System (ADS)
Harguess, Josh; Bilinski, Mark; Nguyen, Kim B.; Powell, Darren
2015-05-01
Unmanned systems have been cited as one of the future enablers for all the services to assist the warfighter in dominating the battlespace. The potential benefits of unmanned systems are being closely investigated, from providing increased and potentially stealthy surveillance and removing the warfighter from harm's way, to reducing the manpower required to complete a specific job. In many instances, data obtained from an unmanned system is used sparingly, being applied only to the mission at hand. Other potential benefits to be gained from the data are overlooked and, after completion of the mission, the data is often discarded or lost. However, this data can be further exploited to offer tremendous tactical, operational, and strategic value. To show the potential value of this otherwise lost data, we designed a system that persistently stores the data in its original format from the unmanned vehicle and then generates a new, innovative data medium for further analysis. The system streams imagery and video from an unmanned system (original data format) and then constructs a 3D model (new data medium) using structure-from-motion. The generated 3D model provides warfighters additional situational awareness and tactical and strategic advantages that the original video stream lacks. We present our results using simulated unmanned vehicle data, with Google Earth™ providing the imagery, as well as real-world data, including data captured from an unmanned aerial vehicle flight.
Fast prediction algorithm for multiview video coding
NASA Astrophysics Data System (ADS)
Abdelazim, Abdelrahman; Mein, Stephen James; Varley, Martin Roy; Ait-Boudaoud, Djamel
2013-03-01
The H.264/multiview video coding (MVC) standard has been developed to enable efficient coding for three-dimensional and multiple viewpoint video sequences. The inter-view statistical dependencies are utilized and an inter-view prediction is employed to provide more efficient coding; however, this increases the overall encoding complexity. Motion homogeneity is exploited here to selectively enable inter-view prediction, and to reduce complexity in the motion estimation (ME) and the mode selection processes. This has been accomplished by defining situations that relate macro-blocks' motion characteristics to the mode selection and the inter-view prediction processes. When comparing the proposed algorithm to the H.264/MVC reference software and other recent work, the experimental results demonstrate a significant reduction in ME time while maintaining similar rate-distortion performance.
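The motion-homogeneity idea can be sketched as a simple spread test on neighbouring motion vectors: if they all point roughly the same way, the costly inter-view prediction and exhaustive mode search can be skipped. The specific criterion, threshold, and names below are ours; the paper defines its own set of situations relating motion characteristics to the mode decision.

```python
import numpy as np

def is_motion_homogeneous(mvs, thresh=1.0):
    """Illustrative motion-homogeneity test for a macroblock neighbourhood.

    `mvs` is an (n, 2) array of (dx, dy) motion vectors from neighbouring
    blocks; motion is declared homogeneous when the per-component spread
    (standard deviation) falls below `thresh` pixels.
    """
    mvs = np.asarray(mvs, dtype=float)
    return bool(np.all(mvs.std(axis=0) < thresh))
```

An encoder using such a test would run the full inter-view search only for blocks where the test fails, which is how the complexity reduction is obtained without changing the rate-distortion behaviour much.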
MOEMS-based time-of-flight camera for 3D video capturing
NASA Astrophysics Data System (ADS)
You, Jang-Woo; Park, Yong-Hwa; Cho, Yong-Chul; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Lee, Seung-Wan
2013-03-01
We suggest a Time-of-Flight (TOF) video camera capturing real-time depth images (a.k.a. depth maps), which are generated from fast-modulated IR images using a novel MOEMS modulator with a switching speed of 20 MHz. In general, 3 or 4 independent IR (e.g. 850 nm) images are required to generate a single frame of the depth image. Captured video of a moving object frequently shows motion drag between sequentially captured IR images, which results in the so-called 'motion blur' problem even when the frame rate of the depth image is fast (e.g. 30 to 60 Hz). We propose a novel 'single shot' TOF 3D camera architecture generating a single depth image from synchronously captured IR images. The imaging system consists of a 2x2 imaging lens array, MOEMS optical shutters (modulators) placed on each lens aperture, and a standard CMOS image sensor. The IR light reflected from the object is modulated by the optical shutters on the apertures of the 2x2 lens array, and the transmitted images are captured on the image sensor, resulting in 2x2 sub-IR images. As a result, the depth image is generated from these four simultaneously captured independent sub-IR images, and hence the motion blur problem is canceled. The resulting performance is very useful in applications of the 3D camera to human-machine interaction devices, such as user interfaces for TVs, monitors, or handheld devices, and to motion capture of the human body. In addition, we show that the presented 3D camera can be modified to capture color together with depth simultaneously at the 'single shot' frame rate.
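The way four phase-shifted IR images combine into one depth value can be illustrated with the standard 4-phase continuous-wave TOF formula: the phase delay is recovered from the four samples and scaled by the modulation wavelength. This is the generic textbook reconstruction, not necessarily the camera's exact pipeline; the 20 MHz figure is taken from the abstract, and the function name is ours.

```python
import math

C_LIGHT = 299_792_458.0   # speed of light, m/s
F_MOD = 20e6              # modulation frequency (Hz), matching the 20 MHz shutter

def tof_depth(a0, a1, a2, a3):
    """Standard 4-phase (0/90/180/270 degree) CW time-of-flight depth.

    a0..a3 are the four phase-shifted intensity samples of one pixel;
    phase delay maps to distance via d = c * phi / (4 * pi * f_mod).
    """
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    return C_LIGHT * phase / (4 * math.pi * F_MOD)
```

At 20 MHz the unambiguous range is c / (2 f_mod), about 7.5 m, which is why such modulation frequencies suit room-scale human-machine interaction.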
Toward a 3D video format for auto-stereoscopic displays
NASA Astrophysics Data System (ADS)
Vetro, Anthony; Yea, Sehoon; Smolic, Aljoscha
2008-08-01
There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.
Monitoring an eruption fissure in 3D: video recording, particle image velocimetry and dynamics
NASA Astrophysics Data System (ADS)
Witt, Tanja; Walter, Thomas R.
2015-04-01
The processes during an eruption are complex, and several parameters are measured to understand them better. One of these is the velocity of particles and patterns, such as ash and emitted magma, and of the volcano itself. The resulting velocity field provides insight into the dynamics of a vent. Here we test our algorithm for 3-dimensional velocity fields on videos of the second fissure eruption of Bárdarbunga in 2014, where we recorded lava fountains of the main fissure with two high-speed cameras at small angles between them. Additionally, we test the algorithm on videos of the geyser Strokkur, where we had three cameras and larger angles between them. The velocity is calculated by a correlation, in Fourier space, of consecutive images. Since only the velocity field of the surface is obtained, smaller angles between cameras yield a better resolution of the velocity field in the near field. For general movements, larger angles can also be useful, e.g., to obtain the direction, height, and velocity of eruption clouds. In summary, 3D velocimetry can be used for several applications, with the setup adapted to the application.
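The "correlation in Fourier space of consecutive images" at the heart of this velocimetry can be sketched as follows; the window size and synthetic test pattern are illustrative assumptions. Dividing the recovered pixel shift by the frame interval and applying a pixel-to-metre scale would turn it into a velocity.

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Integer-pixel displacement between two interrogation windows,
    found as the peak of their cross-correlation computed via 2-D FFTs."""
    fa = np.fft.fft2(win_a - win_a.mean())
    fb = np.fft.fft2(win_b - win_b.mean())
    corr = np.fft.ifft2(np.conj(fa) * fb).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around peak indices to signed shifts
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape))

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))
b = np.roll(a, (3, -5), axis=(0, 1))  # pattern shifted by (+3, -5) pixels
```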
A new coding technique of digital hologram video based on view-point MCTF
NASA Astrophysics Data System (ADS)
Seo, Young-Ho; Choi, Hyun-Jun; Yoo, Ji-Sang; Kim, Dong-Wook
2006-10-01
In this paper, we propose a new coding technique for digital hologram video using a 3D scanning method and video compression. The proposed coder consists of: capturing a digital hologram and separating it into RGB color-space components; localization by segmenting the fringe pattern; frequency transformation using an M×N (segment-size) 2-D discrete cosine transform (DCT) to extract redundancy; 3D scanning of the segments to form a video sequence; motion-compensated temporal filtering (MCTF); and a modified video coder based on H.264/AVC. The compressed digital hologram was reconstructed both in software and with an optical system. The proposed algorithm achieved better reconstruction quality at higher compression ratios than previous work.
3-D field computation: The near-triumph of commercial codes
Turner, L.R.
1995-07-01
In recent years, more and more of those who design and analyze magnets and other devices are using commercial codes rather than developing their own. This paper considers the commercial codes and the features available with them. Other recent trends with 3-D field computation include parallel computation and visualization methods such as virtual reality systems.
Analysis of EEG signals regularity in adults during video game play in 2D and 3D.
Khairuddin, Hamizah R; Malik, Aamir S; Mumtaz, Wajid; Kamel, Nidal; Xia, Likun
2013-01-01
Video games have long been part of the entertainment industry; nonetheless, it is not well known how video games affect us as 3D technology advances. The purpose of this study is to investigate EEG signal regularity when playing video games in 2D and 3D modes. A total of 29 healthy subjects (24 male, 5 female) with a mean age of 21.79 (1.63) years participated. Subjects were asked to play a car racing video game in three different modes (2D, 3D passive, and 3D active). In the 3D passive mode, subjects wore passive polarized (cinema-type) glasses, while for 3D active, active shutter glasses were used. Scalp EEG data were recorded during game play using a 19-channel EEG machine with linked ears as the reference. After the data were pre-processed, the signal irregularity for all conditions was computed. Two parameters were used to measure signal complexity for time-series data: i) Hjorth complexity and ii) the Composite Permutation Entropy Index (CPEI). Based on these two parameters, our results showed that the complexity level increased from the eyes-closed to the eyes-open condition, and increased further for 3D compared to 2D game play. PMID:24110125
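The first of the two irregularity measures, Hjorth complexity, has a compact definition that can be sketched as follows; the test signals are illustrative, not the study's EEG recordings.

```python
import numpy as np

def hjorth_complexity(x):
    """Hjorth complexity of a 1-D signal: the mobility of the first
    derivative divided by the mobility of the signal itself.  A pure
    sinusoid scores ~1; more irregular signals score higher."""
    x = np.asarray(x, dtype=float)
    dx, ddx = np.diff(x), np.diff(x, n=2)
    mobility = lambda sig, dsig: np.sqrt(np.var(dsig) / np.var(sig))
    return mobility(dx, ddx) / mobility(x, dx)

t = np.linspace(0.0, 1.0, 1000)
sine_c = hjorth_complexity(np.sin(2 * np.pi * 5 * t))                 # ~1.0
noise_c = hjorth_complexity(np.random.default_rng(1).standard_normal(1000))
```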
Development of the PARVMEC Code for Rapid Analysis of 3D MHD Equilibrium
NASA Astrophysics Data System (ADS)
Seal, Sudip; Hirshman, Steven; Cianciosa, Mark; Wingen, Andreas; Unterberg, Ezekiel; Wilcox, Robert; ORNL Collaboration
2015-11-01
The VMEC three-dimensional (3D) MHD equilibrium code has been used extensively for designing stellarator experiments and analyzing experimental data in such strongly 3D systems. Recent applications of VMEC include 2D systems such as tokamaks (in particular, the D3D experiment), where the application of very small (delB/B ~ 10^-3) 3D resonant magnetic field perturbations renders the underlying assumption of axisymmetry invalid. In order to facilitate the rapid analysis of such equilibria (for example, for reconstruction purposes), we have undertaken the task of parallelizing the VMEC code (PARVMEC) to produce a scalable, rapidly convergent equilibrium code for parallel distributed-memory platforms. The parallelization task naturally splits into three distinct parts: 1) the radial surfaces in the fixed-boundary part of the calculation; 2) the two 2D angular meshes needed to compute the Green's function integrals over the plasma boundary for the free-boundary part of the code; and 3) the block-tridiagonal matrix needed to compute the full (3D) preconditioner near the final equilibrium state. Preliminary results show that scalability is achieved for tasks 1 and 3, with task 2 nearing completion. The impact of this work on the rapid reconstruction of D3D plasmas using PARVMEC in the V3FIT code will be discussed. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.
Generalized parallelization methodology for video coding
NASA Astrophysics Data System (ADS)
Leung, Kwong-Keung; Yung, Nelson H. C.
1998-12-01
This paper describes a generalized parallelization methodology for mapping video coding algorithms onto a multiprocessing architecture through systematic task decomposition, scheduling, and performance analysis. It exploits the data parallelism inherent in the coding process and performs task scheduling based on task data size and access locality, with the aim of hiding as much communication overhead as possible. Utilizing Petri nets and task graphs for representation and analysis, the method enables parallel video frame capturing, buffering, and encoding without extra communication overhead. Theoretical speedup analysis indicates that this method offers excellent communication hiding, resulting in system efficiency well above 90%. An H.261 video encoder has been implemented on a TMS320C80 system using this method, and its performance was measured. The theoretical and measured performances are similar: the measured speedup of the H.261 encoder is 3.67 and 3.76 on four parallel processors for QCIF and 352 X 240 video, respectively. These correspond to frame rates of 30.7 frames per second (fps) and 9.25 fps, and system efficiencies of 91.8% and 94%, respectively. As it stands, this method is particularly efficient for platforms with a small number of parallel processors.
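The reported efficiencies follow directly from the measured speedups, as a quick check of the abstract's own arithmetic shows:

```python
def parallel_efficiency(speedup, nprocs):
    """System efficiency as used in the paper: measured speedup divided
    by the number of processors."""
    return speedup / nprocs

# The two measured H.261 cases on four parallel processors:
qcif_eff = parallel_efficiency(3.67, 4)  # QCIF: 0.9175, i.e. ~91.8%
sif_eff = parallel_efficiency(3.76, 4)   # 352x240: 0.94, i.e. 94%
```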
Scalable video coding in frequency domain
NASA Astrophysics Data System (ADS)
Civanlar, Mehmet R.; Puri, Atul
1992-11-01
Scalable video coding is important in applications where video needs to be decoded and displayed at a variety of resolution scales. It is more efficient than simulcasting, in which all desired resolution scales are coded independently of one another within the constraint of a fixed available bandwidth. In this paper, we focus on scalability using the frequency-domain approach. We employ the framework proposed for the ongoing second phase of the Moving Picture Experts Group (MPEG-2) standard to study the performance of one such scheme and investigate improvements aimed at increasing its efficiency. Practical issues related to multiplexing the encoded data of the various resolution scales to facilitate decoding are considered. Simulations are performed to investigate the potential of a chosen frequency-domain scheme, and various prospects and limitations are discussed.
INS3D: An incompressible Navier-Stokes code in generalized three-dimensional coordinates
NASA Technical Reports Server (NTRS)
Rogers, S. E.; Kwak, D.; Chang, J. L. C.
1987-01-01
The operation of the INS3D code, which computes steady-state solutions to the incompressible Navier-Stokes equations, is described. The flow solver utilizes a pseudocompressibility approach combined with an approximate factorization scheme. This manual describes key operating features to orient new users. This includes the organization of the code, description of the input parameters, description of each subroutine, and sample problems. Details for more extended operations, including possible code modifications, are given in the appendix.
Dettmer, Simon L; Keyser, Ulrich F; Pagliara, Stefano
2014-02-01
In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces. PMID:24593372
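The (global) mean squared displacement analysis underlying this work can be sketched as follows, using a simulated free Brownian track rather than the paper's measured particle data; the authors' local variant evaluates the same quantity within small spatial bins.

```python
import numpy as np

def msd_curve(track, dt, max_lag=10):
    """Mean squared displacement vs lag time for a 1-D trajectory."""
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean((track[l:] - track[:-l]) ** 2) for l in lags])
    return lags * dt, msd

def fit_diffusivity(tau, msd):
    """Least-squares fit of MSD = 2*D*tau (free 1-D Brownian motion)."""
    return float(np.sum(tau * msd) / (2.0 * np.sum(tau ** 2)))

# Simulated free Brownian track with D = 0.5 (arbitrary units), dt = 10 ms:
rng = np.random.default_rng(2)
D_true, dt = 0.5, 0.01
track = np.cumsum(rng.normal(0.0, np.sqrt(2 * D_true * dt), 100_000))
tau, msd = msd_curve(track, dt)
D_est = fit_diffusivity(tau, msd)  # close to 0.5
```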
Variable disparity-motion estimation based fast three-view video coding
NASA Astrophysics Data System (ADS)
Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo
2009-02-01
In this paper, variable disparity-motion estimation (VDME) based three-view video coding is proposed. In the encoder, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are performed for effective, fast three-view video encoding. The proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of disparity-estimation accuracy and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm achieves PSNRs of 37.66 and 40.55 dB, with processing times of 0.139 and 0.124 s/frame, respectively.
A unified and efficient framework for court-net sports video analysis using 3D camera modeling
NASA Astrophysics Data System (ADS)
Han, Jungong; de With, Peter H. N.
2007-01-01
The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for user-friendly applications such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable to more than one sports type in order to arrive at a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between object-level and scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes that are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e., players), which can track up to four players simultaneously. The complete system contributes to summarization through various forms of information, of which the most important are the moving trajectory and real speed of each player, as well as the 3-D height of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it on a variety of court-net sports videos containing badminton, tennis, and volleyball, and we show that feature detection performance is above 92% and event detection about 90%.
User's manual for PELE3D: a computer code for three-dimensional incompressible fluid dynamics
McMaster, W H
1982-05-07
The PELE3D code is a three-dimensional, semi-implicit, Eulerian hydrodynamics computer program for the solution of incompressible fluid flow coupled to a structure. The fluid and coupling algorithms have been adapted from the previously developed two-dimensional code PELE-IC. The PELE3D code is written in both plane and cylindrical coordinates. The coupling algorithm is general enough to handle a variety of structural shapes, and the free-surface algorithm is able to accommodate a top surface and several independent bubbles. The code is in a developmental status, since not all the intended options have been fully implemented and tested. Development of this code ended in 1980 upon termination of the contract with the Nuclear Regulatory Commission.
Three-dimensional parallel UNIPIC-3D code for simulations of high-power microwave devices
NASA Astrophysics Data System (ADS)
Wang, Jianguo; Chen, Zaigao; Wang, Yue; Zhang, Dianhui; Liu, Chunliang; Li, Yongdong; Wang, Hongguang; Qiao, Hailiang; Fu, Meiyan; Yuan, Yuan
2010-07-01
This paper introduces UNIPIC-3D, a self-developed, three-dimensional, parallel, fully electromagnetic particle simulation code. In this code, the electromagnetic fields are updated using the second-order finite-difference time-domain method, and the particles are moved using the relativistic Newton-Lorentz force equation; fields and particles are coupled through the current term in Maxwell's equations. Two numerical examples are used to verify the algorithms adopted in the code, and the numerical results agree well with theoretical ones. The code can be used to simulate high-power microwave (HPM) devices such as the relativistic backward wave oscillator, the coaxial vircator, and the magnetically insulated line oscillator. UNIPIC-3D is written in object-oriented C++ and runs on a variety of platforms including Windows, Linux, and UNIX. Users can employ the graphical user interface to create the complex geometric structures of the simulated HPM devices, which are then meshed automatically by the code. A powerful postprocessor displays the electric field, magnetic field, current, voltage, power, spectrum, particle momenta, etc. For comparison, results computed with the two-and-a-half-dimensional UNIPIC code are also provided for the same HPM device parameters; the results from the two codes agree well with each other.
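The relativistic Newton-Lorentz particle update can be illustrated with a Boris-type integrator, a standard textbook scheme for electromagnetic PIC codes; the abstract does not specify UNIPIC-3D's exact update, so this is an assumption.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def boris_push(p, E, B, q, m, dt):
    """One step of a relativistic Boris-type integrator for the
    Newton-Lorentz equation dp/dt = q*(E + v x B): half electric kick,
    norm-preserving magnetic rotation, second half kick."""
    p_minus = p + 0.5 * q * dt * E
    gamma = np.sqrt(1.0 + np.dot(p_minus, p_minus) / (m * C) ** 2)
    t = q * dt * B / (2.0 * gamma * m)
    s = 2.0 * t / (1.0 + np.dot(t, t))
    p_plus = p_minus + np.cross(p_minus + np.cross(p_minus, t), s)
    return p_plus + 0.5 * q * dt * E

# Electron gyrating in a uniform 0.1 T field: with E = 0 the update is
# a pure rotation, so |p| should not drift over many steps.
q_e, m_e = -1.602176634e-19, 9.1093837015e-31
p = np.array([1.0e-22, 0.0, 0.0])
B = np.array([0.0, 0.0, 0.1])
E = np.zeros(3)
p0 = np.linalg.norm(p)
for _ in range(1000):
    p = boris_push(p, E, B, q_e, m_e, 1.0e-12)
drift = abs(np.linalg.norm(p) - p0) / p0
```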
Rendering-oriented multiview video coding based on chrominance information reconstruction
NASA Astrophysics Data System (ADS)
Shao, Feng; Yu, Mei; Jiang, Gangyi; Zhang, Zhaoyang
2010-05-01
Three-dimensional (3-D) video systems are expected to be a next-generation visual application. Since multiview video for 3-D video systems is composed of color and associated depth information, its huge requirement for data storage and transmission is an important problem. We propose a rendering-oriented multiview video coding (MVC) method based on chrominance information reconstruction that incorporates the rendering technique into the MVC process. The proposed method discards certain chrominance information to reduce bitrates, and performs reasonable bitrate allocation between color and depth videos. At the decoder, a chrominance reconstruction algorithm is presented to achieve accurate reconstruction by warping the neighboring views and colorizing the luminance-only pixels. Experimental results show that the proposed method can save nearly 20% on bitrates against the results without discarding the chrominance information. Moreover, under a fixed bitrate budget, the proposed method can greatly improve the rendering quality.
A quality assessment of 3D video analysis for full scale rockfall experiments
NASA Astrophysics Data System (ADS)
Volkwein, A.; Glover, J.; Bourrier, F.; Gerber, W.
2012-04-01
The main goal of full-scale rockfall experiments is to retrieve the 3D trajectory of a boulder along the slope; such trajectories can then be used to calibrate rockfall simulation models. This contribution presents the application of video-analysis techniques to capture rock-fall velocities in free-fall, full-scale rockfall experiments on a rock face inclined at about 50 degrees. Different scaling methodologies have been evaluated; they differ mainly in how the scaling factors between the movie frames and reality are determined. For this purpose, scale bars and targets of known dimensions were distributed along the slope in advance. The scaling approaches are briefly described as follows: (i) the image raster is scaled to the distant fixed scale bar and then recalibrated to the plane of the passing boulder, taking the measured position of the nearest impact as the distance to the camera; the distances between the camera, scale bar, and passing boulder are surveyed. (ii) The image raster is scaled using the four targets (identified in the frontal video) nearest to the trajectory to be analyzed, with the average of their scaling factors taken as the scaling factor. (iii) As (ii), but the scaling factor for a trajectory is calculated by balancing the mean scaling factors of the two nearest and the two farthest targets according to their mean distance from the analyzed trajectory. (iv) As (iii), but with scaling factors varying along the trajectory. It was shown that a direct measure of the scaling target and the nearest impact zone is the most accurate. Assuming a constant plane does not account for lateral deviations of the boulder from the fall line, adding error to the analysis. A combination of scaling methods (i) and (iv) is therefore considered to give the best results.
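The basic scaling arithmetic shared by approaches (i) to (iv) can be sketched as follows; the numbers are hypothetical, not the experiment's measurements.

```python
def scale_factor(known_length_m, measured_length_px):
    """Metres-per-pixel scaling derived from a scale bar of known size
    visible in the frame; the four approaches above differ in which
    bars/targets supply this factor and how it varies along the path."""
    return known_length_m / measured_length_px

def boulder_speed(displacement_px, metres_per_px, fps):
    """Convert a per-frame pixel displacement into a speed in m/s."""
    return displacement_px * metres_per_px * fps

# Hypothetical numbers: a 2 m scale bar spanning 400 px, and a boulder
# moving 10 px between frames at 30 fps.
s = scale_factor(2.0, 400)     # 0.005 m/px
v = boulder_speed(10, s, 30)   # 1.5 m/s
```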
Multitasking the INS3D-LU code on the Cray Y-MP
NASA Technical Reports Server (NTRS)
Fatoohi, Rod; Yoon, Seokkwan
1991-01-01
This paper presents the results of multitasking the INS3D-LU code on eight processors. The code is a full Navier-Stokes solver for incompressible fluid in three dimensional generalized coordinates using a lower-upper symmetric-Gauss-Seidel implicit scheme. This code has been fully vectorized on oblique planes of sweep and parallelized using autotasking with some directives and minor modifications. The timing results for five grid sizes are presented and analyzed. The code has achieved a processing rate of over one Gflops.
RELAP5-3D Code for Supercritical-Pressure Light-Water-Cooled Reactors
Riemke, Richard Allan; Davis, Cliff Bybee; Schultz, Richard Raphael
2003-04-01
The RELAP5-3D computer program has been improved for analysis of supercritical-pressure, light-water-cooled reactors. Several code modifications were implemented to correct code execution failures. Changes were made to the steam table generation, steam table interpolation, metastable states, interfacial heat transfer coefficients, and transport properties (viscosity and thermal conductivity). The code modifications now allow the code to run slow transients above the critical pressure as well as blowdown transients (modified Edwards pipe and modified existing pressurized water reactor model) that pass near the critical point.
Overview of MPEG internet video coding
NASA Astrophysics Data System (ADS)
Wang, R. G.; Li, G.; Park, S.; Kim, J.; Huang, T.; Jang, E. S.; Gao, W.
2015-09-01
MPEG has produced standards that have provided industry with the best video compression technologies. To address the diversified needs of the Internet, MPEG issued a Call for Proposals (CfP) for Internet video coding in July 2011. It is anticipated that any patent declaration associated with the Baseline Profile of this standard will indicate that the patent owner is prepared to grant a free-of-charge license to an unrestricted number of applicants on a worldwide, non-discriminatory basis and under other reasonable terms and conditions to make, use, and sell implementations of the Baseline Profile of this standard, in accordance with the ITU-T/ITU-R/ISO/IEC Common Patent Policy. Three codecs responded to the CfP: WVC, VCB, and IVC. WVC was proposed jointly by Apple, Cisco, Fraunhofer HHI, Magnum Semiconductor, Polycom, RIM, and others, and is essentially AVC Baseline. VCB was proposed by Google and is essentially VP8. IVC was proposed by several universities (Peking University, Tsinghua University, Zhejiang University, Hanyang University, Korea Aerospace University, and others), and its coding tools were developed from scratch. In this paper, we give an overview of the coding tools in IVC and evaluate its performance by comparing it with WVC, VCB, and the AVC High profile.
Description of a parallel, 3D, finite element, hydrodynamics-diffusion code
Milovich, J L; Prasad, M K; Shestakov, A I
1999-04-11
We describe a parallel, 3D, unstructured grid finite element, hydrodynamic diffusion code for inertial confinement fusion (ICF) applications and the ancillary software used to run it. The code system is divided into two entities, a controller and a stand-alone physics code. The code system may reside on different computers; the controller on the user's workstation and the physics code on a supercomputer. The physics code is composed of separate hydrodynamic, equation-of-state, laser energy deposition, heat conduction, and radiation transport packages and is parallelized for distributed memory architectures. For parallelization, a SPMD model is adopted; the domain is decomposed into a disjoint collection of subdomains, one per processing element (PE). The PEs communicate using MPI. The code is used to simulate the hydrodynamic implosion of a spherical bubble.
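The SPMD domain decomposition (one disjoint subdomain per processing element) can be sketched in one dimension as follows; the physics code itself is of course far more involved, and this index arithmetic is only an illustration.

```python
def decompose_1d(n_cells, n_pe):
    """Split n_cells into n_pe contiguous, disjoint index ranges, one
    per processing element (PE).  Leftover cells go to the lowest-rank
    PEs so subdomain sizes differ by at most one."""
    base, extra = divmod(n_cells, n_pe)
    ranges, start = [], 0
    for pe in range(n_pe):
        stop = start + base + (1 if pe < extra else 0)
        ranges.append((start, stop))
        start = stop
    return ranges

parts = decompose_1d(10, 3)  # [(0, 4), (4, 7), (7, 10)]
```

In the code system described above, each PE would then own the cells of its range and exchange boundary data with its neighbors via MPI.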
Planet-Disk Interaction on the GPU: The FARGO3D code
NASA Astrophysics Data System (ADS)
Masset, F. S.; Benítez-Llambay, P.
2015-10-01
We present the new code FARGO3D, a finite-difference code that solves the equations of hydrodynamics or magnetohydrodynamics on a Cartesian, cylindrical, or spherical mesh. It features orbital advection and conserves mass and (angular) momentum to machine accuracy. Special emphasis is placed on the description of planet-disk tidal interactions. The code is parallelized with MPI and can run on either CPUs or GPUs, without the need to program in a GPU-oriented language.
Turbomachinery Heat Transfer and Loss Modeling for 3D Navier-Stokes Codes
NASA Technical Reports Server (NTRS)
DeWitt, Kenneth; Ameri, Ali
2005-01-01
This report focuses on making use of NASA Glenn's on-site computational facilities to develop, validate, and apply models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes, enhancing the capability to compute heat transfer and losses in turbomachinery.
3D Neutron Transport PWR Full-core Calculation with RMC code
NASA Astrophysics Data System (ADS)
Qiu, Yishu; She, Ding; Fan, Xiao; Wang, Kan; Li, Zeguang; Liang, Jingang; Leroyer, Hadrien
2014-06-01
Nowadays there is growing interest in using Monte Carlo codes to calculate detailed power density distributions in full-core reactors. With the Inspur TS1000 HPC Server of Tsinghua University, several calculations have been performed on the EDF 3D Neutron Transport PWR Full-core benchmark through large-scale parallelism. To investigate and compare the results of the deterministic and Monte Carlo methods, EDF R&D and the Department of Engineering Physics of Tsinghua University are collaborating on code-to-code verification. Two codes are therefore used in this paper: COCAGNE, a deterministic core code developed by EDF R&D, and RMC, a Monte Carlo code developed by the Department of Engineering Physics at Tsinghua University. First, the full-core model is described, and a 26-group calculation is performed by the two codes using the same 26-group cross-section library provided by EDF R&D. The parallel and tally performance of RMC is then discussed. RMC employs a novel algorithm that eliminates most of the communication, and the speedup ratio increases almost linearly with the number of nodes. Furthermore, the cell-mapping method applied by RMC takes little time to tally even millions of cells. The results of COCAGNE and RMC are compared in three ways and agree well with each other. It can be concluded that both COCAGNE and RMC are able to provide 3D transport solutions with detailed power density distribution calculations for PWR full-core reactors. Finally, to investigate how many histories are needed to obtain a given standard deviation for a full 3D solution, the non-symmetrized condensed 2-group fluxes of RMC are discussed.
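The final question, how many histories are needed for a given standard deviation, rests on the 1/sqrt(N) scaling of Monte Carlo statistical uncertainty, which can be sketched as follows; the pilot-run numbers are hypothetical, not the paper's results.

```python
def histories_needed(n_pilot, sigma_pilot, sigma_target):
    """Extrapolate the history count needed to reach a target standard
    deviation from a pilot run, using sigma proportional to 1/sqrt(N)."""
    return n_pilot * (sigma_pilot / sigma_target) ** 2

# Halving the standard deviation costs four times the histories:
n = histories_needed(1_000_000, 0.02, 0.01)  # 4,000,000
```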
Users manual for the NASA Lewis three-dimensional ice accretion code (LEWICE 3D)
NASA Technical Reports Server (NTRS)
Bidwell, Colin S.; Potapczuk, Mark G.
1993-01-01
A description of the methodology, the algorithms, and the input and output data along with an example case for the NASA Lewis 3D ice accretion code (LEWICE3D) has been produced. The manual has been designed to help the user understand the capabilities, the methodologies, and the use of the code. The LEWICE3D code is a conglomeration of several codes for the purpose of calculating ice shapes on three-dimensional external surfaces. A three-dimensional external flow panel code is incorporated which has the capability of calculating flow about arbitrary 3D lifting and nonlifting bodies with external flow. A fourth order Runge-Kutta integration scheme is used to calculate arbitrary streamlines. An Adams type predictor-corrector trajectory integration scheme has been included to calculate arbitrary trajectories. Schemes for calculating tangent trajectories, collection efficiencies, and concentration factors for arbitrary regions of interest for single droplets or droplet distributions have been incorporated. A LEWICE 2D based heat transfer algorithm can be used to calculate ice accretions along surface streamlines. A geometry modification scheme is incorporated which calculates the new geometry based on the ice accretions generated at each section of interest. The three-dimensional ice accretion calculation is based on the LEWICE 2D calculation. Both codes calculate the flow, pressure distribution, and collection efficiency distribution along surface streamlines. For both codes the heat transfer calculation is divided into two regions, one above the stagnation point and one below the stagnation point, and solved for each region assuming a flat plate with pressure distribution. Water is assumed to follow the surface streamlines, hence starting at the stagnation zone any water that is not frozen out at a control volume is assumed to run back into the next control volume. After the amount of frozen water at each control volume has been calculated the geometry is modified by
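The fourth-order Runge-Kutta streamline integration cited in the manual can be sketched as follows; the velocity field used for the check is a simple solid-body rotation, purely illustrative rather than a panel-code flow solution.

```python
import numpy as np

def rk4_streamline(velocity, x0, ds, n_steps):
    """Trace a streamline through a steady velocity field with the
    classical fourth-order Runge-Kutta scheme.  `velocity` maps a
    position to the local flow vector."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        k1 = velocity(x)
        k2 = velocity(x + 0.5 * ds * k1)
        k3 = velocity(x + 0.5 * ds * k2)
        k4 = velocity(x + ds * k3)
        x = x + (ds / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return x

# Solid-body rotation v = (-y, x): streamlines are circles, so the
# distance from the origin should be preserved to high order.
end = rk4_streamline(lambda x: np.array([-x[1], x[0]]), [1.0, 0.0], 0.01, 700)
radius_err = abs(np.linalg.norm(end) - 1.0)
```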
Implementation of a kappa-epsilon turbulence model to RPLUS3D code
NASA Technical Reports Server (NTRS)
Chitsomboon, Tawit
1992-01-01
The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite-rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a lower-upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code; except for the Jacobian matrix of the implicit chemistry source terms, no matrix inversion is required even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for flow over a flat plate, and results of these validation studies are shown. Some difficulties in implementing the k-epsilon equations in the code are also discussed.
A new 3-D integral code for computation of accelerator magnets
Turner, L.R.; Kettunen, L.
1991-01-01
For computing accelerator magnets, integral codes have several advantages over finite element codes: far-field boundaries are treated automatically, and the computed fields in the bore region satisfy Maxwell's equations exactly. A new integral code employing edge elements rather than nodal elements has overcome the difficulties associated with earlier integral codes. By the use of field integrals (potential differences) as solution variables, the number of unknowns is reduced to one less than the number of nodes. Two examples, a hollow iron sphere and the dipole magnet of the Advanced Photon Source injector synchrotron, show the capability of the code. The CPU time requirements are comparable to those of three-dimensional (3-D) finite element codes. Experiments show that in practice the code can realize much of the potential CPU time saving that parallel processing makes possible.
NASA Astrophysics Data System (ADS)
Yaron, Avi; Bar-Zohar, Meir; Horesh, Nadav
2007-02-01
Sophisticated surgeries require the integration of several medical imaging modalities, like MRI and CT, which are three-dimensional. Many efforts are invested in providing the surgeon with this information in an intuitive and easy-to-use manner. A notable development, made by Visionsense, enables the surgeon to visualize the scene in 3D using a miniature stereoscopic camera. It also provides real-time 3D measurements that allow registration of navigation systems as well as 3D imaging modalities, overlaying these images on the stereoscopic video image in real time. The real-time MIS 'see through tissue' fusion solutions enable the development of new MIS procedures in various surgical segments, such as spine, abdomen, cardio-thoracic and brain. This paper describes 3D surface reconstruction and registration methods using the Visionsense camera, as a step toward fully automated multi-modality 3D registration.
RELAP5-3D Code Includes ATHENA Features and Models
Richard A. Riemke; Cliff B. Davis; Richard R. Schultz
2006-07-01
Version 2.3 of the RELAP5-3D computer program includes all features and models previously available only in the ATHENA version of the code. These include the addition of new working fluids (i.e., ammonia, blood, carbon dioxide, glycerol, helium, hydrogen, lead-bismuth, lithium, lithium-lead, nitrogen, potassium, sodium, and sodium-potassium) and a magnetohydrodynamic model that expands the capability of the code to model many more thermal-hydraulic systems. In addition to the new working fluids along with the standard working fluid water, one or more noncondensable gases (e.g., air, argon, carbon dioxide, carbon monoxide, helium, hydrogen, krypton, nitrogen, oxygen, SF6, xenon) can be specified as part of the vapor/gas phase of the working fluid. These noncondensable gases were in previous versions of RELAP5-3D. Recently four molten salts have been added as working fluids to RELAP5-3D Version 2.4, which has had limited release. These molten salts will be in RELAP5-3D Version 2.5, which will have a general release like RELAP5-3D Version 2.3. Applications that use these new features and models are discussed in this paper.
Edge Transport Modeling using the 3D EMC3-Eirene code on Tokamaks and Stellarators
NASA Astrophysics Data System (ADS)
Lore, J. D.; Ahn, J. W.; Briesemeister, A.; Ferraro, N.; Labombard, B.; McLean, A.; Reinke, M.; Shafer, M.; Terry, J.
2015-11-01
The fluid plasma edge transport code EMC3-Eirene has been applied to aid data interpretation and improve understanding of the results of experiments with 3D effects on several tokamaks. These include applied and intrinsic 3D magnetic fields, 3D plasma-facing components, and toroidally and poloidally localized heat and particle sources. On Alcator C-Mod, a series of experiments explored the impact of toroidally and poloidally localized impurity gas injection on core confinement and on asymmetries in the divertor fluxes; the differences between the asymmetry in L-mode and H-mode were qualitatively reproduced in the simulations due to changes in the impurity ionization in the private flux region. Modeling of NSTX experiments on the effect of 3D fields on detachment matched the trend of a higher density at which detachment occurs when 3D fields are applied. On DIII-D, different magnetic field models were used in the simulation and compared against the 2D Thomson scattering diagnostic. In simulating each device, different aspects of the code model are tested, pointing to areas where the model must be further developed. The application to stellarator experiments will also be discussed. Work supported by U.S. DOE: DE-AC05-00OR22725, DE-AC02-09CH11466, DE-FC02-99ER54512, and DE-FC02-04ER54698.
ATHENA 3D: A finite element code for ultrasonic wave propagation
NASA Astrophysics Data System (ADS)
Rose, C.; Rupin, F.; Fouquet, T.; Chassignole, B.
2014-04-01
The understanding of wave propagation phenomena requires robust numerical models. 3D finite element (FE) models are generally prohibitively time consuming; however, advances in processor speed and memory are making them more and more competitive. In this context, EDF R&D developed the 3D version of the well-validated FE code ATHENA2D. The code is dedicated to the simulation of wave propagation in all kinds of elastic media and, in particular, heterogeneous and anisotropic materials like welds. It is based on solving the elastodynamic equations in the calculation zone expressed in terms of stress and particle velocities. A distinctive feature of the code is that the calculation domain is discretized on a Cartesian regular 3D mesh, while a defect of complex geometry can be described on a separate (2D) mesh using the fictitious domains method. This combines the rapidity of regular-mesh computation with the capability of modelling arbitrarily shaped defects. Furthermore, the calculation domain is discretized with a quasi-explicit time evolution scheme, so that only local linear systems of small size have to be solved. The final step to reduce the computation time relies on the fact that ATHENA3D has been parallelized and adapted to the use of HPC resources. In this paper, the validation of the 3D FE model is discussed. A cross-validation of ATHENA3D and CIVA is proposed for several inspection configurations. The performance in terms of calculation time is also presented for both local computer and computation cluster use.
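The stress/particle-velocity formulation described above can be illustrated in one dimension on a staggered grid. This is a generic elastodynamic FDTD sketch with made-up material constants, not ATHENA3D's actual scheme:

```python
import numpy as np

# 1-D velocity-stress formulation on a staggered grid, with stress and
# particle velocity as the unknowns. Material constants are arbitrary.
nx, nt = 200, 300
rho, E = 1000.0, 4.0e9          # density, elastic modulus (illustrative)
c = np.sqrt(E / rho)            # wave speed
dx = 1.0
dt = 0.5 * dx / c               # CFL-stable time step

v = np.zeros(nx)                # particle velocity at cell centers
s = np.zeros(nx + 1)            # stress at cell faces (staggered)
v[nx // 2] = 1.0                # initial velocity pulse

for _ in range(nt):
    s[1:-1] += dt * E * (v[1:] - v[:-1]) / dx   # Hooke's law update
    v += dt * (s[1:] - s[:-1]) / (dx * rho)     # momentum update

# With stress-free ends (s[0] = s[-1] = 0) total momentum is conserved,
# which gives a simple correctness check on the update loop.
```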
The emerging High Efficiency Video Coding standard (HEVC)
NASA Astrophysics Data System (ADS)
Raja, Gulistan; Khan, Awais
2013-12-01
High-definition video (HDV) is becoming increasingly popular. This paper presents a performance analysis of the latest video coding standard, High Efficiency Video Coding (HEVC). HEVC is designed to fulfil the requirements of future high-definition video. In this paper, three configurations (intra only, low delay, and random access) of HEVC are analyzed using various 480p, 720p, and 1080p high-definition test video sequences. Simulation results show the superior objective and subjective quality of HEVC.
Development of Unsteady Aerodynamic and Aeroelastic Reduced-Order Models Using the FUN3D Code
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Vatsa, Veer N.; Biedron, Robert T.
2009-01-01
Recent significant improvements to the development of CFD-based unsteady aerodynamic reduced-order models (ROMs) are implemented into the FUN3D unstructured flow solver. These improvements include the simultaneous excitation of the structural modes of the CFD-based unsteady aerodynamic system via a single CFD solution, minimization of the error between the full CFD and the ROM unsteady aerodynamic solution, and computation of a root locus plot of the aeroelastic ROM. Results are presented for a viscous version of the two-dimensional Benchmark Active Controls Technology (BACT) model and an inviscid version of the AGARD 445.6 aeroelastic wing using the FUN3D code.
Bit allocation for joint coding of multiple video programs
NASA Astrophysics Data System (ADS)
Wang, Limin; Vincent, Andre
1997-01-01
By dynamically distributing the channel capacity among video programs according to their respective scene complexities, joint coding has been shown to be more efficient than independent coding for compression of multiple video programs. This paper examines the bit allocation issue for joint coding of multiple video programs and provides a bit allocation strategy that results in uniform picture quality among programs as well as within a program.
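One simple allocation model consistent with the idea above, sharing the channel in proportion to scene complexity, can be sketched as follows. The proportional rule and the numbers are illustrative, not the authors' exact strategy:

```python
def allocate_bits(total_rate, complexities):
    """Toy joint bit allocation: split the channel capacity among programs
    in proportion to each program's scene complexity measure."""
    total_c = sum(complexities)
    return [total_rate * c / total_c for c in complexities]

# Three programs sharing a hypothetical 6 Mbit/s channel; the third,
# most complex program receives the largest share.
rates = allocate_bits(6.0e6, [1.0, 2.0, 3.0])
```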
Ui, Atsushi; Miyaji, Takamasa
2004-10-15
The best-estimate coupled three-dimensional (3-D) core and thermal-hydraulic code system TRAC-BF1/COS3D has been developed. COS3D, based on a modified one-group neutronic model, is a 3-D core simulator used for licensing analyses and core management of commercial boiling water reactor (BWR) plants in Japan. TRAC-BF1 is a plant simulator based on a two-fluid model. TRAC-BF1/COS3D is a coupled system of both codes, which are connected using a parallel computing tool. This code system was applied to the OECD/NRC BWR Turbine Trip Benchmark. Since the two-group cross-section tables are provided by the benchmark team, COS3D was modified to apply to this specification. Three best-estimate scenarios and four hypothetical scenarios were calculated using this code system. In the best-estimate scenario, the predicted core power with TRAC-BF1/COS3D is slightly underestimated compared with the measured data. The reason seems to be a slight difference in the core boundary conditions, that is, pressure changes and the core inlet flow distribution, because the peak in this analysis is sensitive to them. However, the results of this benchmark analysis show that TRAC-BF1/COS3D gives good precision for the prediction of the actual BWR transient behavior on the whole. Furthermore, the results with the modified one-group model and the two-group model were compared to verify the application of the modified one-group model to this benchmark. This comparison shows that the results of the modified one-group model are appropriate and sufficiently precise.
Li, Yong Gang; Yang, Yang; Short, Michael P.; Ding, Ze Jun; Zeng, Zhi; Li, Ju
2015-01-01
SRIM-like codes have limitations in describing general 3D geometries, for modeling radiation displacements and damage in nanostructured materials. A universal, computationally efficient and massively parallel 3D Monte Carlo code, IM3D, has been developed with excellent parallel scaling performance. IM3D is based on fast indexing of scattering integrals and the SRIM stopping power database, and allows the user a choice of Constructive Solid Geometry (CSG) or Finite Element Triangle Mesh (FETM) method for constructing 3D shapes and microstructures. For 2D films and multilayers, IM3D perfectly reproduces SRIM results, and can be ∼10² times faster in serial execution and >10⁴ times faster using parallel computation. For 3D problems, it provides a fast approach for analyzing the spatial distributions of primary displacements and defect generation under ion irradiation. Herein we also provide a detailed discussion of our open-source collision cascade physics engine, revealing the true meaning and limitations of the “Quick Kinchin-Pease” and “Full Cascades” options. The issues of femtosecond to picosecond timescales in defining displacement versus damage, and the limitation of the displacements per atom (DPA) unit in quantifying radiation damage (such as inadequacy in quantifying degree of chemical mixing), are discussed. PMID:26658477
Equation-of-State Test Suite for the DYNA3D Code
Benjamin, Russell D.
2015-11-05
This document describes the creation and implementation of a test suite for the equation-of-state models in the DYNA3D code. A customized input deck has been created for each model, as well as a script that extracts the relevant data from the high-speed edit file created by DYNA3D. Each equation-of-state model is broken apart, and individual elements of the model are tested as well as the entire model. The input deck for each model is described and the results of the tests are discussed. The intent of this work is to add this test suite to the validation suite presently used for DYNA3D.
Vay, J.-L.; Furman, M.A.; Azevedo, A.W.; Cohen, R.H.; Friedman, A.; Grote, D.P.; Stoltz, P.H.
2004-04-19
We have integrated the electron-cloud code POSINST [1] with WARP [2]--a 3-D parallel Particle-In-Cell accelerator code developed for Heavy Ion Inertial Fusion--so that the two can interoperate. Both codes are run in the same process, communicate through a Python interpreter (already used in WARP), and share certain key arrays (so far, particle positions and velocities). Currently, POSINST provides primary and secondary sources of electrons, beam bunch kicks, a particle mover, and diagnostics. WARP provides the field solvers and diagnostics. Secondary emission routines are provided by the Tech-X package CMEE.
Wall touching kink mode calculations with the M3D code
NASA Astrophysics Data System (ADS)
Breslau, J. A.
2014-10-01
In recent years there have been a number of results published concerning the transient vessel currents and forces occurring during a tokamak VDE, as predicted by simulations with the nonlinear MHD code M3D. The nature of the simulations is such that these currents and forces occur at the boundary of the computational domain, making the proper choice of boundary conditions critical to the reliability of the results. The M3D boundary condition includes the prescription that the normal component of the velocity vanish at the wall. It has been argued that this prescription invalidates the calculations because it would seem to rule out the possibility of advection of plasma surface currents into the wall. This claim has been tested by applying M3D to an idealized case, a kink-unstable plasma column, in order to abstract the essential physics from the complications involved in modeling real devices. While comparison of the results is complicated by effects arising from the higher dimensionality and complexity of M3D, we have verified that M3D is capable of reproducing both the correct saturation behavior of the free-boundary kink and the "Hiro" currents arising when the kink interacts with a conducting tile surface interior to the ideal wall.
NASA Astrophysics Data System (ADS)
Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos
2016-04-01
This paper proposes a new approach to improving the application of 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture firstly establishes a software virtualization layer based on QEMU (Quick Emulator), an open-source virtualization software that has been able to virtualize system components except for 3D rendering, which is still in its infancy. The architecture then explores the cloud environment to boost the speed of the rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, which is one of the most advanced 3D virtual Graphics Processing Unit (GPU) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up the rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.
Preliminary results of 3D dose calculations with MCNP-4B code from a SPECT image.
Rodríguez Gual, M; Lima, F F; Sospedra Alfonso, R; González González, J; Calderón Marín, C
2004-01-01
Interface software was developed to generate the input file to run the Monte Carlo MCNP-4B code from a medical image in Interfile format version 3.3. The software was tested using a spherical phantom of tomography slices, with a known cumulated activity distribution, in Interfile format generated with the IMAGAMMA medical image processing system. The 3D dose calculation obtained with the Monte Carlo MCNP-4B code was compared with the voxel S factor method. The results show a relative error between the two methods of less than 1%. PMID:15625058
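The voxel S factor method used as the reference above can be sketched as a convolution of the cumulated activity map with a voxel dose kernel. The 3x3x3 kernel below is a made-up placeholder, not tabulated S values:

```python
import numpy as np

# Absorbed dose = cumulated activity convolved with a voxel S kernel.
activity = np.zeros((8, 8, 8))
activity[4, 4, 4] = 1.0                  # single hot voxel (arbitrary units)

s_kernel = np.zeros((3, 3, 3))
s_kernel[1, 1, 1] = 0.7                  # self-dose term dominates
s_kernel[s_kernel == 0] = 0.3 / 26       # remainder spread over 26 neighbours

# Explicit shift-and-accumulate convolution over the 3x3x3 neighbourhood.
dose = np.zeros_like(activity)
for dz in (-1, 0, 1):
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            dose += s_kernel[dz + 1, dy + 1, dx + 1] * np.roll(
                activity, shift=(dz, dy, dx), axis=(0, 1, 2))
```

The kernel sums to 1, so the total dose equals the total cumulated activity, which is a convenient sanity check.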
Simulations of implosions with a 3D, parallel, unstructured-grid, radiation-hydrodynamics code
Kaiser, T B; Milovich, J L; Prasad, M K; Rathkopf, J; Shestakov, A I
1998-12-28
An unstructured-grid, radiation-hydrodynamics code is used to simulate implosions. Although most of the problems are spherically symmetric, they are run on 3D, unstructured grids in order to test the code's ability to maintain spherical symmetry of the converging waves. Three problems, of increasing complexity, are presented. In the first, a cold, spherical, ideal gas bubble is imploded by an enclosing high pressure source. For the second, we add non-linear heat conduction and drive the implosion with twelve laser beams centered on the vertices of an icosahedron. In the third problem, a NIF capsule is driven with a Planckian radiation source.
An analysis of brightness as a factor in visual discomfort caused by watching stereoscopic 3D video
NASA Astrophysics Data System (ADS)
Kim, Yong-Woo; Kang, Hang-Bong
2015-05-01
Although various studies have examined the factors that cause visual discomfort in watching stereoscopic 3D video, the brightness factor has not been dealt with sufficiently. In this paper, we analyze visual discomfort under various illumination conditions by considering eye-blinking rate and saccadic eye movement. In addition, we measure the perceived depth before and after watching 3D stereoscopic video by using our own 3D depth measurement instruments. Our test sequences consist of six illumination conditions for the background. The illumination is changed from bright to dark or vice versa, while the illumination of the foreground object is constant. Our test procedure is as follows: First, the subjects are rested until a baseline of no visual discomfort is established. Then, the subjects answer six questions to check their subjective pre-stimulus discomfort level. Next, we measure perceived depth for each subject, and the subjects watch 30-minute stereoscopic 3D or 2D video clips in random order. We measured eye-blinking and saccadic movements of each subject using an eye-tracking device. Then, we measured perceived depth for each subject again to detect any changes in depth perception. We also checked the subjects' post-stimulus discomfort levels, and measured the perceived depth after a 40-minute post-experiment resting period to measure recovery levels. After 40 minutes, most subjects returned to normal levels of depth perception. From our experiments, we found that eye-blinking rates were higher with a dark-to-light video progression than vice versa, whereas saccadic eye movements were lower with a dark-to-light video progression than vice versa.
A semi-automatic 2D-to-3D video conversion with adaptive key-frame selection
NASA Astrophysics Data System (ADS)
Ju, Kuanyu; Xiong, Hongkai
2014-11-01
To compensate for the deficit of 3D content, 2D-to-3D video conversion has recently attracted more attention from both industrial and academic communities. Semi-automatic 2D-to-3D conversion, which estimates the corresponding depth of non-key-frames through key-frames, is more desirable owing to its advantage of balancing labor cost and 3D effects. The location of key-frames plays a role in the quality of depth propagation. This paper proposes a semi-automatic 2D-to-3D scheme with adaptive key-frame selection to keep temporal continuity more reliable and reduce the depth propagation errors caused by occlusion. The potential key-frames are localized in terms of clustered color variation and motion intensity. The distance of the key-frame interval is also taken into account to keep the accumulated propagation errors under control and guarantee minimal user interaction. Once their depth maps are aligned with user interaction, the non-key-frame depth maps are automatically propagated by shifted bilateral filtering. Considering that the depth of objects may change due to object motion or camera zoom in/out, a bi-directional depth propagation scheme is adopted in which a non-key-frame is interpolated from the two adjacent key frames. The experimental results show that the proposed scheme performs better than an existing 2D-to-3D scheme with a fixed key-frame interval.
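The bi-directional propagation step can be illustrated by blending depth from the two enclosing key frames with temporal-distance weights. This toy sketch ignores the shifted bilateral filtering stage and uses invented depth values:

```python
import numpy as np

def bidirectional_depth(depth_prev, depth_next, t, t_prev, t_next):
    """Toy bi-directional propagation: a non-key frame at time t blends the
    depth maps of the two enclosing key frames, weighted by temporal distance."""
    w = (t - t_prev) / (t_next - t_prev)
    return (1.0 - w) * depth_prev + w * depth_next

d0 = np.full((4, 4), 10.0)      # depth map at key frame t = 0 (made up)
d1 = np.full((4, 4), 20.0)      # depth map at key frame t = 10 (made up)
d_mid = bidirectional_depth(d0, d1, t=5, t_prev=0, t_next=10)
```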
The Transient 3-D Transport Coupled Code TORT-TD/ATTICA3D for High-Fidelity Pebble-Bed HTGR Analyses
NASA Astrophysics Data System (ADS)
Seubert, Armin; Sureda, Antonio; Lapins, Janis; Bader, Johannes; Laurien, Eckart
2012-01-01
This article describes the 3D discrete ordinates-based coupled code system TORT-TD/ATTICA3D that aims at steady state and transient analyses of pebble-bed high-temperature gas cooled reactors. In view of increasing computing power, the application of time-dependent neutron transport methods becomes feasible for best estimate evaluations of safety margins. The calculation capabilities of TORT-TD/ATTICA3D are presented along with the coupling approach, with focus on the time-dependent neutron transport features of TORT-TD. Results obtained for the OECD/NEA/NSC PBMR-400 benchmark demonstrate the transient capabilities of TORT-TD/ATTICA3D.
User Guide for the R5EXEC Coupling Interface in the RELAP5-3D Code
Forsmann, J. Hope; Weaver, Walter L.
2015-04-01
This report describes the R5EXEC coupling interface in the RELAP5-3D computer code from the user's perspective. The information in the report is intended for users who want to couple RELAP5-3D to other thermal-hydraulic, neutron kinetics, or control system simulation codes.
Development of a GPU-Accelerated 3-D Full-Wave Code for Reflectometry Simulations
NASA Astrophysics Data System (ADS)
Reuther, K. S.; Kubota, S.; Feibush, E.; Johnson, I.
2013-10-01
1-D and 2-D full-wave codes used as synthetic diagnostics in microwave reflectometry are standard tools for understanding electron density fluctuations in fusion plasmas. The accuracy of the code depends on how well the wave properties along the ignored dimensions can be pre-specified or neglected. In a toroidal magnetic geometry, such assumptions are never strictly correct, and ray tracing has shown that beam propagation is inherently a 3-D problem. Previously, we reported on the application of GPGPUs (General-Purpose computing on Graphics Processing Units) to a 2-D FDTD (Finite-Difference Time-Domain) code ported to utilize the parallel processing capabilities of the NVIDIA C870 and C1060. Here, we report on the development of a FDTD code for 3-D problems. Initial tests will use NVIDIA's M2070 GPU and concentrate on the launching and propagation of Gaussian beams in free space. If available, results using a plasma target will also be presented. Performance will be compared with previous generations of GPGPU cards as well as with NVIDIA's newest K20C GPU. Finally, the possibility of utilizing multiple GPGPU cards in a cluster environment or in a single node will also be discussed. Supported by U.S. DoE Grants DE-FG02-99-ER54527 and DE-AC02-09CH11466 and the DoE National Undergraduate Fusion Fellowship.
Parameterized code SHARM-3D for radiative transfer over inhomogeneous surfaces
NASA Astrophysics Data System (ADS)
Lyapustin, Alexei; Wang, Yujie
2005-12-01
The code SHARM-3D, developed for fast and accurate simulations of the monochromatic radiance at the top of the atmosphere over spatially variable surfaces with Lambertian or anisotropic reflectance, is described. The atmosphere is assumed to be laterally uniform across the image and to consist of two layers with aerosols contained in the bottom layer. The SHARM-3D code performs simultaneous calculations for all specified incidence-view geometries and multiple wavelengths in one run. The numerical efficiency of the current version of code is close to its potential limit and is achieved by means of two innovations. The first is the development of a comprehensive precomputed lookup table of the three-dimensional atmospheric optical transfer function for various atmospheric conditions. The second is the use of a linear kernel model of the land surface bidirectional reflectance factor (BRF) in our algorithm that has led to a fully parameterized solution in terms of the surface BRF parameters. The code is also able to model inland lakes and rivers. The water pixels are described with the Nakajima-Tanaka BRF model of wind-roughened water surface with a Lambertian offset, which is designed to model approximately the reflectance of suspended matter and of a shallow lake or river bottom.
A 3-D Vortex Code for Parachute Flow Predictions: VIPAR Version 1.0
STRICKLAND, JAMES H.; HOMICZ, GREGORY F.; PORTER, VICKI L.; GOSSLER, ALBERT A.
2002-07-01
This report describes a 3-D fluid mechanics code for predicting flow past bluff bodies whose surfaces can be assumed to be made up of shell elements that are simply connected. Version 1.0 of the VIPAR code (Vortex Inflation PARachute code) is described herein. This version contains several first order algorithms that we are in the process of replacing with higher order ones. These enhancements will appear in the next version of VIPAR. The present code contains a motion generator that can be used to produce a large class of rigid body motions. The present code has also been fully coupled to a structural dynamics code in which the geometry undergoes large time dependent deformations. Initial surface geometry is generated from triangular shell elements using a code such as Patran and is written into an ExodusII database file for subsequent input into VIPAR. Surface and wake variable information is output into two ExodusII files that can be post-processed and viewed using software such as EnSight™.
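A basic building block of vortex codes of this kind is the Biot-Savart velocity induced by a straight vortex filament segment. The sketch below uses the standard formula (with a small core term to avoid the singularity) and is illustrative rather than VIPAR's implementation:

```python
import numpy as np

def biot_savart_velocity(x, seg_a, seg_b, gamma, core=1e-6):
    """Induced velocity at point x from a straight vortex filament running
    from seg_a to seg_b with circulation gamma (standard Biot-Savart law)."""
    r1 = x - seg_a
    r2 = x - seg_b
    cross = np.cross(r1, r2)
    denom = np.linalg.norm(cross) ** 2 + core   # regularized core
    seg = seg_b - seg_a
    k = gamma / (4.0 * np.pi) * (
        np.dot(seg, r1) / np.linalg.norm(r1)
        - np.dot(seg, r2) / np.linalg.norm(r2)
    ) / denom
    return k * cross

# A point at unit distance from a very long filament along z should see a
# speed close to the infinite-filament value gamma / (2*pi*d).
v = biot_savart_velocity(np.array([1.0, 0.0, 0.0]),
                         np.array([0.0, 0.0, -1000.0]),
                         np.array([0.0, 0.0, 1000.0]),
                         gamma=1.0)
```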
PRONTO3D users' instructions: A transient dynamic code for nonlinear structural analysis
Attaway, S.W.; Mello, F.J.; Heinstein, M.W.; Swegle, J.W.; Ratner, J.A.; Zadoks, R.I.
1998-06-01
This report provides an updated set of users' instructions for PRONTO3D. PRONTO3D is a three-dimensional, transient, solid dynamics code for analyzing large deformations of highly nonlinear materials subjected to extremely high strain rates. This Lagrangian finite element program uses an explicit time integration operator to integrate the equations of motion. Eight-node, uniform strain, hexahedral elements and four-node, quadrilateral, uniform strain shells are used in the finite element formulation. An adaptive time step control algorithm is used to improve stability and performance in plasticity problems. Hourglass distortions can be eliminated without disturbing the finite element solution using either the Flanagan-Belytschko hourglass control scheme or an assumed strain hourglass control scheme. All constitutive models in PRONTO3D are cast in an unrotated configuration defined using the rotation determined from the polar decomposition of the deformation gradient. A robust contact algorithm allows for the impact and interaction of deforming contact surfaces of quite general geometry. The Smooth Particle Hydrodynamics method has been embedded into PRONTO3D using the contact algorithm to couple it with the finite element method.
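The explicit time integration operator mentioned above can be illustrated on a single-degree-of-freedom spring-mass system with a central-difference (velocity Verlet) update. The constants are arbitrary; this is not PRONTO3D code:

```python
import numpy as np

# Explicit central-difference integration of m*u'' = -k*u.
m = 1.0
k = 4.0 * np.pi ** 2            # stiffness chosen so the period T = 1.0
dt = 1.0e-4
u, vel = 1.0, 0.0               # initial displacement and velocity
a = -k * u / m                  # initial acceleration from internal force

for _ in range(10000):          # integrate over exactly one period
    v_half = vel + 0.5 * dt * a          # half-step velocity (kick)
    u = u + dt * v_half                  # displacement update (drift)
    a = -k * u / m                       # new internal force -> acceleration
    vel = v_half + 0.5 * dt * a          # complete the velocity update (kick)
```

After one full period the displacement should return very close to its initial value, since the scheme's phase error at this small time step is tiny.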
A 3d particle simulation code for heavy ion fusion accelerator studies
Friedman, A.; Bangerter, R.O.; Callahan, D.A.; Grote, D.P.; Langdon, A.B.; Haber, I.
1990-06-08
We describe WARP, a new particle-in-cell code being developed and optimized for ion beam studies in true geometry. We seek to model transport around bends, axial compression with strong focusing, multiple beamlet interaction, and other inherently 3d processes that affect emittance growth. Constraints imposed by memory and running time are severe. Thus, we employ only two 3d field arrays (ρ and φ), and difference φ directly at each particle to get E, rather than interpolating E from three meshes; use of a single 3d array is feasible. A new method for PIC simulation of bent beams follows the beam particles in a family of rotated laboratory frames, thus "straightening" the bends. We are also incorporating an envelope calculation, an (r, z) model, and a 1d (axial) model within WARP. The BASIS development and run-time system is used, providing a powerful interactive environment in which the user has access to all variables in the code database.
Spacecraft charging analysis with the implicit particle-in-cell code iPic3D
Deca, J.; Lapenta, G.; Marchand, R.; Markidis, S.
2013-10-15
We present the first results on the analysis of spacecraft charging with the implicit particle-in-cell code iPic3D, designed for running on massively parallel supercomputers. The numerical algorithm is presented, highlighting the implementation of the electrostatic solver and the immersed boundary algorithm, the latter of which creates the possibility to handle complex spacecraft geometries. As a first step in the verification process, a comparison is made between the floating potential obtained with iPic3D and with Orbital Motion Limited theory for a spherical particle in a uniform stationary plasma. Second, the numerical model is verified for a CubeSat benchmark by comparing simulation results with those of PTetra for space environment conditions with increasing levels of complexity. In particular, we consider spacecraft charging from plasma particle collection, photoelectron and secondary electron emission. The influence of a background magnetic field on the floating potential profile near the spacecraft is also considered. Although the numerical approaches in iPic3D and PTetra are rather different, good agreement is found between the two models, raising the level of confidence in both codes to predict and evaluate the complex plasma environment around spacecraft.
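The Orbital Motion Limited comparison can be sketched by solving the OML current balance for the floating potential of a small sphere. The hydrogen-plasma, equal-temperature assumptions below are illustrative choices, not the paper's exact setup:

```python
import numpy as np

# OML floating condition for a sphere in a stationary plasma with Te = Ti:
# electron flux reduced by exp(chi) balances the attracted-ion OML flux,
#   exp(chi) = sqrt(me/mi) * (1 - chi),   chi = e*phi / kT < 0.
me_over_mi = 1.0 / 1836.0                  # hydrogen plasma (assumption)

def current_balance(chi):
    """Monotonically increasing in chi; its root is the floating potential."""
    return np.exp(chi) - np.sqrt(me_over_mi) * (1.0 - chi)

# Simple bisection on a bracketing interval.
lo, hi = -10.0, 0.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if current_balance(mid) > 0.0:
        hi = mid
    else:
        lo = mid
phi_norm = 0.5 * (lo + hi)                 # floating potential in units of kT/e
```

For these assumptions the normalized floating potential comes out near -2.5, the familiar few-kT/e negative potential of a floating body in a hydrogen plasma.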
A new multimodal interactive way of subjective scoring of 3D video quality of experience
NASA Astrophysics Data System (ADS)
Kim, Taewan; Lee, Kwanghyun; Lee, Sanghoon; Bovik, Alan C.
2014-03-01
People who watch today's 3D visual programs, such as 3D cinema, 3D TV and 3D games, experience wide and dynamically varying ranges of 3D visual immersion and 3D quality of experience (QoE). It is necessary to be able to deploy reliable methodologies that measure each viewer's subjective experience. We propose a new methodology that we call Multimodal Interactive Continuous Scoring of Quality (MICSQ). MICSQ is composed of a device interaction process between the 3D display and a separate device (PC, tablet, etc.) used as an assessment tool, and a human interaction process between the subject(s) and the device. The scoring process is multimodal, using aural and tactile cues to help engage and focus the subject(s) on their tasks. Moreover, the wireless device interaction process makes it possible for multiple subjects to assess 3D QoE simultaneously in a large space such as a movie theater, and at different visual angles and distances.
GPU-accelerated 3D neutron diffusion code based on finite difference method
Xu, Q.; Yu, G.; Wang, K.
2012-07-01
The finite difference method, a traditional numerical solution to the neutron diffusion equation, is considered simpler and more precise than coarse-mesh nodal methods, but the huge memory and computation time it requires have been a bottleneck to its wide application. In recent years, the concept of general-purpose computation on GPUs has provided a powerful computational engine for scientific research. In this study, a GPU-accelerated multi-group 3D neutron diffusion code based on the finite difference method was developed. First, a clean-sheet neutron diffusion code (3DFD-CPU) was written in C++ on the CPU architecture, and later ported to GPUs under NVIDIA's CUDA platform (3DFD-GPU). The IAEA 3D PWR benchmark problem was calculated in the numerical test, where three different codes, including the original CPU-based sequential code, the HYPRE (High Performance Preconditioners)-based diffusion code and CITATION, were used as counterpoints to test the efficiency and accuracy of the GPU-based program. The results demonstrate both high efficiency and adequate accuracy of the GPU implementation for the neutron diffusion equation. A speedup factor of about 46 was obtained using NVIDIA's GeForce GTX470 GPU card against a 2.50 GHz Intel Quad Q9300 CPU processor. Compared with the HYPRE-based code running in parallel on an 8-core tower server, a speedup of about 2 could still be observed. More encouragingly, without any mathematical acceleration technology, the GPU implementation ran about 5 times faster than CITATION, which was sped up by using the SOR method and Chebyshev extrapolation technique. (authors)
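The finite-difference discretization underlying such a solver reduces, in one dimension and one energy group, to a sparse linear system. The following is a minimal Python sketch, not the 3DFD code; the diffusion coefficient, absorption cross-section, source, and grid are all illustrative:

```python
import numpy as np

# Minimal sketch: one-group, 1D neutron diffusion
#   -D phi'' + Sigma_a phi = S,   phi(0) = phi(L) = 0
# discretized with second-order central differences.
D, sigma_a, S = 1.0, 0.1, 1.0    # illustrative constants
L, n = 10.0, 101
h = L / (n - 1)

A = np.zeros((n, n))
b = np.full(n, S)
for i in range(1, n - 1):
    A[i, i - 1] = -D / h**2
    A[i, i]     = 2 * D / h**2 + sigma_a
    A[i, i + 1] = -D / h**2
A[0, 0] = A[-1, -1] = 1.0        # Dirichlet boundary rows
b[0] = b[-1] = 0.0

phi = np.linalg.solve(A, b)      # flux profile, peaked at the center
```

For constant S the analytic peak is (S/Σa)(1 − 1/cosh(L/(2√(D/Σa)))) ≈ 6.05 here, which the discrete solution reproduces closely; a production 3D code would replace the dense direct solve with a sparse iterative method suited to GPUs.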
FURN3D: A computer code for radiative heat transfer in pulverized coal furnaces
Ahluwalia, R.K.; Im, K.H.
1992-08-01
A computer code FURN3D has been developed for assessing the impact of burning different coals on the heat absorption pattern in pulverized coal furnaces. The code is unique in its ability to conduct detailed spectral calculations of radiation transport in furnaces, fully accounting for the size distributions of char, soot and ash particles, ash content, and ash composition. The code uses a hybrid technique for solving the three-dimensional radiation transport equation for absorbing, emitting and anisotropically scattering media. The technique achieves an optimal mix of computational speed and accuracy by combining the discrete ordinate method (S[sub 4]), modified differential approximation (MDA) and P[sub 1] approximation in different ranges of optical thickness. The code uses spectroscopic data for estimating the absorption coefficients of the participating gases CO[sub 2], H[sub 2]O and CO. It invokes Mie theory for determining the extinction and scattering coefficients of combustion particulates. The optical constants of char, soot and ash are obtained from dispersion relations derived from reflectivity, transmissivity and extinction measurements. A control-volume formulation is adopted for determining the temperature field inside the furnace. A simple char burnout model is employed for estimating heat release and the evolution of the particle size distribution. The code is written in Fortran 77, has a modular form, and is machine-independent. The computer memory required by the code depends upon the number of grid points specified and whether the transport calculations are performed on a spectral or gray basis.
Validation of CATHARE 3D Code Against UPTF TRAM C3 Transients
NASA Astrophysics Data System (ADS)
Glantz, Tony; Freitas, Roberto
Within nuclear reactor safety analysis, one of the events that could potentially lead to a re-criticality accident in case of a Small Break Loss of Coolant Accident (SBLOCA) in a Pressurized Water Reactor (PWR) is a boron dilution scenario followed by a coolant mixing transient. Some UPTF experiments can be interpreted as generic boron dilution experiments. In fact, the UPTF experiments were originally designed for separate-effects studies focused on multi-dimensional thermal-hydraulic phenomena. However, in the experimental program TRAM, some studies were devoted to boron mixing: the C3 tests. Some of these tests have been used for the validation and assessment of the 3D module of the CATHARE code. Results are very satisfying: the CATHARE 3D code correctly reproduces the main features of the UPTF TRAM C3 tests, namely the temperature mixing in the cold leg, the formation of a strong stratification in the upper downcomer, the perfect mixing temperature in the lower downcomer and the strong stratification in the lower plenum. These results are also compared with the results of the CFX5 and TRIO-U codes on these tests.
3D deformable organ model based liver motion tracking in ultrasound videos
NASA Astrophysics Data System (ADS)
Kim, Jung-Bae; Hwang, Youngkyoo; Oh, Young-Taek; Bang, Won-Chul; Lee, Heesae; Kim, James D. K.; Kim, Chang Yeong
2013-03-01
This paper presents a novel method of using 2D ultrasound (US) cine images during image-guided therapy to accurately track the 3D position of a tumor even when the organ of interest is moving due to patient respiration. Tracking is possible thanks to a 3D deformable organ model we have developed. The method consists of three processes in succession. The first process is organ modeling, in which we generate a personalized 3D organ model from high-quality 3D CT or MR data sets captured during three different respiratory phases. The model includes the organ surface, vessels and tumor, which can all deform and move in accord with patient respiration. The second process is registration of the organ model to 3D US images. From 133 respiratory phase candidates generated from the deformable organ model, we select the candidate that best matches the 3D US images according to vessel centerline and surface. As a result, we can determine the position of the US probe. The final process is real-time tracking using 2D US cine images captured by the US probe. We determine the respiratory phase by tracking the diaphragm on the image. The 3D model is then deformed according to the respiratory phase and fitted to the image by considering the positions of the vessels. The tumor's 3D position is then inferred from the respiratory phase. Testing our method on real patient data, we found the accuracy of the 3D position to be within 3.79 mm and the processing time to be 5.4 ms during tracking.
Spatial parallelism of a 3D finite difference, velocity-stress elastic wave propagation code
Minkoff, S.E.
1999-12-01
Finite difference methods for solving the wave equation more accurately capture the physics of waves propagating through the earth than asymptotic solution methods. Unfortunately, finite difference simulations for 3D elastic wave propagation are expensive. The authors model waves in a 3D isotropic elastic earth. The wave equation solution consists of three velocity components and six stresses. The partial derivatives are discretized using 2nd-order-in-time and 4th-order-in-space staggered finite difference operators. Staggered schemes allow one to obtain additional accuracy (via centered finite differences) without requiring additional storage. The serial code is notable for its ability to model a number of different types of seismic sources. The parallel implementation uses the MPI library, thus allowing for portability between platforms. Spatial parallelism provides a highly efficient strategy for parallelizing finite difference simulations. In this implementation, one can decompose the global problem domain into one-, two-, and three-dimensional processor decompositions, with 3D decompositions generally producing the best parallel speedup. Because I/O is handled largely outside of the time-step loop (the most expensive part of the simulation) the authors have opted for straightforward broadcast and reduce operations to handle I/O. The majority of the communication in the code consists of passing subdomain face information to neighboring processors for use as ghost cells. When this communication is balanced against computation by allocating subdomains of reasonable size, they observe excellent scaled speedup. Allocating subdomains of size 25 x 25 x 25 on each node, they achieve efficiencies of 94% on 128 processors. Numerical examples for both a layered earth model and a homogeneous medium with a high-velocity blocky inclusion illustrate the accuracy of the parallel code.
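The ghost-cell exchange described here can be emulated serially, without MPI, to show the pattern: each subdomain carries one ghost cell per face, faces are copied into the neighbors' ghosts after each step, and the stencil is then applied slab by slab. A minimal 1D Python sketch under illustrative assumptions (periodic ends, toy sizes, not the authors' code):

```python
import numpy as np

# Serial sketch of the ghost-cell pattern: each "rank" owns a slab plus
# one ghost cell on each side; faces are copied into neighbor ghosts.
n, nranks = 24, 3
u_global = np.sin(np.linspace(0, 2 * np.pi, n, endpoint=False))

chunk = n // nranks
slabs = [np.empty(chunk + 2) for _ in range(nranks)]  # +2 ghost cells
for r in range(nranks):
    slabs[r][1:-1] = u_global[r * chunk:(r + 1) * chunk]

def exchange(slabs):
    """Copy face values into neighbor ghost cells (periodic domain)."""
    for r in range(len(slabs)):
        slabs[r][0]  = slabs[r - 1][-2]                  # left ghost
        slabs[r][-1] = slabs[(r + 1) % len(slabs)][1]    # right ghost

exchange(slabs)
# second-difference stencil applied independently on each slab
local = np.concatenate([s[:-2] - 2 * s[1:-1] + s[2:] for s in slabs])
# reference: the same stencil applied to the undecomposed array
ref = np.roll(u_global, 1) - 2 * u_global + np.roll(u_global, -1)
```

With the halo width matching the stencil radius, each step needs only nearest-neighbor face traffic; the favorable surface-to-volume ratio of a 25 x 25 x 25 subdomain is what makes the reported parallel efficiency possible.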
An easy implementation of displacement calculations in 3D discrete dislocation dynamics codes
NASA Astrophysics Data System (ADS)
Fivel, Marc; Depres, Christophe
2014-10-01
Barnett's coordinate-free expression for the displacement field of a triangular loop in an isotropic medium is revisited with a view to implementation in 3D discrete dislocation dynamics codes. A general meshing procedure solving the problems of non-planar loops is presented. The method is user-friendly and can be used in numerical simulations since it gives the contribution of each dislocation segment to the global displacement field without defining the connectivity of closed loops. Easy to implement in parallel calculations, this method is successfully applied to large-scale simulations.
3D and 4D Simulations of the Dynamics of the Radiation Belts using VERB code
NASA Astrophysics Data System (ADS)
Shprits, Yuri; Kellerman, Adam; Drozdov, Alexander; Orlova, Ksenia
2015-04-01
Modeling and understanding of the ring current and the higher energy radiation belts has been a grand challenge since the beginning of the space age. In this study we show long-term simulations of the radiation belts with the 3D VERB code, with boundary conditions derived from observations around geosynchronous orbit. We also present 4D VERB simulations that include convective transport, radial diffusion, pitch angle scattering and local acceleration. We show that lower energy radial transport is dominated by convection, while higher energy transport is dominated by diffusive radial transport. We also show that there exists an intermediate range of electron energies for which both processes work simultaneously.
FDFD: A 3D Finite-Difference Frequency-Domain Code for Electromagnetic Induction Tomography
NASA Astrophysics Data System (ADS)
Champagne, Nathan J.; Berryman, James G.; Buettner, H. Michael
2001-07-01
A new 3D code for electromagnetic induction tomography with intended applications to environmental imaging problems has been developed. The approach consists of calculating the fields within a volume using an implicit finite-difference frequency-domain formulation. The volume is terminated by an anisotropic perfectly matched layer region that simulates an infinite domain by absorbing outgoing waves. Extensive validation of this code has been done using analytical and semianalytical results from other codes, and some of those results are presented in this paper. The new code is written in Fortran 90 and is designed to be easily parallelized. Finally, an adjoint field method of data inversion, developed in parallel for solving the fully nonlinear inverse problem for electrical conductivity imaging (e.g., for mapping underground conducting plumes), uses this code to provide solvers for both forward and adjoint fields. Results obtained from this inversion method for high-contrast media are encouraging and provide a significant improvement over those obtained from linearized inversion methods.
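In one dimension the FDFD approach amounts to assembling a complex linear system for the Helmholtz equation, with an absorbing layer at each end standing in for the paper's anisotropic PML. This Python sketch uses simple complex coordinate stretching; the wavenumber, layer thickness, and stretch profile are illustrative assumptions, not values from the code:

```python
import numpy as np

# Sketch of the FDFD idea in 1D: u'' + k^2 u = f with complex-stretched
# absorbing layers approximating an infinite domain. Parameters illustrative.
n, dx, k = 400, 0.05, 2.0 * np.pi
npml = 40                               # absorbing-layer thickness (cells)

# stretch s(x) = 1 + 1j*sigma/k, with sigma ramping up inside the layers
sigma = np.zeros(n)
ramp = np.linspace(0, 1, npml) ** 2
sigma[:npml] = 25.0 * ramp[::-1]
sigma[-npml:] = 25.0 * ramp
s = 1.0 + 1j * sigma / k

# assemble (1/s) d/dx[(1/s) du/dx] + k^2 u = f with 2nd-order differences
A = np.zeros((n, n), dtype=complex)
for i in range(1, n - 1):
    sm = 0.5 * (s[i] + s[i - 1])        # stretch at half grid points
    sp = 0.5 * (s[i] + s[i + 1])
    A[i, i - 1] = 1.0 / (s[i] * sm * dx**2)
    A[i, i + 1] = 1.0 / (s[i] * sp * dx**2)
    A[i, i] = -(A[i, i - 1] + A[i, i + 1]) + k**2
A[0, 0] = A[-1, -1] = 1.0               # clamp the outermost nodes

f = np.zeros(n, dtype=complex)
f[n // 2] = 1.0 / dx                    # discrete point source
u = np.linalg.solve(A, f)
```

Away from the source the computed magnitude approaches the free-space value 1/(2k), and the field decays rapidly inside the stretched layers, mimicking an unbounded domain with a finite grid.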
Automated design of coupled RF cavities using 2-D and 3-D codes
Smith, Peter; Christiansen, D. W.; Greninger, P. T.; Spalek, G.
2001-01-01
Coupled RF cavities in the Accelerator Production of Tritium Project have been designed using a procedure in which a 2-D code (CCT) searches for a design that meets frequency and coupling requirements, while a 3-D code (HFSS) is used to obtain empirical factors used by CCT to characterize the coupling slot between cavities. Using assumed values of the empirical factors, CCT runs the Superfish code iteratively to solve for a trial cavity design that has a specified frequency and coupling. The frequency shifts and the coupling constant k of the slot are modeled in CCT using a perturbation theory, the results of which are adjusted using the empirical factors. Given a trial design, HFSS is run using periodic boundary conditions to obtain a mode spectrum. The mode spectrum is processed using the DISPER code to obtain values of the coupling and the frequencies with slots. These results are used to calculate a new set of empirical factors, which are fed back into CCT for another design iteration. Cold models have been fabricated and tested to validate the codes, and results will be presented.
NASA Astrophysics Data System (ADS)
Krebs, Isabel; Jardin, Stephen C.; Igochine, Valentin; Guenter, Sibylle; Hoelzl, Matthias; ASDEX Upgrade Team
2014-10-01
We study sawtooth reconnection in ASDEX Upgrade tokamak plasmas by means of 3D non-linear two-fluid MHD simulations in toroidal geometry using the high-order finite element code M3D-C1. The parameters and equilibrium of the simulations are based on typical sawtoothing ASDEX Upgrade discharges. The simulation results are compared to features of the experimental observations, such as the sawtooth crash time and frequency, the evolution of the safety factor profile, and the 3D evolution of the temperature. 2D ECE imaging measurements during sawtooth crashes in ASDEX Upgrade indicate that the heat is transported out of the core through a narrow, poloidally localized region. We also investigate whether the simulations show incomplete sawtooth reconnection, which is suggested by soft X-ray tomography measurements in ASDEX Upgrade showing that an (m = 1, n = 1) perturbation typically survives the sawtooth crash and approximately maintains its radial position.
Implementation of the 3D edge plasma code EMC3-EIRENE on NSTX
Lore, J. D.; Canik, J. M.; Feng, Y.; Ahn, J. -W.; Maingi, R.; Soukhanovskii, V.
2012-05-09
The 3D edge transport code EMC3-EIRENE has been applied for the first time to the NSTX spherical tokamak. A new disconnected double null grid has been developed to allow the simulation of plasma where the radial separation of the inner and outer separatrix is less than characteristic widths (e.g. heat flux width) at the midplane. Modelling results are presented for both an axisymmetric case and a case where a 3D magnetic field is applied in an n = 3 configuration. In the vacuum approximation, the perturbed field consists of a wide region of destroyed flux surfaces and helical lobes which are a mixture of long and short connection length field lines formed by the separatrix manifolds. This structure is reflected in coupled 3D plasma fluid (EMC3) and kinetic neutral particle (EIRENE) simulations. The helical lobes extending inside of the unperturbed separatrix are filled in by hot plasma from the core. The intersection of the lobes with the divertor results in a striated flux footprint pattern on the target plates. Finally, profiles of divertor heat and particle fluxes are compared with experimental data, and possible sources of discrepancy are discussed.
A 3D multi-block structured version of the KIVA 2 code
NASA Astrophysics Data System (ADS)
Habachi, C.; Torres, A.
A numerical procedure is developed in the KIVA 2 code for calculating flows in complex geometries. These geometries consist of an arbitrary number of 3D secondary domains connected at any angle to a main region. In this procedure, the governing equations are discretized on a system of partially overlapping structured grids. Calculations are performed in the different meshes of the computational domain, which are linked by a fully conservative algorithm. This numerical technique makes calculations in such geometries possible with a reasonable number of the inactive cells that a structured code like KIVA 2 entails. The algorithm was validated on a 1D analytical case and a 2D experimental case. It was then used for modeling an industrial problem: a two-stroke engine with ports and moving boundaries.
Newly-Developed 3D GRMHD Code and its Application to Jet Formation
NASA Technical Reports Server (NTRS)
Mizuno, Y.; Nishikawa, K.-I.; Koide, S.; Hardee, P.; Fishman, G. J.
2006-01-01
We have developed a new three-dimensional general relativistic magnetohydrodynamic (GRMHD) code using a conservative, high-resolution shock-capturing scheme. The numerical fluxes are calculated with the HLL approximate Riemann solver. The flux-interpolated constrained transport scheme is used to maintain a divergence-free magnetic field. We have performed various one-dimensional test problems in both special and general relativity using several reconstruction methods and found that the new 3D GRMHD code shows substantial improvements over our previous model. Preliminary results show jet formation from a geometrically thin accretion disk near non-rotating and rotating black holes. We will discuss the dependence of the jet properties on the rotation of the black hole and the magnetic field strength.
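The HLL flux used by such schemes has a compact closed form. The sketch below applies it to the scalar Burgers equation rather than relativistic MHD, with simple min/max wave-speed estimates; both choices are illustrative stand-ins for the paper's solver:

```python
def hll_flux(uL, uR, f, wave_speeds):
    """HLL approximate Riemann flux for a 1D scalar conservation law."""
    sL, sR = wave_speeds(uL, uR)
    if sL >= 0:                  # all waves move right: take the left flux
        return f(uL)
    if sR <= 0:                  # all waves move left: take the right flux
        return f(uR)
    # subsonic case: single intermediate state between the two waves
    return (sR * f(uL) - sL * f(uR) + sL * sR * (uR - uL)) / (sR - sL)

# Burgers' equation f(u) = u^2 / 2 as the test vehicle
f = lambda u: 0.5 * u * u
speeds = lambda uL, uR: (min(uL, uR), max(uL, uR))

F = hll_flux(1.0, 0.0, f, speeds)      # right-moving shock
```

For the right-moving shock (uL = 1, uR = 0) the HLL flux reproduces the exact interface flux of 0.5; for a transonic rarefaction it is diffusive but stable, the usual trade-off of two-wave approximate solvers.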
3D thermo-chemical-mechanical simulation of power ramps with ALCYONE fuel code
NASA Astrophysics Data System (ADS)
Baurens, B.; Sercombe, J.; Riglet-Martial, C.; Desgranges, L.; Trotignon, L.; Maugis, P.
2014-09-01
This paper presents the coupling of the fuel performance code ALCYONE with the thermochemical code ANGE and its application to Iodine-Stress Corrosion Cracking (I-SCC). The coupling is illustrated by a 3D simulation of a power ramp. The release of chemically active gases (CsI(g), Tex(1
Radiation Coupling with the FUN3D Unstructured-Grid CFD Code
NASA Technical Reports Server (NTRS)
Wood, William A.
2012-01-01
The HARA radiation code is fully-coupled to the FUN3D unstructured-grid CFD code for the purpose of simulating high-energy hypersonic flows. The radiation energy source terms and surface heat transfer, under the tangent slab approximation, are included within the fluid dynamic flow solver. The Fire II flight test, at the Mach-31 1643-second trajectory point, is used as a demonstration case. Comparisons are made with an existing structured-grid capability, the LAURA/HARA coupling. The radiative surface heat transfer rates from the present approach match the benchmark values within 6%. Although radiation coupling is the focus of the present work, convective surface heat transfer rates are also reported, and are seen to vary depending upon the choice of mesh connectivity and FUN3D flux reconstruction algorithm. On a tetrahedral-element mesh the convective heating matches the benchmark at the stagnation point, but under-predicts by 15% on the Fire II shoulder. Conversely, on a mixed-element mesh the convective heating over-predicts at the stagnation point by 20%, but matches the benchmark away from the stagnation region.
Quantitative analysis of accuracy of seismic wave-propagation codes in 3D random scattering media
NASA Astrophysics Data System (ADS)
Galis, Martin; Imperatori, Walter; Mai, P. Martin
2013-04-01
Several recent verification studies (e.g. Day et al., 2001; Bielak et al., 2010; Chaljub et al., 2010) have demonstrated the importance of assessing the accuracy of available numerical tools at low frequency in the presence of large-scale features (basins, topography, etc.). The fast progress in high-performance computing, including efficient optimization of numerical codes on petascale supercomputers, has permitted the simulation of 3D seismic wave propagation at frequencies of engineering interest (up to 10 Hz) in highly heterogeneous media (e.g. Hartzell et al., 2010; Imperatori and Mai, 2013). However, high-frequency numerical simulations involving random scattering media, characterized by small-scale heterogeneities, are much more challenging for most numerical methods, and their verification may therefore be even more crucial than in the low-frequency case. Our goal is to quantitatively compare the accuracy and the behavior of three different numerical codes for seismic wave propagation in 3D random scattering media at high frequency. We deploy a point source with an omega-squared spectrum, and focus on the near-source region, which is of great interest in strong motion seismology. We use two codes based on the finite-difference method (FD1 and FD2) and one code based on the support-operator method (SO). Both FD1 and FD2 are 4th-order staggered-grid finite-difference codes (for FD1 see Olsen et al., 2009; for FD2 see Moczo et al., 2007). The FD1 and FD2 codes are characterized by slightly different medium representations, since FD1 uses point values of material parameters in each FD cell, while FD2 uses effective material parameters at each grid point (Moczo et al., 2002). SO is a 2nd-order support-operator method (Ely et al., 2008). We considered models with random velocity perturbations described by a von Kármán correlation function with different correlation lengths and different standard deviations. Our results show significant variability in both phase and amplitude as
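Random velocity perturbations with an (approximately) von Kármán spectrum are commonly built by filtering white noise in the wavenumber domain. A 1D Python sketch under that assumption; the correlation length, Hurst exponent, 5% standard deviation, and 3000 m/s background are illustrative, not the study's values:

```python
import numpy as np

# Sketch: random velocity model with an approximately von Karman spectrum,
# built by shaping white noise in the wavenumber domain (1D illustration).
rng = np.random.default_rng(0)
n, dx = 256, 0.1
a, H, sigma = 2.0, 0.3, 0.05          # corr. length, Hurst exponent, std

k = np.fft.fftfreq(n, d=dx) * 2 * np.pi
# 1D von Karman amplitude spectrum ~ (1 + k^2 a^2)^{-(H/2 + 1/4)}
amp = (1.0 + (k * a) ** 2) ** (-(H / 2 + 0.25))

noise = rng.standard_normal(n)
field = np.real(np.fft.ifft(np.fft.fft(noise) * amp))
field *= sigma / field.std()           # rescale to target perturbation std

velocity = 3000.0 * (1.0 + field)      # perturbed background velocity (m/s)
```

The perturbation standard deviation is pinned exactly by the final rescaling, while the correlation length enters through the corner wavenumber 1/a of the shaping filter.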
NASA Astrophysics Data System (ADS)
Inogamov, Nail A.; Zhakhovsky, Vasily V.
2016-02-01
There are many important applications in which ultrashort, diffraction-limited and therefore tightly focused laser pulses irradiate metal films mounted on a dielectric substrate. Here we present a detailed picture of laser peeling and 3D structure formation in thin gold films (thin relative to the depth of the heat-affected zone in bulk targets) on a glass substrate. The underlying physics of such diffraction-limited laser peeling was not well understood previously. Our approach is based on a physical model which takes into consideration new calculations of the two-temperature (2T) equation of state (2T EoS) and the two-temperature transport coefficients, together with the coupling parameter between the electron and ion subsystems. The use of the 2T EoS and the kinetic coefficients is required because absorption of an ultrashort pulse with a duration of 10-1000 fs excites the electron subsystem of the metal and transfers the substance into the 2T state, with hot electrons (typical electron temperatures 1-3 eV) and much colder ions. It is shown that formation of submicrometer-sized 3D structures is a result of the electron-ion energy transfer, melting, and delamination of the film from the substrate under the combined action of electron and ion pressures, capillary deceleration of the delaminated liquid metal or semiconductor, and ultrafast freezing of the molten material. We found that the freezing proceeds in a non-equilibrium regime with a strongly overcooled liquid phase. In this case the Stefan approximation is not applicable because the solidification front speed is limited by the diffusion rate of atoms in the molten material. To solve the problem we have developed a 2T Lagrangian code including all this rich physics. We also used a high-performance combined Monte Carlo and molecular dynamics code for simulation of surface 3D nanostructuring at later times, after completion of the electron-ion relaxation.
Alignment of 3D Building Models and TIR Video Sequences with Line Tracking
NASA Astrophysics Data System (ADS)
Iwaszczuk, D.; Stilla, U.
2014-11-01
Thermal infrared imagery of urban areas has become interesting for urban climate investigations and thermal building inspections. Using a flying platform such as a UAV or a helicopter for the acquisition, and combining the thermal data with 3D building models via texturing, delivers a valuable groundwork for large-area building inspections. However, such thermal textures are useful for further analysis only if they are extracted geometrically correctly. This requires a good coregistration between the 3D building models and the thermal images, which cannot be achieved by direct georeferencing. Hence, this paper presents a methodology for the alignment of 3D building models and oblique TIR image sequences taken from a flying platform. In a single image, line correspondences between model edges and image line segments are found using an accumulator approach, and based on these correspondences an optimal camera pose is calculated to ensure the best match between the projected model and the image structures. Across the sequence, the linear features are tracked based on visibility prediction. The results of the proposed methodology are presented using a TIR image sequence taken from a helicopter over a densely built-up urban area. The novelty of this work lies in employing the uncertainty of the 3D building models and in an innovative tracking strategy based on a priori knowledge from the 3D building model and visibility checking.
Visual storytelling in 2D and stereoscopic 3D video: effect of blur on visual attention
NASA Astrophysics Data System (ADS)
Huynh-Thu, Quan; Vienne, Cyril; Blondé, Laurent
2013-03-01
Visual attention is an inherent mechanism that plays an important role in human visual perception. As our visual system has limited capacity and cannot efficiently process the information from the entire visual field, we focus our attention on specific areas of interest in the image for detailed analysis of those areas. In the context of media entertainment, the viewers' visual attention deployment is also influenced by the art of visual storytelling. To date, visual editing and composition of scenes in stereoscopic 3D content creation still mostly follow those used in 2D. In particular, out-of-focus blur is often used in 2D motion pictures and photography to drive the viewer's attention towards a sharp area of the image. In this paper, we study specifically the impact of defocused foreground objects on visual attention deployment in stereoscopic 3D content. For that purpose, we conducted a subjective experiment using an eye tracker. Our results bring more insight into the deployment of visual attention when viewing stereoscopic 3D content, and provide further understanding of the differences in visual attention behavior between 2D and 3D. Our results show that a traditional 2D scene compositing approach such as the use of foreground blur does not necessarily produce the same effect on visual attention deployment in 2D and 3D. Implications for stereoscopic content creation and visual fatigue are discussed.
3-D TECATE/BREW: Thermal, stress, and birefringent ray-tracing codes for solid-state laser design
Gelinas, R.J.; Doss, S.K.; Nelson, R.G.
1994-07-20
This report describes the physics, code formulations, and numerics that are used in the TECATE (totally Eulerian code for anisotropic thermo-elasticity) and BREW (birefringent ray-tracing of electromagnetic waves) codes for laser design. These codes resolve thermal, stress, and birefringent optical effects in 3-D stationary solid-state systems. This suite of three constituent codes is a package referred to as LASRPAK.
Development and preliminary verification of the 3D core neutronic code: COCO
Lu, H.; Mo, K.; Li, W.; Bai, N.; Li, J.
2012-07-01
Driven by its recent booming economic growth and the environmental concerns that follow, China is proactively pushing forward nuclear power development and encouraging the tapping of clean energy. Under this situation, CGNPC, as one of the largest energy enterprises in China, is planning to develop its own nuclear technology in order to support the growing number of nuclear plants either under construction or in operation. This paper introduces the recent progress in software development at CGNPC. The focus is placed on the physical models and preliminary verification results from the recent development of the 3D core neutronic code COCO. In the COCO code, the non-linear Green's function method is employed to calculate the neutron flux. In order to use the discontinuity factor, the Neumann (second kind) boundary condition is utilized in the Green's function nodal method. Additionally, the COCO code also includes the necessary physical models, e.g. a single-channel thermal-hydraulic module, a burnup module, a pin power reconstruction module and a cross-section interpolation module. The preliminary verification results show that the COCO code is sufficient for reactor core design and analysis for pressurized water reactors (PWR). (authors)
A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.
Yu, Jun; Wang, Zeng-Fu
2015-05-01
A multiple-input-driven realistic facial animation system based on a 3-D virtual head for human-machine interfaces is proposed. The system can be driven independently by video, text, and speech, and thus can interact with humans through diverse interfaces. A combination of a parameterized model and a muscular model is used to obtain a tradeoff between computational efficiency and high realism of the 3-D facial animation. An online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., the pixel color values of the input image and the Gabor wavelet coefficients of the illumination ratio image, are fused to reduce the influence of lighting and person dependence in the construction of the online appearance model. A tri-phone model is used to reduce the computational cost of visual co-articulation in speech-synchronized viseme synthesis without sacrificing performance. Objective and subjective experiments show that the system is suitable for human-machine interaction. PMID:25122851
Real-Depth imaging: a new (no glasses) 3D imaging technology with video/data projection applications
NASA Astrophysics Data System (ADS)
Dolgoff, Eugene
1997-05-01
Floating Images, Inc. has developed the software and hardware for a new, patent-pending, 'floating 3D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3D graphics' technologies actually produce flat on-screen images. Floating Images technology actually produces images at different depths from any display, such as a CRT or LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax and accommodation which coincides with convergence. Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye strain, regardless of how long they are viewed. The imagery must either be formatted for the Floating Images platform when written, or existing software can be reformatted without much difficulty. The optical hardware system can be made to accommodate virtually any projection system to produce Floating Images for the boardroom, video arcade, stage shows, or the classroom.
Film grain noise modeling in advanced video coding
NASA Astrophysics Data System (ADS)
Oh, Byung Tae; Kuo, C.-C. Jay; Sun, Shijun; Lei, Shawmin
2007-01-01
A new technique for film grain noise extraction, modeling and synthesis is proposed and applied to the coding of high definition video in this work. The film grain noise is viewed as a part of artistic presentation by people in the movie industry. On one hand, since the film grain noise can boost the natural appearance of pictures in high definition video, it should be preserved in high-fidelity video processing systems. On the other hand, video coding with film grain noise is expensive. It is desirable to extract film grain noise from the input video as a pre-processing step at the encoder and re-synthesize the film grain noise and add it back to the decoded video as a post-processing step at the decoder. Under this framework, the coding gain of the denoised video is higher while the quality of the final reconstructed video can still be well preserved. Following this idea, we present a method to remove film grain noise from image/video without distorting its original content. Besides, we describe a parametric model containing a small set of parameters to represent the extracted film grain noise. The proposed model generates the film grain noise that is close to the real one in terms of power spectral density and cross-channel spectral correlation. Experimental results are shown to demonstrate the efficiency of the proposed scheme.
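The abstract does not specify the parametric grain model, so as an illustrative sketch (the AR(1) form and all parameter names here are assumptions, not the authors' actual model), spatially correlated film-grain-like noise can be synthesized by filtering white noise through a first-order autoregressive recursion whose coefficient shapes the power spectral density:

```python
import math
import random

def synthesize_grain(n, rho=0.6, sigma=5.0, seed=42):
    """Generate n samples of AR(1) film-grain-like noise.

    rho   -- correlation coefficient (shapes the power spectral density)
    sigma -- target standard deviation of the grain
    """
    rng = random.Random(seed)
    # Innovation scale chosen so the stationary variance equals sigma^2.
    innov = sigma * math.sqrt(1.0 - rho * rho)
    g = [0.0] * n
    for i in range(1, n):
        g[i] = rho * g[i - 1] + innov * rng.gauss(0.0, 1.0)
    return g

grain = synthesize_grain(10000)
```

In such a scheme, the decoder would re-synthesize grain from a few transmitted parameters (here `rho` and `sigma`) and add it back to the decoded video, which is what makes the parametric representation cheap to code.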
ERIC Educational Resources Information Center
Smith, Dennie; McLaughlin, Tim; Brown, Irving
2012-01-01
This study explored computer animation vignettes as a replacement for live-action video scenarios of classroom behavior situations previously used as an instructional resource in teacher education courses in classroom management strategies. The focus of the research was to determine if the embedded behavioral information perceived in a live-action…
Cheong, Fook Chiong; Wong, Chui Ching; Gao, YunFeng; Nai, Mui Hoon; Cui, Yidan; Park, Sungsu; Kenney, Linda J.; Lim, Chwee Teck
2015-01-01
Tracking fast-swimming bacteria in three dimensions can be extremely challenging with current optical techniques and a microscopic approach that can rapidly acquire volumetric information is required. Here, we introduce phase-contrast holographic video microscopy as a solution for the simultaneous tracking of multiple fast moving cells in three dimensions. This technique uses interference patterns formed between the scattered and the incident field to infer the three-dimensional (3D) position and size of bacteria. Using this optical approach, motility dynamics of multiple bacteria in three dimensions, such as speed and turn angles, can be obtained within minutes. We demonstrated the feasibility of this method by effectively tracking multiple bacteria species, including Escherichia coli, Agrobacterium tumefaciens, and Pseudomonas aeruginosa. In addition, we combined our fast 3D imaging technique with a microfluidic device to present an example of a drug/chemical assay to study effects on bacterial motility. PMID:25762336
Hardware-based JPEG 2000 video coding system
NASA Astrophysics Data System (ADS)
Schuchter, Arthur R.; Uhl, Andreas
2007-02-01
In this paper, we discuss a hardware-based low-complexity JPEG 2000 video coding system. The hardware system is based on a software simulation system in which temporal redundancy is exploited by coding differential frames arranged in an adaptive GOP structure, where the GOP structure itself is determined by statistical analysis of the differential frames. We present a hardware video coding architecture which implements this inter-frame coding system on a digital signal processor (DSP). The system consists mainly of a microprocessor (ADSP-BF533 Blackfin processor) and a JPEG 2000 chip (ADV202).
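The abstract does not give the statistical test that drives the adaptive GOP structure; a minimal sketch of the general idea (the mean-squared-difference measure and the threshold value are assumptions) is to start a new GOP with an intra-coded frame whenever the differential frame would carry too much energy:

```python
def plan_gop(frames, threshold=100.0):
    """Classify each frame as 'I' (intra) or 'D' (differential).

    frames    -- list of equal-length pixel intensity lists
    threshold -- mean-squared-difference level that triggers a new GOP
    """
    plan = ["I"]  # the first frame is always coded intra
    for prev, cur in zip(frames, frames[1:]):
        msd = sum((a - b) ** 2 for a, b in zip(prev, cur)) / len(cur)
        plan.append("I" if msd > threshold else "D")
    return plan

# A scene cut between the 2nd and 3rd frames forces a new intra frame.
frames = [[10] * 64, [12] * 64, [200] * 64, [201] * 64]
print(plan_gop(frames))  # ['I', 'D', 'I', 'D']
```

Frames flagged 'D' are coded as differences against the previous frame, which is where the JPEG 2000 intra coder recovers the temporal redundancy.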
An HEVC extension for spatial and quality scalable video coding
NASA Astrophysics Data System (ADS)
Hinz, Tobias; Helle, Philipp; Lakshman, Haricharan; Siekmann, Mischa; Stegemann, Jan; Schwarz, Heiko; Marpe, Detlev; Wiegand, Thomas
2013-02-01
This paper describes an extension of the upcoming High Efficiency Video Coding (HEVC) standard for supporting spatial and quality scalable video coding. Besides scalable coding tools known from scalable profiles of prior video coding standards such as H.262/MPEG-2 Video and H.264/MPEG-4 AVC, the proposed scalable HEVC extension includes new coding tools that further improve the coding efficiency of the enhancement layer. In particular, new coding modes by which base and enhancement layer signals are combined for forming an improved enhancement layer prediction signal have been added. All scalable coding tools have been integrated in a way that the low-level syntax and decoding process of HEVC remain unchanged to a large extent. Simulation results for typical application scenarios demonstrate the effectiveness of the proposed design. For spatial and quality scalable coding with two layers, bit-rate savings of about 20-30% have been measured relative to simulcasting the layers, which corresponds to a bit-rate overhead of about 5-15% relative to single-layer coding of the enhancement layer.
Simulation of a Synthetic Jet in Quiescent Air Using TLNS3D Flow Code
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Turkel, Eli
2007-01-01
Although the actuator geometry is highly three-dimensional, the outer flowfield is nominally two-dimensional because of the high aspect ratio of the rectangular slot. For the present study, this configuration is modeled as a two-dimensional problem. A multi-block structured grid available at the CFDVAL2004 website is used as a baseline grid. The periodic motion of the diaphragm is simulated by specifying a sinusoidal velocity at the diaphragm surface with a frequency of 450 Hz, corresponding to the experimental setup. The amplitude is chosen so that the maximum Mach number at the jet exit is approximately 0.1, to replicate the experimental conditions. At the solid walls, zero-slip, zero-injection, adiabatic-temperature and zero-pressure-gradient conditions are imposed. In the external region, symmetry conditions are imposed on the side (vertical) boundaries and far-field conditions are imposed on the top boundary. A nominal free-stream Mach number of 0.001 is imposed in the free stream to simulate incompressible flow conditions in the TLNS3D code, which solves the compressible flow equations. The code was run in unsteady (URANS) mode until periodicity was established. The time-mean quantities were obtained by running the code for at least another 15 periods and averaging the flow quantities over these periods. The phase-locked averages of the flow quantities were assumed to be coincident with their values during the last full time period.
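The period-averaging described above can be sketched as follows (the signal and sampling here are synthetic stand-ins, not TLNS3D data): samples taken at the same phase in successive periods are averaged, so contributions that are not locked to the forcing frequency cancel out.

```python
import math

def phase_average(samples, samples_per_period):
    """Average a periodic signal over all complete periods, phase by phase."""
    n_periods = len(samples) // samples_per_period
    avg = [0.0] * samples_per_period
    for p in range(n_periods):
        for k in range(samples_per_period):
            avg[k] += samples[p * samples_per_period + k]
    return [a / n_periods for a in avg]

# Synthetic example: a sine sampled 32 times per period over 15 periods,
# with an alternating perturbation that nearly averages out across periods.
spp, periods = 32, 15
samples = [math.sin(2 * math.pi * k / spp) + 0.1 * (-1) ** p
           for p in range(periods) for k in range(spp)]
mean_cycle = phase_average(samples, spp)
```

Averaging over more periods drives the recovered cycle closer to the underlying sine, mirroring the way the time-mean and phase-locked quantities were extracted from the URANS run.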
Code verification for unsteady 3-D fluid-solid interaction problems
NASA Astrophysics Data System (ADS)
Yu, Kintak Raymond; Étienne, Stéphane; Hay, Alexander; Pelletier, Dominique
2015-12-01
This paper describes a procedure to synthesize Manufactured Solutions for Code Verification of an important class of Fluid-Structure Interaction (FSI) problems whose behavior can be modeled as rigid-body vibrations in incompressible fluids. We refer to this class of FSI problems as Fluid-Solid Interaction problems; they can be found in many practical engineering applications. The methodology can be utilized to develop Manufactured Solutions for both 2-D and 3-D cases. We demonstrate the procedure with our numerical code, present details of the formulation and methodology, and provide the reasoning behind our proposed approach. Results from grid and time-step refinement studies confirm the verification of our solver and demonstrate the versatility of the simple synthesis procedure. In addition, the results also demonstrate that the modified decoupled approach used to verify flow problems with high-order time-stepping schemes can be employed equally well to verify code for multi-physics problems (here, those of Fluid-Solid Interaction) when the numerical discretization is based on the Method of Lines.
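The essence of code verification by refinement studies can be shown with a toy example (deliberately much simpler than the authors' FSI solver): an exact solution u(x) = sin(x) is manufactured for -u'' = f by choosing f = sin(x), the problem is solved on two grids with a second-order finite-difference scheme, and the observed convergence order is recovered from the error ratio.

```python
import math

def solve_poisson(n):
    """Solve -u'' = sin(x) on [0, pi], u(0) = u(pi) = 0, with n interior points.

    Uses the standard second-order central difference and the Thomas
    algorithm for the tridiagonal system; returns the max-norm error
    against the manufactured solution u(x) = sin(x).
    """
    h = math.pi / (n + 1)
    x = [h * (i + 1) for i in range(n)]
    f = [math.sin(xi) * h * h for xi in x]   # right-hand side scaled by h^2
    # Tridiagonal system: -u[i-1] + 2*u[i] - u[i+1] = h^2 * f[i]
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = -0.5, f[0] / 2.0
    for i in range(1, n):
        m = 2.0 + c[i - 1]
        c[i] = -1.0 / m
        d[i] = (f[i] + d[i - 1]) / m
    u = [0.0] * n
    u[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    return max(abs(ui - math.sin(xi)) for ui, xi in zip(u, x))

e_coarse, e_fine = solve_poisson(32), solve_poisson(64)
order = math.log(e_coarse / e_fine) / math.log(65.0 / 33.0)
print(round(order, 1))  # observed order, close to the scheme's formal order 2
```

Recovering the formal order of accuracy under refinement is exactly the evidence the paper uses to declare a solver verified; the Manufactured Solutions machinery generalizes the "pick u, derive f" step to coupled fluid-solid equations.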
A 3D Parallel Beam Dynamics Code for Modeling High Brightness Beams in Photoinjectors
Qiang, Ji; Lidia, S.; Ryne, R.D.; Limborg, C.; /SLAC
2006-02-13
In this paper we report on IMPACT-T, a 3D beam dynamics code for modeling high brightness beams in photoinjectors and rf linacs. IMPACT-T is one of the few codes used in the photoinjector community that has a parallel implementation, making it very useful for high statistics simulations of beam halos and beam diagnostics. It has a comprehensive set of beamline elements, and furthermore allows arbitrary overlap of their fields. It is unique in its use of space-charge solvers based on an integrated Green function to efficiently and accurately treat beams with large aspect ratio, and a shifted Green function to efficiently treat image charge effects of a cathode. It is also unique in its inclusion of energy binning in the space-charge calculation to model beams with large energy spread. Together, all these features make IMPACT-T a powerful and versatile tool for modeling beams in photoinjectors and other systems. In this paper we describe the code features and present results of IMPACT-T simulations of the LCLS photoinjectors. We also include a comparison of IMPACT-T and PARMELA results.
A 3D Parallel Beam Dynamics Code for Modeling High Brightness Beams in Photoinjectors
Qiang, J.; Lidia, S.; Ryne, R.; Limborg, C.
2005-05-16
In this paper we report on IMPACT-T, a 3D beam dynamics code for modeling high brightness beams in photoinjectors and rf linacs. IMPACT-T is one of the few codes used in the photoinjector community that has a parallel implementation, making it very useful for high statistics simulations of beam halos and beam diagnostics. It has a comprehensive set of beamline elements, and furthermore allows arbitrary overlap of their fields. It is unique in its use of space-charge solvers based on an integrated Green function to efficiently and accurately treat beams with large aspect ratio, and a shifted Green function to efficiently treat image charge effects of a cathode. It is also unique in its inclusion of energy binning in the space-charge calculation to model beams with large energy spread. Together, all these features make IMPACT-T a powerful and versatile tool for modeling beams in photoinjectors and other systems. In this paper we describe the code features and present results of IMPACT-T simulations of the LCLS photoinjectors. We also include a comparison of IMPACT-T and PARMELA results.
Liner Optimization Studies Using the Ducted Fan Noise Prediction Code TBIEM3D
NASA Technical Reports Server (NTRS)
Dunn, M. H.; Farassat, F.
1998-01-01
In this paper we demonstrate the usefulness of the ducted fan noise prediction code TBIEM3D as a liner optimization design tool. Boundary conditions on the interior duct wall allow for hard walls or a locally reacting liner with axially segmented, circumferentially uniform impedance. Two liner optimization studies are considered in which farfield noise attenuation due to the presence of a liner is maximized by adjusting the liner impedance. In the first example, the dependence of optimal liner impedance on frequency and liner length is examined. Results show that both the optimal impedance and attenuation levels are significantly influenced by liner length and frequency. In the second example, TBIEM3D is used to compare radiated sound pressure levels between optimal and non-optimal liner cases at conditions designed to simulate take-off. It is shown that significant noise reduction is achieved for most of the sound field by selecting the optimal or near-optimal liner impedance. Our results also indicate that there is a relatively large region of the impedance plane over which optimal or near-optimal liner behavior is attainable. This is an important conclusion for the designer since there are variations in liner characteristics due to manufacturing imprecision.
GATOR: A 3-D time-dependent simulation code for helix TWTs
Zaidman, E.G.; Freund, H.P.
1996-12-31
A 3D nonlinear analysis of helix TWTs is presented. The analysis and simulation code is based upon a spectral decomposition using the vacuum sheath helix modes. The field equations are integrated on a grid and advanced in time using a MacCormack predictor-corrector scheme, and the electron orbit equations are integrated using a fourth order Runge-Kutta algorithm. Charge is accumulated on the grid and the field is interpolated to the particle location by a linear map. The effect of dielectric liners on the vacuum sheath helix dispersion is included in the analysis. Several numerical cases are considered. Simulation of the injection of a DC beam and a signal at a single frequency is compared with a linear field theory of the helix TWT interaction, and good agreement is found.
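The fourth-order Runge-Kutta orbit integration mentioned above is the textbook RK4 scheme; as a generic sketch (not GATOR's actual implementation), it can be demonstrated on a particle in a simple harmonic field, for which the exact solution is known:

```python
import math

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + dt, [yi + dt * ki for yi, ki in zip(y, k3)])
    return [yi + dt / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Harmonic oscillator x'' = -x, written as the first-order system (x, v).
def oscillator(t, y):
    x, v = y
    return [v, -x]

y, t, dt = [1.0, 0.0], 0.0, 0.01
for _ in range(628):           # integrate to t ~ 2*pi (one full period)
    y = rk4_step(oscillator, t, y, dt)
    t += dt
# After one full period, x should return very close to its initial value 1.
```

In a particle-in-cell code such as GATOR, `f` would evaluate the Lorentz force from the fields interpolated to the particle position, and the same step would advance every macroparticle each time step.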
Semantic-preload video model based on VOP coding
NASA Astrophysics Data System (ADS)
Yang, Jianping; Zhang, Jie; Chen, Xiangjun
2013-03-01
In recent years, in order to reduce the semantic gap that exists between high-level semantics and the low-level features of video when humans interpret images or video, most efforts have tried video annotation downstream of the signal, i.e., attaching labels (again) to content already stored in a video database. Few people have focused on the alternative idea: use limited interaction and comprehensive segmentation means (including optical technologies) at the front end of video information collection (i.e., the video camera), together with video semantics analysis technology, concept sets (i.e., ontologies) belonging to a certain domain, the story shooting script, the task description of scene shooting, etc.; apply semantic descriptions at different levels to enrich the attributes of video objects and image regions, thereby forming a new video model based on Video Object Plane (VOP) coding. This model has potentially intelligent features, carries a large amount of metadata, and embeds intermediate-level semantic concepts into every object. This paper focuses on the latter approach and presents a framework for the new video model, provisionally named the "Semantic-Preload Video Model" (SPVM, also written VMoSP). The model mainly researches how to label video objects and image regions in real time, usually with intermediate-level semantic labels, placing this work upstream of the signal (i.e., in the video capture and production stage). To support this, the paper also analyzes the hierarchical structure of video, dividing it into nine semantic levels that are involved only in the video production process. In addition, the paper points out that the semantic-level tagging work (i.e., semantic preloading) refers only to the four middle semantic levels. All in
A 3-D nonlinear recursive digital filter for video image processing
NASA Technical Reports Server (NTRS)
Bauer, P. H.; Qian, W.
1991-01-01
This paper introduces a recursive 3-D nonlinear digital filter, which is capable of performing noise suppression without degrading important image information such as edges in space or time. It also has the property of unnoticeable bandwidth reduction immediately after a scene change, which makes the filter an attractive preprocessor to many interframe compression algorithms. The filter consists of a nonlinear 2-D spatial subfilter and a 1-D temporal filter. In order to achieve the required computational speed and increase the flexibility of the filter, all of the linear shift-variant filter modules are of the IIR type.
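The temporal part of such a filter can be sketched as a motion-adaptive first-order IIR recursion (an illustrative stand-in, not the authors' exact filter): each pixel is blended with its previous filtered value unless the frame-to-frame change is large, in which case the new value passes through so that edges in time (motion, scene changes) are not smeared.

```python
def temporal_filter(frames, alpha=0.25, motion_threshold=30):
    """Motion-adaptive first-order temporal IIR filter.

    frames -- list of frames, each a list of pixel intensities
    alpha  -- smoothing weight given to the incoming pixel
    """
    out = [list(map(float, frames[0]))]
    for frame in frames[1:]:
        prev = out[-1]
        filtered = []
        for x, y in zip(frame, prev):
            if abs(x - y) > motion_threshold:
                filtered.append(float(x))             # change detected: pass through
            else:
                filtered.append(y + alpha * (x - y))  # static area: smooth noise
        out.append(filtered)
    return out

# A static pixel with small noise is smoothed; a sudden jump is preserved.
frames = [[100], [104], [96], [200]]
result = temporal_filter(frames)
```

The reduced frame-to-frame variance in static regions is also what yields smaller differential frames, which is why such a filter makes an attractive preprocessor for interframe compression.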
2013-06-24
Version 07 TART2012 is a coupled neutron-photon Monte Carlo transport code designed to use three-dimensional (3-D) combinatorial geometry. Neutron and/or photon sources as well as neutron induced photon production can be tracked. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2012 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2012 extends the general utility of the code to even more areas of application than available in previous releases by concentrating on improving the physics, particularly with regard to improved treatment of neutron fission, resonance self-shielding, molecular binding, and extending input options used by the code. Several utilities are included for creating input files and displaying TART results and data. TART2012 uses the latest ENDF/B-VI, Release 8, data. New for TART2012 is the use of continuous energy neutron cross sections, in addition to its traditional multigroup cross sections. For neutron interaction, the data are derived using ENDF-ENDL2005 and include both continuous energy cross sections and 700 group neutron data derived using a combination of ENDF/B-VI, Release 8, and ENDL data. The 700 group structure extends from 10-5 eV up to 1 GeV. Presently nuclear data are only available up to 20 MeV, so that only 616 of the groups are currently used. For photon interaction, 701 point photon data were derived using the Livermore EPDL97 file. The new 701 point structure extends from 100 eV up to 1 GeV, and is currently used over this entire energy range. TART2012 completely supersedes all older versions of TART, and it is strongly recommended that one use only the most recent version of TART2012 and its data files. Check the authors' homepage for related information: http
Validation of Heat Transfer and Film Cooling Capabilities of the 3-D RANS Code TURBO
NASA Technical Reports Server (NTRS)
Shyam, Vikram; Ameri, Ali; Chen, Jen-Ping
2010-01-01
The capabilities of the 3-D unsteady RANS code TURBO have been extended to include heat transfer and film cooling applications. The results of simulations performed with the modified code are compared to experiment and to theory, where applicable. Wilcox's k-ω turbulence model has been implemented to close the RANS equations. Two simulations are conducted: (1) flow over a flat plate and (2) flow over an adiabatic flat plate cooled by one hole inclined at 35° to the free stream. For (1), agreement with theory is found to be excellent for heat transfer, represented by the local Nusselt number, and quite good for momentum, as represented by the local skin friction coefficient. This report compares the local skin friction coefficients and Nusselt numbers on a flat plate obtained using Wilcox's k-ω model with the theory of Blasius. The study looks at laminar and turbulent flows over an adiabatic flat plate and over an isothermal flat plate at two different wall temperatures. It is shown that TURBO is able to accurately predict heat transfer on a flat plate. For (2), TURBO shows good qualitative agreement with film cooling experiments performed on a flat plate with one cooling hole. Quantitatively, film effectiveness is underpredicted downstream of the hole.
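The laminar flat-plate reference values used in such comparisons come from the Blasius similarity solution; a quick sketch (standard textbook correlations, not TURBO output) computes the local skin friction coefficient and Nusselt number at a given local Reynolds number:

```python
import math

def blasius_local(re_x, pr=0.71):
    """Laminar flat-plate local skin friction and Nusselt number.

    Blasius similarity solution: Cf   = 0.664 / sqrt(Re_x)
    Isothermal plate:            Nu_x = 0.332 * Re_x**0.5 * Pr**(1/3)
    """
    cf = 0.664 / math.sqrt(re_x)
    nu = 0.332 * math.sqrt(re_x) * pr ** (1.0 / 3.0)
    return cf, nu

# Air (Pr ~ 0.71) at a station where Re_x = 1e5, well below transition.
cf, nu = blasius_local(1.0e5)
print(f"Cf = {cf:.5f}, Nu_x = {nu:.1f}")
```

Plotting these correlations against the code's local Cf and Nu_x along the plate is the verification comparison the abstract describes for case (1).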
Quantum self-correction in the 3D cubic code model.
Bravyi, Sergey; Haah, Jeongwan
2013-11-15
A big open question in quantum information theory concerns the feasibility of a self-correcting quantum memory. A quantum state recorded in such memory can be stored reliably for a macroscopic time without need for active error correction, if the memory is in contact with a cold enough thermal bath. Here we report analytic and numerical evidence for self-correcting behavior in the quantum spin lattice model known as the 3D cubic code. We prove that its memory time is at least L^(cβ), where L is the lattice size, β is the inverse temperature of the bath, and c>0 is a constant coefficient. However, this bound applies only if the lattice size L does not exceed a critical value which grows exponentially with β. In that sense, the model can be called a partially self-correcting memory. We also report a Monte Carlo simulation indicating that our analytic bounds on the memory time are tight up to constant coefficients. To model the readout step we introduce a new decoding algorithm, which can be implemented efficiently for any topological stabilizer code. A longer version of this work can be found in Bravyi and Haah, arXiv:1112.3252. PMID:24289671
Quantum Self-Correction in the 3D Cubic Code Model
NASA Astrophysics Data System (ADS)
Bravyi, Sergey; Haah, Jeongwan
2013-11-01
A big open question in quantum information theory concerns the feasibility of a self-correcting quantum memory. A quantum state recorded in such memory can be stored reliably for a macroscopic time without need for active error correction, if the memory is in contact with a cold enough thermal bath. Here we report analytic and numerical evidence for self-correcting behavior in the quantum spin lattice model known as the 3D cubic code. We prove that its memory time is at least L^(cβ), where L is the lattice size, β is the inverse temperature of the bath, and c>0 is a constant coefficient. However, this bound applies only if the lattice size L does not exceed a critical value which grows exponentially with β. In that sense, the model can be called a partially self-correcting memory. We also report a Monte Carlo simulation indicating that our analytic bounds on the memory time are tight up to constant coefficients. To model the readout step we introduce a new decoding algorithm, which can be implemented efficiently for any topological stabilizer code. A longer version of this work can be found in Bravyi and Haah, arXiv:1112.3252.
Studies of coupled cavity LINAC (CCL) accelerating structures with 3-D codes
Spalek, G.
2000-08-01
The cw CCL being designed for the Accelerator Production of Tritium (APT) project accelerates protons from 96 MeV to 211 MeV. It consists of 99 segments, each containing up to seven accelerating cavities. Segments are coupled by intersegment coupling cavities and grouped into supermodules. The design method needs to address not only basic cavity sizing for a given coupling and pi/2-mode frequency, but also the effects of high power densities on the cavity frequency, the mechanical stresses, and the structure's stop band during operation. On the APT project, 3-D RF (Ansoft Corp.'s HFSS) and coupled RF/structural (Ansys Inc.'s ANSYS) codes are being used to develop tools to address the above issues and guide cooling channel design. The codes' predictions are being checked against available low-power aluminum models. Stop-band behavior under power will be checked once the tools are extended to CCDTL structures that have been tested at high power. A summary of calculations made to date and their agreement with measured results will be presented.
Status and future of the 3D MAFIA group of codes
NASA Astrophysics Data System (ADS)
Ebeling, F.; Klatt, R.; Krawzcyk, F.; Lawinsky, E.; Weiland, T.; Wipf, S. G.; Steffen, B.; Barts, T.; Browman, J.; Cooper, R. K.; Rodenz, G.
1988-12-01
The group of fully three-dimensional computer codes for solving Maxwell's equations for a wide range of applications, MAFIA, is already well established. Extensive comparisons with measurements have demonstrated the accuracy of the computations. A large number of components have been designed for accelerators, such as kicker magnets, non-cylindrical cavities, ferrite-loaded cavities, vacuum chambers with slots and transitions, etc. The latest additions to the system include a new static solver that can calculate 3D magneto- and electrostatic fields, and a self-consistent version of the 2D-BCI that solves the field equations and the equations of motion in parallel. Work on new eddy current modules has started, which will allow treatment of laminated and/or solid iron cores excited by low-frequency currents. Based on our experience with the present releases 1 and 2, we have started a complete revision of the whole user interface and data structure, which will make the codes even more user-friendly and flexible.
ORPHEE research reactor: 3D core depletion calculation using Monte-Carlo code TRIPOLI-4®
NASA Astrophysics Data System (ADS)
Damian, F.; Brun, E.
2014-06-01
ORPHEE is a research reactor located at CEA Saclay that aims at producing neutron beams for experiments. It is a pool-type reactor (heavy water) whose core is cooled by light water; its thermal power is 14 MW. The ORPHEE core is 90 cm high and has a cross section of 27 x 27 cm2. It is loaded with eight fuel assemblies characterized by various numbers of fuel plates. The fuel plates are composed of aluminium and High Enriched Uranium (HEU). It is a once-through core with a fuel cycle length of approximately 100 Equivalent Full Power Days (EFPD) and a maximum burnup of 40%. Various analyses in progress at CEA concern the determination of the core neutronic parameters during irradiation. Given the geometrical complexity of the core and the quasi-absence of thermal feedback at nominal operation, the 3D core depletion calculations are performed using the Monte-Carlo code TRIPOLI-4® [1,2,3]. A preliminary validation of the depletion calculation was performed on a 2D core configuration by comparison with the deterministic transport code APOLLO2 [4]. The analysis showed the reliability of TRIPOLI-4® in calculating a complex core configuration using a large number of depleting regions with a high level of confidence.
Study and simulation of low rate video coding schemes
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Chen, Yun-Chung; Kipp, G.
1992-01-01
The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color mapped images, robust coding scheme for packet video, recursively indexed differential pulse code modulation, image compression technique for use on token ring networks, and joint source/channel coder design.
FERM3D: A finite element R-matrix electron molecule scattering code
NASA Astrophysics Data System (ADS)
Tonzani, Stefano
2007-01-01
FERM3D is a three-dimensional finite element program for the elastic scattering of a low-energy electron from a general polyatomic molecule, which is converted to a potential scattering problem. The code is based on tricubic polynomials in spherical coordinates. The electron-molecule interaction is treated as a sum of three terms: electrostatic, exchange, and polarization. The electrostatic term can be extracted directly from ab initio codes (GAUSSIAN 98 in the work described here), while the exchange term is approximated using a local density functional. A local polarization potential based on density functional theory [C. Lee, W. Yang, R.G. Parr, Phys. Rev. B 37 (1988) 785] describes the long-range attraction to the molecular target induced by the scattering electron. Photoionization calculations are also possible and illustrated in the present work. The generality and simplicity of the approach are important in extending electron-scattering calculations to more complex targets than is possible with other methods.
Program summary
Title of program: FERM3D
Catalogue identifier: ADYL_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYL_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer for which the program is designed and others on which it has been tested: Intel Xeon, AMD Opteron 64 bit, Compaq Alpha
Operating systems or monitors under which the program has been tested: HP Tru64 Unix v5.1, Red Hat Linux Enterprise 3
Programming language used: Fortran 90
Memory required to execute with typical data: 900 MB (neutral CO2), 2.3 GB (ionic CO2), 1.4 GB (benzene)
No. of bits in a word: 32
No. of processors used: 1
Has the code been vectorized?: No
No. of lines in distributed program, including test data, etc.: 58 383
No. of bytes in distributed program, including test data, etc.: 561 653
Distribution format: tar.gzip file
CPC Program library subprograms used: ADDA, ACDP
Nature of physical problem: Scattering of an
Adaptation of video game UVW mapping to 3D visualization of gene expression patterns
NASA Astrophysics Data System (ADS)
Vize, Peter D.; Gerth, Victor E.
2007-01-01
Analysis of gene expression patterns within an organism plays a critical role in associating genes with biological processes in both health and disease. During embryonic development the analysis and comparison of different gene expression patterns allows biologists to identify candidate genes that may regulate the formation of normal tissues and organs and to search for genes associated with congenital diseases. No two individual embryos, or organs, are exactly the same shape or size, so comparing spatial gene expression in one embryo to that in another is difficult. We will present our efforts in comparing gene expression data collected using both volumetric and projection approaches. Volumetric data is highly accurate but difficult to process and compare. Projection methods use UV mapping to align texture maps to standardized spatial frameworks. This approach is less accurate but is very rapid and requires very little processing. We have built a database of over 180 3D models depicting gene expression patterns mapped onto the surface of spline-based embryo models. Gene expression data in different models can easily be compared to determine common regions of activity. Visualization software, in both Java and OpenGL, optimized for viewing 3D gene expression data will also be demonstrated.
NASA Astrophysics Data System (ADS)
Zhang, Yujia; Yilmaz, Alper
2016-06-01
Surface reconstruction using coded structured light is considered one of the most reliable techniques for high-quality 3D scanning. With a calibrated projector-camera stereo system, a light pattern is projected onto the scene and imaged by the camera. Correspondences between projected and recovered patterns are computed in the decoding process and are used to generate a 3D point cloud of the surface. However, indirect illumination effects on the surface, such as subsurface scattering and interreflections, raise difficulties in reconstruction. In this paper, we apply the maximum min-SW gray code to reduce the indirect illumination effects of the specular surface. We also analyze the errors of the maximum min-SW gray code compared with the conventional gray code, showing that the maximum min-SW gray code is significantly better at reducing indirect illumination effects. To achieve sub-pixel accuracy, we project high frequency sinusoidal patterns onto the scene simultaneously. For specular surfaces, however, the high frequency patterns are susceptible to decoding errors, and incorrect decoding of high frequency patterns results in a loss of depth resolution. Our method resolves this problem by combining the low frequency maximum min-SW gray code and the high frequency phase shifting code, which achieves dense 3D reconstruction for specular surfaces. Our contributions include: (i) A complete setup of the structured light based 3D scanning system; (ii) A novel combination technique of the maximum min-SW gray code and phase shifting code. First, phase shifting decoding is performed with sub-pixel accuracy. Then, the maximum min-SW gray code is used to resolve the ambiguity. According to the experimental results and data analysis, our structured light based 3D scanning system enables high quality dense reconstruction of scenes with a small number of images. Qualitative and quantitative comparisons are performed to demonstrate the advantages of our new
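The per-pixel decoding step can be illustrated with the conventional reflected-binary Gray code (a sketch of ours, not the paper's implementation; the maximum min-SW code is a permutation of these codewords chosen to widen the narrowest stripe, so the same XOR-prefix decoding applies once the permutation is inverted):

```python
def binary_to_gray(n: int) -> int:
    """Conventional reflected-binary Gray code of integer n."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the Gray code by cumulative XOR over the bit prefix."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Each projected pattern contributes one bit per pixel; decoding the
# recovered bit vector yields the stripe (fringe-order) index.
bits = [1, 1, 0, 1]                       # example pattern readings, MSB first
g = int("".join(map(str, bits)), 2)
stripe_index = gray_to_binary(g)          # Gray 1101 -> binary 1001 = 9
```

Adjacent Gray codewords differ in exactly one bit, so a threshold error at a stripe boundary displaces the decoded index by at most one stripe.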
SCTP as scalable video coding transport
NASA Astrophysics Data System (ADS)
Ortiz, Jordi; Graciá, Eduardo Martínez; Skarmeta, Antonio F.
2013-12-01
This study presents an evaluation of the Stream Transmission Control Protocol (SCTP) for the transport of the scalable video codec (SVC), proposed by MPEG as an extension to H.264/AVC. The two technologies fit together well. On the one hand, SVC makes it easy to split the bitstream into substreams carrying different video layers, each with different importance for the reconstruction of the complete video sequence at the receiver end. On the other hand, SCTP includes features, such as multi-streaming and multi-homing, that permit robust and efficient transport of the SVC layers. Several transmission strategies built on baseline SCTP and its concurrent multipath transfer (CMT) extension are compared with the classical solutions based on the Transmission Control Protocol (TCP) and the Real-time Transport Protocol (RTP). Using ns-2 simulations, it is shown that CMT-SCTP outperforms TCP and RTP in error-prone networking environments. The comparison is established according to several performance measurements, including delay, throughput, packet loss, and peak signal-to-noise ratio of the received video.
Description of FEL3D: A three dimensional simulation code for TOK and FEL
Dutt, S.; Friedman, A.; Gover, A.
1988-10-20
FEL3D is a three dimensional simulation code, written for the purpose of calculating the parameters of coherent radiation emitted by electrons in an undulator. The program was written predominantly for simulating the coherent super-radiant harmonic frequency emission of electrons which are being bunched by an external laser beam while propagating in an undulator magnet. This super-radiant emission is to be studied in the TOK (transverse optical klystron) experiment, which is under construction in the NSLS department at Brookhaven National Laboratory. The program can also calculate the stimulated emission radiometric properties of a free electron laser (FEL) taking into account three dimensional effects. While this application is presently limited to the small gain operation regime of FEL's, extension to the high gain regime is expected to be relatively easy. The code is based on a semi-analytical concept. Instead of a full numerical solution of the Maxwell-Lorentz equations, the trajectories of the electron in the wiggler field are calculated analytically, and the radiation fields are expanded in terms of free space eigen-modes. This approach permits efficient computation, with a computation time of about 0.1 sec/electron on the BNL IBM 3090. The code reflects the important three dimensional features of the electron beam, the modulating laser beam, and the emitted radiation field. The statistical approach is based on averaging over the electron initial conditions according to a given distribution function in phase space, rather than via Monte-Carlo simulation. The present version of the program is written for uniform periodic wiggler field, but extension to nonuniform fields is straightforward. 4 figs., 5 tabs.
NASA Astrophysics Data System (ADS)
Skála, J.; Baruffa, F.; Büchner, J.; Rampp, M.
2015-08-01
Context. The numerical simulation of turbulence and flows in almost ideal astrophysical plasmas with large Reynolds numbers motivates the implementation of magnetohydrodynamical (MHD) computer codes with low resistivity. They need to be computationally efficient and scale well with large numbers of CPU cores, allow obtaining a high grid resolution over large simulation domains, and be easily and modularly extensible, for instance, to new initial and boundary conditions. Aims: Our aims are the implementation, optimization, and verification of a computationally efficient, highly scalable, and easily extensible low-dissipative MHD simulation code for the numerical investigation of the dynamics of astrophysical plasmas with large Reynolds numbers in three dimensions (3D). Methods: The new GOEMHD3 code discretizes the ideal part of the MHD equations using a fast and efficient leap-frog scheme that is second-order accurate in space and time and whose initial and boundary conditions can easily be modified. For the investigation of diffusive and dissipative processes the corresponding terms are discretized by a DuFort-Frankel scheme. To always fulfill the Courant-Friedrichs-Lewy stability criterion, the time step of the code is adapted dynamically. Numerically induced local oscillations are suppressed by explicit, externally controlled diffusion terms. Non-equidistant grids are implemented, which enhance the spatial resolution, where needed. GOEMHD3 is parallelized based on the hybrid MPI-OpenMP programming paradigm, adopting a standard two-dimensional domain-decomposition approach. Results: The ideal part of the equation solver is verified by performing numerical tests of the evolution of the well-understood Kelvin-Helmholtz instability and of Orszag-Tang vortices. The accuracy of solving the (resistive) induction equation is tested by simulating the decay of a cylindrical current column. Furthermore, we show that the computational performance of the code scales very
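The dynamic time-step adaptation to the Courant-Friedrichs-Lewy criterion can be sketched as follows (a minimal illustration, not GOEMHD3 source; the Courant safety factor 0.4 is an arbitrary example value):

```python
import numpy as np

def cfl_timestep(v, c_fast, dx, courant=0.4):
    """Largest stable explicit step: dt <= C * dx / max signal speed.

    v       : array of bulk flow speeds on the grid
    c_fast  : array of fast magnetosonic speeds on the grid
    dx      : (minimum) grid spacing
    courant : safety factor C < 1 (illustrative choice)
    """
    s_max = np.max(np.abs(v) + c_fast)    # fastest characteristic speed anywhere
    return courant * dx / s_max

# Re-evaluated every step, so dt shrinks automatically when flows speed up.
v = np.array([0.0, 1.5, -2.0])
c = np.array([1.0, 1.0, 1.0])
dt = cfl_timestep(v, c, dx=0.1)           # 0.4 * 0.1 / 3.0
```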
Anderson, D.V.; Cohen, R.H.; Ferguson, J.R.; Johnston, B.M.; Sharp, C.B.; Willmann, P.A.
1981-06-30
The single particle orbit code, TIBRO, has been modified extensively to improve the interpolation methods used and to allow use of vector potential fields in the simulation of charged particle orbits on a 3D domain. A 3D cubic B-spline algorithm is used to generate spline coefficients used in the interpolation. Smooth and accurate field representations are obtained. When vector potential fields are used, the 3D cubic spline interpolation formula analytically generates the magnetic field used to push the particles. This field has ∇·B = 0 to computer roundoff. When the magnetic induction is interpolated directly, the interpolation allows ∇·B ≠ 0, which can lead to significant nonphysical results. Presently the code assumes quadrupole symmetry, but this is not an essential feature of the code and could easily be removed for other applications. Many details pertaining to this code are given on microfiche accompanying this report.
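The reason deriving B from a vector potential preserves ∇·B = 0 to roundoff is that the identity ∇·(∇×A) = 0 holds exactly whenever the same commuting derivative operators are used for both the curl and the divergence (in TIBRO the spline derivatives are analytic; the numpy centered-difference sketch below, which is ours, shows the same mechanism, since centered differences along different axes commute):

```python
import numpy as np

# Smooth test vector potential A on a uniform grid.
x = np.linspace(0, 2 * np.pi, 32)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
Ax = np.sin(Y) * np.cos(Z)
Ay = np.sin(Z) * np.cos(X)
Az = np.sin(X) * np.cos(Y)

h = x[1] - x[0]
d = lambda f, axis: np.gradient(f, h, axis=axis)   # centered in the interior

# B = curl A, evaluated with the difference operators.
Bx = d(Az, 1) - d(Ay, 2)
By = d(Ax, 2) - d(Az, 0)
Bz = d(Ay, 0) - d(Ax, 1)

# div B with the SAME operators: mixed differences commute, so the
# terms cancel to machine roundoff away from the one-sided boundaries.
divB = d(Bx, 0) + d(By, 1) + d(Bz, 2)
interior_max = np.max(np.abs(divB[2:-2, 2:-2, 2:-2]))
```

Interpolating B componentwise instead breaks this cancellation, which is the nonphysical ∇·B ≠ 0 the abstract warns about.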
NASA Astrophysics Data System (ADS)
Meléndez, A.; Korenaga, J.; Sallarès, V.; Miniussi, A.; Ranero, C. R.
2015-10-01
We present a new 3-D traveltime tomography code (TOMO3D) for the modelling of active-source seismic data that uses the arrival times of both refracted and reflected seismic phases to derive the velocity distribution and the geometry of reflecting boundaries in the subsurface. This code is based on its popular 2-D version TOMO2D, from which it inherited the methods to solve the forward and inverse problems. The traveltime calculations are done using a hybrid ray-tracing technique combining the graph and bending methods. The LSQR algorithm is used to perform the iterative regularized inversion to improve the initial velocity and depth models. In order to cope with an increased computational demand due to the incorporation of the third dimension, the forward problem solver, which takes most of the run time (˜90 per cent in the test presented here), has been parallelized with a combination of multi-processing and message passing interface standards. This parallelization distributes the ray-tracing and traveltime calculations among available computational resources. The code's performance is illustrated with a realistic synthetic example, including a checkerboard anomaly and two reflectors, which simulates the geometry of a subduction zone. The code is designed to invert for a single reflector at a time. A data-driven layer-stripping strategy is proposed for cases involving multiple reflectors, and it is tested for the successive inversion of the two reflectors. Layers are bound by consecutive reflectors, and an initial velocity model for each inversion step incorporates the results from previous steps. This strategy poses simpler inversion problems at each step, allowing the recovery of strong velocity discontinuities that would otherwise be smoothed.
NASA Astrophysics Data System (ADS)
Seo, Kwang-Deok; Chi, Won Sup; Lee, In Ki; Chang, Dae-Ig
2010-10-01
We propose a joint-source-channel coding (JSCC) scheme that can provide and sustain high-quality video service despite deteriorated transmission channel conditions of the second generation of the digital video broadcasting (DVB-S2) satellite broadcasting service. In particular, by combining the layered characteristics of SVC (scalable video coding) video and the robust channel coding capability of LDPC (low-density parity check) codes employed in DVB-S2, a new concept of JSCC for digital satellite broadcasting service is developed. Rain attenuation in high-frequency bands such as the Ka band is a major factor in lowering the link capacity in satellite broadcasting service. Therefore, it is necessary to devise a new technology to dynamically manage the rain attenuation by adopting a JSCC scheme that can apply variable code rates for both source and channel coding. For this purpose, we develop a JSCC scheme by combining SVC and LDPC, and demonstrate the performance of the proposed JSCC scheme by extensive simulations in which SVC coded video is transmitted over various error-prone channels with AWGN (additive white Gaussian noise) patterns in DVB-S2 broadcasting service.
Coding scheme for wireless video transport with reduced frame skipping
NASA Astrophysics Data System (ADS)
Aramvith, Supavadee; Sun, Ming-Ting
2000-05-01
We investigate the scenario of using the Automatic Repeat reQuest (ARQ) retransmission scheme for two-way low bit-rate video communications over wireless Rayleigh fading channels. We show that during the retransmission of error packets, due to the reduced channel throughput, the video encoder buffer may fill up quickly and cause the TMN8 rate-control algorithm to significantly reduce the bits allocated to each video frame. This results in Peak Signal-to-Noise Ratio (PSNR) degradation and many skipped frames. To reduce the number of frames skipped, in this paper we propose a coding scheme which takes into consideration the effects of the video buffer fill-up, an a priori channel model, the channel feedback information, and hybrid ARQ/FEC. The simulation results indicate that our proposed scheme encodes the video sequences with far fewer skipped frames and with higher PSNR compared to H.263 TMN8.
EZBC video streaming with channel coding and error concealment
NASA Astrophysics Data System (ADS)
Bajic, Ivan V.; Woods, John W.
2003-06-01
In this text we present a system for streaming video content encoded using the motion-compensated Embedded Zero Block Coder (EZBC). The system incorporates unequal loss protection in the form of multiple description FEC (MD-FEC) coding, which provides adequate protection for the embedded video bitstream when the loss process is not very bursty. The adverse effects of burst losses are reduced using a novel motion-compensated error concealment method.
NASA Astrophysics Data System (ADS)
Mirota, Daniel J.; Uneri, Ali; Schafer, Sebastian; Nithiananthan, Sajendra; Reh, Douglas D.; Gallia, Gary L.; Taylor, Russell H.; Hager, Gregory D.; Siewerdsen, Jeffrey H.
2011-03-01
Registration of endoscopic video to preoperative CT facilitates high-precision surgery of the head, neck, and skull-base. Conventional video-CT registration is limited by the accuracy of the tracker and does not use the underlying video or CT image data. A new image-based video registration method has been developed to overcome the limitations of conventional tracker-based registration. This method adds to a navigation system based on intraoperative C-arm cone-beam CT (CBCT), in turn providing high-accuracy registration of video to the surgical scene. The resulting registration enables visualization of the CBCT and planning data within the endoscopic video. The system incorporates a mobile C-arm, integrated with an optical tracking system, video endoscopy, deformable registration of preoperative CT with intraoperative CBCT, and 3D visualization. As in the tracker-based approach, in image-based video-CBCT registration the endoscope is first localized with the optical tracking system, followed by a direct 3D image-based registration of the video to the CBCT. In this way, the system achieves video-CBCT registration that is both fast and accurate. Application in skull-base surgery demonstrates overlay of critical structures (e.g., carotid arteries) and surgical targets with sub-mm accuracy. Phantom and cadaver experiments show consistent improvement of target registration error (TRE) in video overlay over conventional tracker-based registration, e.g., 0.92 mm versus 1.82 mm for image-based and tracker-based registration, respectively. The proposed method represents a two-fold advance: first, through registration of video to up-to-date intraoperative CBCT, and second, through direct 3D image-based video-CBCT registration, which together provide more confident visualization of target and normal tissues within up-to-date images.
Calibration of Panoramic Cameras with Coded Targets and a 3d Calibration Field
NASA Astrophysics Data System (ADS)
Tommaselli, A. M. G.; Marcato, J., Jr.; Moraes, M. V. A.; Silva, S. L. A.; Artero, A. O.
2014-03-01
The aim of this paper is to present results achieved with a 3D terrestrial calibration field, designed for calibrating digital cameras and omnidirectional sensors. This terrestrial calibration field is composed of 139 ARUCO coded targets. Some experiments were performed using a Nikon D3100 digital camera with an 8 mm Samyang Bower fisheye lens. The camera was calibrated in this terrestrial test field using a conventional bundle adjustment with the collinearity model and with mathematical models specially designed for fisheye lenses. The CMC software (Calibration with Multiple Cameras), developed in-house, was used for the calibration trials. This software was modified to use fisheye models to which the Conrady-Brown distortion equations were added. The target identification and the image measurement of each target's four corners were performed automatically with publicly available software. Several experiments were performed with 16 images, and the results are presented and compared. Besides the calibration of fisheye cameras, the field was designed for calibration of a catadioptric system, and brief information on the calibration of this unit is provided in the paper.
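The Conrady-Brown distortion correction added to the fisheye models can be sketched in its standard photogrammetric form (this is the textbook model, not the CMC implementation; the coefficient names k1-k3 for radial and p1, p2 for decentering distortion follow common convention, and the fisheye projection step is omitted):

```python
def brown_conrady(x, y, k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    """Apply radial (k1..k3) and decentering (p1, p2) distortion to
    normalized image coordinates (x, y)."""
    r2 = x * x + y * y                       # squared radial distance
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3
    dx = x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
    dy = y * radial + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
    return x + dx, y + dy

# Pure radial example: a point at unit radius with k1 = 0.1 moves outward.
xd, yd = brown_conrady(1.0, 0.0, k1=0.1)
```

In a bundle adjustment these coefficients are estimated jointly with the interior and exterior orientation parameters.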
Validation Studies of the Finite Orbit Width version of the CQL3D code
NASA Astrophysics Data System (ADS)
Petrov, Yu. V.; Harvey, R. W.
2014-10-01
The Finite-Orbit-Width (FOW) version of the CQL3D bounce-averaged Fokker-Planck (FP) code has been further developed and tested. The neoclassical radial transport appears naturally in this version by averaging the local collision coefficients along guiding center orbits, with a proper transformation matrix from local (R,Z) coordinates to the midplane computational coordinates, where the FP equation is solved. In a similar way, the local quasilinear rf diffusion terms give rise to additional radial transport of orbits. The main challenge is the internal boundary conditions (IBC), which add many elements to the matrix of coefficients for the solution of the FPE on the computational grid, effectively making it a non-banded (but still sparse) matrix. Steady state runs have been achieved on NERSC supercomputers in typically 10 time steps. Validation tests are performed for NSTX conditions, but using different scaling factors of the equilibrium magnetic field, from 0.5 to 8.0. The bootstrap current calculations for ions show a reasonable agreement of current density profiles with the Sauter et al. model equations, which are based on a 1st-order expansion, although the magnitudes of currents may differ by up to 30%. Supported by USDOE grants SC0006614, ER54744, and ER44649.
LINFLUX-AE: A Turbomachinery Aeroelastic Code Based on a 3-D Linearized Euler Solver
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Bakhle, M. A.; Trudell, J. J.; Mehmed, O.; Stefko, G. L.
2004-01-01
This report describes the development and validation of LINFLUX-AE, a turbomachinery aeroelastic code based on the linearized unsteady 3-D Euler solver, LINFLUX. A helical fan with flat plate geometry is selected as the test case for numerical validation. The steady solution required by LINFLUX is obtained from the nonlinear Euler/Navier Stokes solver TURBO-AE. The report briefly describes the salient features of LINFLUX and the details of the aeroelastic extension. The aeroelastic formulation is based on a modal approach. An eigenvalue formulation is used for flutter analysis. The unsteady aerodynamic forces required for flutter are obtained by running LINFLUX for each mode, interblade phase angle and frequency of interest. The unsteady aerodynamic forces for forced response analysis are obtained from LINFLUX for the prescribed excitation, interblade phase angle, and frequency. The forced response amplitude is calculated from the modal summation of the generalized displacements. The unsteady pressures, work done per cycle, eigenvalues and forced response amplitudes obtained from LINFLUX are compared with those obtained from LINSUB, TURBO-AE, ASTROP2, and ANSYS.
BWR ex-vessel steam explosion analysis with MC3D code
Leskovar, M.
2012-07-01
A steam explosion may occur, during a severe reactor accident, when the molten core comes into contact with the coolant water. A strong enough steam explosion in a nuclear power plant could jeopardize the containment integrity and so lead to a direct release of radioactive material to the environment. To resolve the open issues in steam explosion understanding and modeling, the OECD program SERENA phase 2 was launched at the end of year 2007, focusing on reactor applications. To verify the progress made in the understanding and modeling of fuel coolant interaction key phenomena for reactor applications a reactor exercise has been performed. In this paper the BWR ex-vessel steam explosion study, which was carried out with the MC3D code in conditions of the SERENA reactor exercise for the BWR case, is presented and discussed. The premixing simulations were performed with two different jet breakup modeling approaches and the explosion was triggered also at the expected most challenging time. For the most challenging case, at the cavity wall the highest calculated pressure was ~20 MPa and the highest pressure impulse was ~90 kPa·s. (authors)
Leclère, C; Avril, M; Viaux-Savelon, S; Bodeau, N; Achard, C; Missonnier, S; Keren, M; Feldman, R; Chetouani, M; Cohen, D
2016-01-01
Studying early interaction is essential for understanding development and psychopathology. Automatic computational methods offer the possibility to analyse social signals and behaviours of several partners simultaneously and dynamically. Here, 20 dyads of mothers and their 13-36-month-old infants were videotaped during mother-infant interaction, including 10 extremely high-risk and 10 low-risk dyads, using two-dimensional (2D) and three-dimensional (3D) sensors. From 2D+3D data and 3D space reconstruction, we extracted individual parameters (quantity of movement and motion activity ratio for each partner) and dyadic parameters related to the dynamics of the partners' head distance (contribution to head distance), to the focus of mutual engagement (percentage of time spent face to face or oriented to the task) and to the dynamics of motion activity (synchrony ratio, overlap ratio, pause ratio). Features are compared with blind global rating of the interaction using the coding interactive behavior (CIB). We found that individual and dyadic parameters of 2D+3D motion features correlate with rated CIB maternal and dyadic composite scores. Support Vector Machine classification using all 2D+3D motion features classified 100% of the dyads into their group, meaning that motion behaviours are sufficient to distinguish high-risk from low-risk dyads. The proposed method may present a promising, low-cost methodology that can uniquely use artificial technology to detect meaningful features of human interactions and may have several implications for studying dyadic behaviours in psychiatry. Combining both global rating scales and computerized methods may enable a continuum of time scales from a summary of entire interactions to second-by-second dynamics. PMID:27219342
Implementation of wall boundary conditions for transpiration in F3D thin-layer Navier-Stokes code
NASA Technical Reports Server (NTRS)
Kandula, M.; Martin, F. W., Jr.
1991-01-01
Numerical boundary conditions for mass injection/suction at the wall are incorporated in the thin-layer Navier-Stokes code, F3D. The accuracy of the boundary conditions and the code is assessed by a detailed comparison of the predictions of velocity distributions and skin-friction coefficients with exact similarity solutions for laminar flow over a flat plate with variable blowing/suction, and with measurements for turbulent flow past a flat plate with uniform blowing. In laminar flow, F3D predictions for the friction coefficient compare well with the exact similarity solution with and without suction, but produce large errors at moderate-to-large values of blowing. A slight Mach number dependence of the skin-friction coefficient due to blowing in turbulent flow is computed by the F3D code. Predicted surface pressures for turbulent flow past an airfoil with mass injection are in qualitative agreement with measurements for a flat plate.
Selective encryption for H.264/AVC video coding
NASA Astrophysics Data System (ADS)
Shi, Tuo; King, Brian; Salama, Paul
2006-02-01
Due to the ease with which digital data can be manipulated and due to the ongoing advancements that have brought us closer to pervasive computing, the secure delivery of video and images has become a challenging problem. Despite the advantages and opportunities that digital video provides, illegal copying and distribution as well as plagiarism of digital audio, images, and video is still ongoing. In this paper we describe two techniques for securing H.264 coded video streams. The first technique, SEH264Algorithm1, groups the data into the following blocks: (1) a block that contains the sequence parameter set and the picture parameter set, (2) a block containing a compressed intra coded frame, (3) a block containing the slice header of a P slice, all the headers of the macroblocks within the same P slice, and all the luma and chroma DC coefficients belonging to all the macroblocks within the same slice, (4) a block containing all the AC coefficients, and (5) a block containing all the motion vectors. The first three are encrypted whereas the last two are not. The second method, SEH264Algorithm2, relies on the use of multiple slices per coded frame. The algorithm searches the compressed video sequence for start codes (0x000001) and then encrypts the next N bits of data.
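The scan-and-encrypt pattern of the second method can be sketched as follows (our illustration, not the authors' code: it works at byte rather than bit granularity, and the toy XOR cipher is a stand-in for a real cipher such as AES in CTR mode):

```python
def selective_encrypt(bitstream: bytes, n_bytes: int, encrypt) -> bytes:
    """Scan for H.264 start codes (0x000001) and pass the n_bytes that
    follow each one through `encrypt` (any length-preserving cipher).
    Returns the modified stream; start codes themselves are untouched."""
    out = bytearray(bitstream)
    i = 0
    while True:
        i = bitstream.find(b"\x00\x00\x01", i)   # next start code
        if i < 0:
            break
        start = i + 3                            # first payload byte
        out[start:start + n_bytes] = encrypt(bytes(out[start:start + n_bytes]))
        i = start
    return bytes(out)

# Toy involutive "cipher" for illustration only -- NOT secure.
toy = lambda b: bytes(x ^ 0xA5 for x in b)

stream = b"\x00\x00\x01ABC\x00\x00\x01XY"
enc = selective_encrypt(stream, 2, toy)          # 2 bytes after each start code
```

Because only the bytes immediately after each start code are touched, a decoder without the key loses the headers it needs while most of the stream stays compressed-domain intact.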
NASA Astrophysics Data System (ADS)
Meléndez, A.; Korenaga, J.; Sallares, V.; Ranero, C. R.
2012-12-01
We present the development state of tomo3d, a code for three-dimensional refraction and reflection travel-time tomography of wide-angle seismic data based on the previous two-dimensional version of the code, tomo2d. The core of both forward and inverse problems is inherited from the 2-D version. The ray tracing is performed by a hybrid method combining the graph and bending methods. The graph method finds an ordered array of discrete model nodes, which satisfies Fermat's principle, that is, whose corresponding travel time is a global minimum within the space of discrete nodal connections. The bending method is then applied to produce a more accurate ray path by using the nodes as support points for an interpolation with beta-splines. Travel time tomography is formulated as an iterative linearized inversion, and each step is solved using an LSQR algorithm. In order to avoid the singularity of the sensitivity kernel and to reduce the instability of inversion, regularization parameters are introduced in the inversion in the form of smoothing and damping constraints. Velocity models are built as 3-D meshes, and velocity values at intermediate locations are obtained by trilinear interpolation within the corresponding pseudo-cubic cell. Meshes are sheared to account for topographic relief. A floating reflector is represented by a 2-D grid, and depths at intermediate locations are calculated by bilinear interpolation within the corresponding square cell. The trade-off between the resolution of the final model and the associated computational cost is controlled by the relation between the selected forward star for the graph method (i.e. the number of nodes that each node considers as its neighbors) and the refinement of the velocity mesh. Including reflected phases is advantageous because it provides a better coverage and allows us to define the geometry of those geological interfaces with velocity contrasts sharp enough to be observed on record sections. The code also
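The trilinear interpolation used to evaluate velocities inside a pseudo-cubic cell can be sketched as follows (an illustrative implementation, not the tomo3d source; the mesh shearing for topographic relief is omitted):

```python
def trilinear(v, fx, fy, fz):
    """Interpolate within one cell of a velocity mesh.

    v          : 2x2x2 nested sequence holding the eight corner values,
                 indexed v[x][y][z]
    fx, fy, fz : fractional position inside the cell, each in [0, 1]
    """
    def lerp(a, b, t):
        return a + (b - a) * t
    # Collapse the cell one axis at a time: x, then y, then z.
    c00 = lerp(v[0][0][0], v[1][0][0], fx)
    c10 = lerp(v[0][1][0], v[1][1][0], fx)
    c01 = lerp(v[0][0][1], v[1][0][1], fx)
    c11 = lerp(v[0][1][1], v[1][1][1], fx)
    c0 = lerp(c00, c10, fy)
    c1 = lerp(c01, c11, fy)
    return lerp(c0, c1, fz)
```

The same scheme dropped to two dimensions (bilinear, four corners) gives the reflector-depth interpolation on the 2-D grid.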
Displaying 3D radiation dose on endoscopic video for therapeutic assessment and surgical guidance
NASA Astrophysics Data System (ADS)
Qiu, Jimmy; Hope, Andrew J.; Cho, B. C. John; Sharpe, Michael B.; Dickie, Colleen I.; DaCosta, Ralph S.; Jaffray, David A.; Weersink, Robert A.
2012-10-01
We have developed a method to register and display 3D parametric data, in particular radiation dose, on two-dimensional endoscopic images. This registration of radiation dose to endoscopic or optical imaging may be valuable in assessment of normal tissue response to radiation, and visualization of radiated tissues in patients receiving post-radiation surgery. Electromagnetic sensors embedded in a flexible endoscope were used to track the position and orientation of the endoscope allowing registration of 2D endoscopic images to CT volumetric images and radiation doses planned with respect to these images. A surface was rendered from the CT image based on the air/tissue threshold, creating a virtual endoscopic view analogous to the real endoscopic view. Radiation dose at the surface or at known depth below the surface was assigned to each segment of the virtual surface. Dose could be displayed as either a colorwash on this surface or surface isodose lines. By assigning transparency levels to each surface segment based on dose or isoline location, the virtual dose display was overlaid onto the real endoscope image. Spatial accuracy of the dose display was tested using a cylindrical phantom with a treatment plan created for the phantom that matched dose levels with grid lines on the phantom surface. The accuracy of the dose display in these phantoms was 0.8-0.99 mm. To demonstrate clinical feasibility of this approach, the dose display was also tested on clinical data of a patient with laryngeal cancer treated with radiation therapy, with estimated display accuracy of ˜2-3 mm. The utility of the dose display for registration of radiation dose information to the surgical field was further demonstrated in a mock sarcoma case using a leg phantom. With direct overlay of radiation dose on endoscopic imaging, tissue toxicities and tumor response in endoluminal organs can be directly correlated with the actual tissue dose, offering a more nuanced assessment of normal tissue
Conditional entropy coding of DCT coefficients for video compression
NASA Astrophysics Data System (ADS)
Sipitca, Mihai; Gillman, David W.
2000-04-01
We introduce conditional Huffman encoding of DCT run-length events to improve the coding efficiency of low- and medium-bit rate video compression algorithms. We condition the Huffman code for each run-length event on a classification of the current block. We classify blocks according to coding mode and signal type, which are known to the decoder, and according to energy, which the decoder must receive as side information. Our classification schemes improve coding efficiency with little or no increased running time and some increased memory use.
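The idea of conditioning the Huffman table on a block classification can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the run-length alphabet and the per-class counts are invented to show how the same event receives different code lengths in different classes:

```python
import heapq
from collections import Counter

def huffman_lengths(freqs):
    """Codeword lengths of a Huffman code for a symbol -> count map."""
    if len(freqs) == 1:
        return {next(iter(freqs)): 1}
    # Heap entries carry a unique id so ties never compare the symbol lists.
    heap = [(n, i, [s]) for i, (s, n) in enumerate(freqs.items())]
    heapq.heapify(heap)
    lengths = dict.fromkeys(freqs, 0)
    uid = len(heap)
    while len(heap) > 1:
        n1, _, s1 = heapq.heappop(heap)
        n2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:
            lengths[s] += 1            # each merge adds one bit to the code
        heapq.heappush(heap, (n1 + n2, uid, s1 + s2))
        uid += 1
    return lengths

# One table per block class: a run-length event that is common in a class
# gets a short code there, even if it is rare in the other class.
intra = Counter({"(0,1)": 60, "(0,2)": 25, "(1,1)": 10, "(2,1)": 5})
inter = Counter({"(0,1)": 10, "(0,2)": 15, "(1,1)": 30, "(2,1)": 45})
tables = {"intra": huffman_lengths(intra), "inter": huffman_lengths(inter)}
```

Since the decoder knows the coding mode and signal type, only the energy class needs to be signalled as side information before the right table can be selected.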
Guillemant, P; Ulmer, E; Freyss, G
1995-01-01
Previous studies have shown the vulnerability of the vestibular system regarding barotraumatism (1) and deep diving may induce immediate neurological changes (2). These extreme conditions (high pressure, limited examination time, restricted space, hydrogen-oxygen mixture, communication difficulties etc.) require adapted technology and associated fast experimental procedure. We were able to solve these problems by developing a new system of 3-D ocular movements on line analysis by means of a video camera. This analyser uses image processing and forms recognition software which allows non-invasive video frequency calculation of eye movements including torsional component. As this system is immediately ready for use, we were able to realize the subsequent examinations in a maximum time of 8 min for each diver: oculomotor tests including saccadic, slow and optokinetic traditional automatic measurements; vestibular tests regarding spontaneous and positional nystagmus, and reactional nystagmus to the pendular test. For pendular induced nystagmus we used appropriate head positions to stimulate separately the lateral and the posterior semicircular canal, and we measured the gain by operating successively in visible light and complete darkness. Recordings were done during a simulated onshore dive to an ambient pressure corresponding to a depth of 350 m. The above examinations were completed on the first and last days by caloric tests with the same video system analyser. The results of the investigations demonstrated perfect tolerance of the oculomotor and vestibular systems of these 4 divers thus fulfilling the preventive conditions defined by Comex Co. We were able to overcome the limitations due to low cost PC computer operation and cameras (necessity of adaptation to pressure, focus difficulties and direct light exposure eye reflexions). We still have on line accurate measurements even on the torsional component of the eye movement. Due to this technological efficiency
Unequal-period combination approach of gray code and phase-shifting for 3-D visual measurement
NASA Astrophysics Data System (ADS)
Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin
2016-09-01
Combination of Gray code and phase-shifting is the most practical and advanced approach to structured light 3-D measurement so far, as it is able to measure objects with complex and discontinuous surfaces. However, with the traditional combination of Gray code and phase-shifting, the captured Gray code images do not always have sharp cut-offs at the black-white transition boundaries, which may lead to wrongly decoded analog code orders. Moreover, during actual measurement, local decoding errors also occur in the wrapped analog code obtained with the phase-shifting approach. Therefore, in the traditional approach, the wrong analog code orders and the local decoding errors introduce errors equivalent to one fringe period when the analog code is unwrapped. In order to avoid such one-fringe-period errors, we propose an approach which combines Gray code with phase-shifting using unequal periods. With theoretical analysis, we build the measurement model of the proposed approach, determine the applicable condition, and optimize the Gray code encoding period and the phase-shifting fringe period. The experimental results verify that the proposed approach can offer a reliable unwrapped analog code, which can be used in 3-D shape measurement.
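The conventional combination the authors improve upon can be sketched in a few lines; the four-step phase-shift convention and function names below are illustrative assumptions, not the paper's unequal-period method:

```python
import numpy as np

def wrapped_phase(I0, I1, I2, I3):
    """Wrapped phase from four fringe images with shifts 0, pi/2, pi, 3pi/2:
    I_n = A + B*cos(phi - n*pi/2)."""
    return np.arctan2(I1 - I3, I0 - I2)  # values in (-pi, pi]

def absolute_phase(phi_wrapped, fringe_order):
    """The Gray-code patterns give the integer fringe order k; the unwrapped
    analog code is then phi + 2*pi*k. A wrong k shifts the result by a
    whole fringe period, which is the error the paper targets."""
    return phi_wrapped + 2.0 * np.pi * fringe_order
```

Any mismatch between the decoded fringe order and the wrapped phase near a boundary produces exactly the one-period jump discussed above.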
Fast Mode Decision for 3D-HEVC Depth Intracoding
Li, Nana; Wu, Qinggang
2014-01-01
The emerging international standard of high efficiency video coding based 3D video coding (3D-HEVC) is a successor to multiview video coding (MVC). In 3D-HEVC depth intracoding, the depth modeling modes (DMMs) and the high efficiency video coding (HEVC) intraprediction modes are both employed to select the best coding mode for each coding unit (CU). This technique achieves the highest possible coding efficiency, but it results in an extremely large encoding time, which hinders the practical application of 3D-HEVC. In this paper, a fast mode decision algorithm based on the correlation between the texture video and the depth map is proposed to reduce the computational complexity of 3D-HEVC depth intracoding. Since the texture video and its associated depth map represent the same scene, there is a high correlation between the prediction modes of the texture video and the depth map. Therefore, we can skip specific depth intraprediction modes that are rarely used in the related texture CU. Experimental results show that the proposed algorithm can significantly reduce the computational complexity of 3D-HEVC depth intracoding while maintaining coding efficiency. PMID:24963512
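The kind of correlation-based pruning described above can be illustrated with a toy decision rule; the mode indices and the smooth-mode heuristic are assumptions for illustration, not the paper's exact criterion:

```python
# Hypothetical HEVC intra mode indices: 0 = Planar, 1 = DC, 2..34 = angular.
PLANAR, DC = 0, 1

def candidate_depth_modes(texture_mode, hevc_modes, dmm_modes):
    """Prune the depth-map candidate list using the co-located texture CU.

    If the texture block chose a smooth mode (Planar/DC), a sharp depth edge
    is unlikely, so the costly depth modeling modes (DMMs) are skipped."""
    if texture_mode in (PLANAR, DC):
        return list(hevc_modes)                  # skip DMM evaluation
    return list(hevc_modes) + list(dmm_modes)    # edge likely: test everything
```

The encoder then runs rate-distortion optimization only over the pruned list, which is where the complexity saving comes from.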
Practical distributed video coding in packet lossy channels
NASA Astrophysics Data System (ADS)
Qing, Linbo; Masala, Enrico; He, Xiaohai
2013-07-01
Improving error resilience of video communications over packet lossy channels is an important and tough task. We present a framework to optimize the quality of video communications based on distributed video coding (DVC) in practical packet lossy network scenarios. The peculiar characteristics of DVC indeed require a number of adaptations to take full advantage of its intrinsic robustness when dealing with data losses of typical real packet networks. This work proposes a new packetization scheme, an investigation of the best error-correcting codes to use in a noisy environment, a practical rate-allocation mechanism, which minimizes decoder feedback, and an improved side-information generation and reconstruction function. Performance comparisons are presented with respect to a conventional packet video communication using H.264/advanced video coding (AVC). Although currently the H.264/AVC rate-distortion performance in case of no loss is better than state-of-the-art DVC schemes, under practical packet lossy conditions, the proposed techniques provide better performance with respect to an H.264/AVC-based system, especially at high packet loss rates. Thus the error resilience of the proposed DVC scheme is superior to the one provided by H.264/AVC, especially in the case of transmission over packet lossy networks.
Meyer, Michael J; Lapcevic, Ryan; Romero, Alfonso E; Yoon, Mark; Das, Jishnu; Beltrán, Juan Felipe; Mort, Matthew; Stenson, Peter D; Cooper, David N; Paccanaro, Alberto; Yu, Haiyuan
2016-05-01
A new algorithm and Web server, mutation3D (http://mutation3d.org), proposes driver genes in cancer by identifying clusters of amino acid substitutions within tertiary protein structures. We demonstrate the feasibility of using a 3D clustering approach to implicate proteins in cancer based on explorations of single proteins using the mutation3D Web interface. On a large scale, we show that clustering with mutation3D is able to separate functional from nonfunctional mutations by analyzing a combination of 8,869 known inherited disease mutations and 2,004 SNPs overlaid together upon the same sets of crystal structures and homology models. Further, we present a systematic analysis of whole-genome and whole-exome cancer datasets to demonstrate that mutation3D identifies many known cancer genes as well as previously underexplored target genes. The mutation3D Web interface allows users to analyze their own mutation data in a variety of popular formats and provides seamless access to explore mutation clusters derived from over 975,000 somatic mutations reported by 6,811 cancer sequencing studies. The mutation3D Web interface is freely available with all major browsers supported. PMID:26841357
A Robust Model-Based Coding Technique for Ultrasound Video
NASA Technical Reports Server (NTRS)
Docef, Alen; Smith, Mark J. T.
1995-01-01
This paper introduces a new approach to coding ultrasound video, the intended application being very low bit rate coding for transmission over low cost phone lines. The method exploits both the characteristic noise and the quasi-periodic nature of the signal. Data compression ratios between 250:1 and 1000:1 are shown to be possible, which is sufficient for transmission over ISDN and conventional phone lines. Preliminary results show this approach to be promising for remote ultrasound examinations.
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2012-01-01
This paper presents the implementation of a gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp-edged gust and comparing with the theoretical result. The present simulations are also compared with other CFD gust simulations. This paper additionally serves as a user's manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced order gust model, using a gust with a Gaussian profile in the FUN3D code, is presented. ARMA-simulated results for a sequence of one-minus-cosine gusts are shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time-varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with a reduced order model and compared with a direct simulation of the system in the FUN3D code. The two results are found to agree very well.
NASA Astrophysics Data System (ADS)
Sanchez, Gustavo; Saldanha, Mário; Balota, Gabriel; Zatt, Bruno; Porto, Marcelo; Agostini, Luciano
2015-03-01
We present a complexity reduction scheme for the depth map intraprediction of three-dimensional high-efficiency video coding (3-D-HEVC). The 3-D-HEVC introduces a new set of tools specific to depth map coding, adding complexity to intraprediction, which results in new challenges in terms of complexity reduction. Therefore, we present DMMFast (depth modeling modes fast prediction), a scheme composed of two new algorithms: the simplified edge detector (SED) and the gradient-based mode one filter (GMOF). The SED anticipates the blocks that are likely to be better predicted by the traditional intramodes, avoiding the evaluation of DMMs. The GMOF applies a gradient-based filter at the borders of the block and predicts the best positions at which to evaluate DMM 1. Software evaluations showed that DMMFast is capable of achieving a time saving of 11.9% on depth map intraprediction, considering the random access mode, without affecting the quality of the synthesized views. Considering the all-intra configuration, the proposed scheme is capable of achieving, on average, a time saving of 35% for the whole encoder. Subjective quality assessment was also performed, showing that the proposed technique introduces only minimal quality losses in the final encoded video.
Template based illumination compensation algorithm for multiview video coding
NASA Astrophysics Data System (ADS)
Li, Xiaoming; Jiang, Lianlian; Ma, Siwei; Zhao, Debin; Gao, Wen
2010-07-01
Recently, the multiview video coding (MVC) standard has been finalized as an extension of H.264/AVC by the Joint Video Team (JVT). In the Joint Multiview Video Model (JMVM) project for the standardization, illumination compensation (IC) was adopted as a useful tool. In this paper, a novel template-based illumination compensation algorithm is proposed. The basic idea of the algorithm is that the illumination of the current block has a strong correlation with that of its adjacent template. Based on this idea, a template-based illumination compensation method is first presented, and then a template model selection strategy is devised to improve the illumination compensation performance. The experimental results show that the proposed algorithm can improve the coding efficiency significantly.
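A common form of template-based IC derives an offset from already-reconstructed neighbouring pixels, so nothing extra needs to be signalled to the decoder; the sketch below is a generic version of that idea with hypothetical names, and does not reproduce the paper's template model selection strategy:

```python
import numpy as np

def ic_offset(cur_template, ref_template):
    """Illumination offset estimated from the reconstructed pixels neighbouring
    the current block (its template) and the corresponding reference template."""
    return float(np.mean(cur_template) - np.mean(ref_template))

def compensated_reference(ref_block, cur_template, ref_template):
    """Adjust the inter-view reference block by the template-derived offset
    before the prediction residual is formed; the decoder can repeat the
    same computation, so the offset is never transmitted."""
    return ref_block + ic_offset(cur_template, ref_template)
```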
NASA Technical Reports Server (NTRS)
Meyer, Harold D.
1999-01-01
This second volume of Acoustic Scattering by Three-Dimensional Stators and Rotors Using the SOURCE3D Code provides the scattering plots referenced by Volume 1. There are 648 plots. Half are for the 8750 rpm "high speed" operating condition and the other half are for the 7031 rpm "mid speed" operating condition.
NASA Astrophysics Data System (ADS)
Meléndez, A.; Korenaga, J.; Sallarès, V.; Ranero, C. R.
2012-04-01
We present the development state of tomo3d, a code for three-dimensional refraction and reflection travel-time tomography of wide-angle seismic data based on the previous two-dimensional version of the code, tomo2d. The core of both forward and inverse problems is inherited from the 2-D version. The ray tracing is performed by a hybrid method combining the graph and bending methods. The graph method finds an ordered array of discrete model nodes, which satisfies Fermat's principle, that is, whose corresponding travel time is a global minimum within the space of discrete nodal connections. The bending method is then applied to produce a more accurate ray path by using the nodes as support points for an interpolation with beta-splines. Travel time tomography is formulated as an iterative linearized inversion, and each step is solved using an LSQR algorithm. In order to avoid the singularity of the sensitivity kernel and to reduce the instability of inversion, regularization parameters are introduced in the inversion in the form of smoothing and damping constraints. Velocity models are built as 3-D meshes, and velocity values at intermediate locations are obtained by trilinear interpolation within the corresponding pseudo-cubic cell. Meshes are sheared to account for topographic relief. A floating reflector is represented by a 2-D grid, and depths at intermediate locations are calculated by bilinear interpolation within the corresponding square cell. The trade-off between the resolution of the final model and the associated computational cost is controlled by the relation between the selected forward star for the graph method (i.e. the number of nodes that each node considers as its neighbors) and the refinement of the velocity mesh. Including reflected phases is advantageous because it provides a better coverage and allows us to define the geometry of those geological interfaces with velocity contrasts sharp enough to be observed on record sections. The code also
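The trilinear velocity interpolation mentioned above can be sketched as follows; the unit-cell local coordinates and corner ordering are assumptions:

```python
def trilinear(c, x, y, z):
    """Trilinear interpolation inside a unit cell.

    c[i][j][k] holds the eight corner values (i, j, k in {0, 1});
    x, y, z in [0, 1] are the local coordinates of the query point."""
    def lerp(a, b, t):
        return a + (b - a) * t
    c00 = lerp(c[0][0][0], c[1][0][0], x)   # collapse the x axis first
    c10 = lerp(c[0][1][0], c[1][1][0], x)
    c01 = lerp(c[0][0][1], c[1][0][1], x)
    c11 = lerp(c[0][1][1], c[1][1][1], x)
    return lerp(lerp(c00, c10, y), lerp(c01, c11, y), z)
```

The same nesting with one axis removed gives the bilinear interpolation used for the 2-D reflector grid.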
Low complexity video coding using SMPTE VC-2
NASA Astrophysics Data System (ADS)
Borer, Tim
2013-09-01
Low complexity video coding addresses different applications from, and is complementary to, video coding for delivery to the end user. Delivery codecs, such as the MPEG/ITU standards, provide very high compression ratios, but require high complexity and high latency. Some applications, by contrast, need the opposite characteristics of low complexity and low latency at low compression ratios. This paper discusses the applications and requirements of low complexity coding and, after discussing the prior art, describes the standard VC-2 (SMPTE 2042) codec, which is a wavelet codec designed for low complexity and ultra-low latency. VC-2 provides a wide range of coding parameters and compression ratios, allowing it to address applications such as texture coding, lossless coding and high dynamic range coding. In particular this paper describes the results for the low complexity coding parameters of 2- and 3-level Haar and LeGall wavelet kernels, for image regions of 4x4 and 8x8 pixels, with both luma/color difference signals and RGB. The paper indicates the quality that may be achieved at various compression ratios and also clearly shows the benefit of coding luma and color components rather than RGB.
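One level of the Haar kernel reduces to pairwise averages and differences, which is the source of the low complexity; the sketch below is a generic floating-point Haar level, not the bit-exact VC-2 lifting filter:

```python
import numpy as np

def haar_1d(x, axis):
    """One Haar level along an axis: pairwise averages (low-pass) and
    pairwise differences (high-pass); only adds, subtracts and halving."""
    even = np.take(x, list(range(0, x.shape[axis], 2)), axis=axis)
    odd = np.take(x, list(range(1, x.shape[axis], 2)), axis=axis)
    return (even + odd) / 2.0, (even - odd) / 2.0

def haar_2d(img):
    """One 2-D Haar level: rows then columns, yielding LL, LH, HL, HH subbands."""
    lo, hi = haar_1d(img, axis=1)
    ll, lh = haar_1d(lo, axis=0)
    hl, hh = haar_1d(hi, axis=0)
    return ll, lh, hl, hh
```

A second level would be applied to the LL subband only, matching the 2- and 3-level configurations evaluated in the paper.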
Modeling and Analysis of a Lunar Space Reactor with the Computer Code RELAP5-3D/ATHENA
Carbajo, Juan J; Qualls, A L
2008-01-01
The transient analysis 3-dimensional (3-D) computer code RELAP5-3D/ATHENA has been employed to model and analyze a space reactor of 180 kW(thermal), 40 kW (net, electrical) with eight Stirling engines (SEs). Each SE will generate over 6 kWe; the excess power will be needed for the pumps and other power management devices. The reactor will be cooled by NaK (a eutectic mixture of sodium and potassium which is liquid at ambient temperature). This space reactor is intended to be deployed on the surface of the Moon or Mars. The reactor operating life will be 8 to 10 years. The RELAP5-3D/ATHENA code is being developed and maintained by Idaho National Laboratory. The code can employ a variety of coolants in addition to water, the original coolant employed in early versions of the code. The code can also use 3-D volumes and 3-D junctions, thus allowing for more realistic representation of complex geometries. A combination of 3-D and 1-D volumes is employed in this study. The space reactor model consists of a primary loop and two secondary loops connected by two heat exchangers (HXs). Each secondary loop provides heat to four SEs. The primary loop includes the nuclear reactor with the lower and upper plena, the core with 85 fuel pins, and two vertical HXs. The maximum coolant temperature of the primary loop is 900 K. The secondary loops also employ NaK as a coolant, at a maximum temperature of 877 K. The SE heads are at a temperature of 800 K and the cold sinks are at a temperature of ~400 K. Two radiators will be employed to remove heat from the SEs. The SE HXs surrounding the SE heads are of annular design and have been modeled using 3-D volumes. These 3-D models have been used to improve the HX design by optimizing the coolant flows and maximizing the heat transferred to the SE heads. The transients analyzed include failure of one or more Stirling engines, trip of the reactor pump, and trips of the secondary loop pumps feeding the HXs of the
NASA Astrophysics Data System (ADS)
Miensopust, Marion P.; Queralt, Pilar; Jones, Alan G.; 3D MT modellers
2013-06-01
Over the last half decade the need for, and importance of, three-dimensional (3-D) modelling of magnetotelluric (MT) data have increased dramatically and various 3-D forward and inversion codes are in use and some have become commonly available. Comparison of forward responses and inversion results is an important step for code testing and validation prior to `production' use. The various codes use different mathematical approximations to the problem (finite differences, finite elements or integral equations), various orientations of the coordinate system, different sign conventions for the time dependence and various inversion strategies. Additionally, the obtained results are dependent on data analysis, selection and correction as well as on the chosen mesh, inversion parameters and regularization adopted, and therefore, a careful and knowledge-based use of the codes is essential. In 2008 and 2011, during two workshops at the Dublin Institute for Advanced Studies over 40 people from academia (scientists and students) and industry from around the world met to discuss 3-D MT inversion. These workshops brought together a mix of code writers as well as code users to assess the current status of 3-D modelling, to compare the results of different codes, and to discuss and think about future improvements and new aims in 3-D modelling. To test the numerical forward solutions, two 3-D models were designed to compare the responses obtained by different codes and/or users. Furthermore, inversion results of these two data sets and two additional data sets obtained from unknown models (secret models) were also compared. In this manuscript the test models and data sets are described (supplementary files are available) and comparisons of the results are shown. Details regarding the used data, forward and inversion parameters as well as computational power are summarized for each case, and the main discussion points of the workshops are reviewed. In general, the responses
Layered Low-Density Generator Matrix Codes for Super High Definition Scalable Video Coding System
NASA Astrophysics Data System (ADS)
Tonomura, Yoshihide; Shirai, Daisuke; Nakachi, Takayuki; Fujii, Tatsuya; Kiya, Hitoshi
In this paper, we introduce layered low-density generator matrix (layered-LDGM) codes for super high definition (SHD) scalable video systems. The layered-LDGM codes maintain the correspondence relationship of each layer from the encoder side to the decoder side. The resulting structure supports partial decoding. Furthermore, the proposed layered-LDGM codes create highly efficient forward error correction (FEC) data by considering the relationship between the scalable components. Therefore, the proposed layered-LDGM codes raise the probability of restoring the important components. Simulations show that the proposed layered-LDGM codes offer better error resiliency than the existing method, which creates FEC data for each scalable component independently. The proposed layered-LDGM codes support partial decoding and raise the probability of restoring the base component. These characteristics make them very suitable for scalable video coding systems.
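A minimal systematic LDGM encoder makes the "low-density generator matrix" idea concrete; the sparse random generator construction below is a generic illustration, not the layered construction of the paper:

```python
import numpy as np

def sparse_generator(n_parity, n_msg, row_weight, rng):
    """Random sparse binary generator: each parity bit checks 'row_weight'
    message bits (the low density is what keeps encoding cheap)."""
    G = np.zeros((n_parity, n_msg), dtype=np.uint8)
    for r in range(n_parity):
        G[r, rng.choice(n_msg, size=row_weight, replace=False)] = 1
    return G

def ldgm_encode(msg, G):
    """Systematic LDGM encoding: codeword = [message | parity],
    parity = G . msg (mod 2)."""
    return np.concatenate([msg, G.dot(msg) % 2])
```

A layered scheme would additionally let parity bits of higher layers check base-layer bits, which is what raises the recovery probability of the base component.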
Fullwave coupling to a 3D antenna code using Green's function formulation of wave-particle response
NASA Astrophysics Data System (ADS)
Wright, John; Bonoli, P. T.; Bilato, R.; Brambilla, M.; Maggiora, R.; Lancellotti, V.
2006-10-01
Using the fullwave code TORIC and the 3D antenna code TOPICA, we construct a complete linear system for the RF-driven plasma. The 3D finite element antenna code, TOPICA, requires an admittance, Y, for the plasma, where B = YE. In this work, TORIC was modified to allow excitation of the (Eη, Eζ) electric field components at the plasma surface, corresponding to a single poloidal and toroidal mode number combination (m, n). This leads to the tensor response Y_n = [[Y_ηη, Y_ηζ], [Y_ζη, Y_ζζ]], where each of the Y_n submatrices is N_m in size. It is shown that the admittance matrix is equivalent to a Green's function calculation for the fullwave system, and the net work required is less than twice that of a single fullwave calculation. The admittance calculation is used with the loading calculation from TOPICA to construct self-consistent plasma and antenna currents.
NASA Astrophysics Data System (ADS)
Ceccuzzi, Silvio; Maggiora, Riccardo; Milanesio, Daniele; Mirizzi, Francesco; Panaccione, Luigi
2011-12-01
The present work compares and experimentally validates the results from the following three Lower Hybrid (LH) coupling codes: the Brambilla code (M. Brambilla), GRILL3D-U (Mikhail Irzak, A. F. Ioffe Physico-Technical Institute, Russia) and TOPLHA (Politecnico di Torino, Italy). The conventional grill antenna, operating in FTU in different scenarios, is used as the benchmark. The validation against experimental data is carried out with respect to the average reflection coefficients at the input of a row of the grill, considering two different phasings between adjacent waveguides: -90° and -75°. A comparison between calculated power spectra is also presented. Good agreement between experimental data and codes can be observed for all the simulated plasma profiles and waveguide phasings, in particular for the most recent numerical tools, namely GRILL3D-U and TOPLHA.
J. D. Hales; D. M. Perez; R. L. Williamson; S. R. Novascone; B. W. Spencer
2013-03-01
BISON is a modern finite-element based nuclear fuel performance code that has been under development at the Idaho National Laboratory (USA) since 2009. The code is applicable to both steady and transient fuel behaviour and is used to analyse either 2D axisymmetric or 3D geometries. BISON has been applied to a variety of fuel forms including LWR fuel rods, TRISO-coated fuel particles, and metallic fuel in both rod and plate geometries. Code validation is currently in progress, principally by comparison to instrumented LWR fuel rods. Halden IFA experiments constitute a large percentage of the current BISON validation base. The validation emphasis here is centreline temperatures at the beginning of fuel life, with comparisons made to seven rods from the IFA-431 and 432 assemblies. The principal focus is IFA-431 Rod 4, which included concentric and eccentrically located fuel pellets. This experiment provides an opportunity to explore 3D thermomechanical behaviour and assess the 3D simulation capabilities of BISON. Analysis results agree with experimental results showing lower fuel centreline temperatures for eccentric fuel with the peak temperature shifted from the centreline. The comparison confirms with modern 3D analysis tools that the measured temperature difference between concentric and eccentric pellets is not an artefact and provides a quantitative explanation for the difference.
2D virtual texture on 3D real object with coded structured light
NASA Astrophysics Data System (ADS)
Molinier, Thierry; Fofi, David; Salvi, Joaquim; Gorria, Patrick
2008-02-01
Augmented reality is used to improve color segmentation on the human body or on precious artifacts that cannot be touched. We propose a technique to project a synthesized texture onto a real object without contact. Our technique can be used in medical or archaeological applications. By projecting a suitable set of light patterns onto the surface of a 3D real object and by capturing images with a camera, a large number of correspondences can be found and the 3D points can be reconstructed. We aim to determine these points of correspondence between the cameras and the projector from a scene without explicit points and normals. We then project an adjusted texture onto the real object surface. We propose a global and automatic method to virtually texture a 3D real object.
Dynamic algorithm for correlation noise estimation in distributed video coding
NASA Astrophysics Data System (ADS)
Thambu, Kuganeswaran; Fernando, Xavier; Guan, Ling
2010-01-01
Low complexity encoders at the expense of high complexity decoders are advantageous in wireless video sensor networks. Distributed video coding (DVC) achieves this complexity balance: the receiver computes side information (SI) by interpolating the key frames, and the side information is modeled as a noisy version of the input video frame. In practice, correlation noise estimation at the receiver is a complex problem, and currently the noise is estimated from the residual variance between pixels of the key frames; this fixed variance estimate is then used to calculate the bit-metric values. In this paper, we introduce a new variance estimation technique that relies on the bit pattern of each pixel and is calculated dynamically over the entire motion environment, which helps to compute the soft-value information required by the decoder. Our results show that the proposed bit-based dynamic variance estimation significantly improves the peak signal-to-noise ratio (PSNR) performance.
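The baseline (static) estimate described above can be sketched as follows, assuming a Laplacian correlation-noise model and a half-residual convention; the paper's bit-pattern-based dynamic refinement is not reproduced here:

```python
import numpy as np

def laplacian_alpha(key_prev, key_next):
    """Static correlation-noise estimate: the residual between the two key
    frames is used as a proxy for the side-information error, and a Laplacian
    with variance s2 has parameter alpha = sqrt(2 / s2)."""
    residual = (key_prev.astype(float) - key_next.astype(float)) / 2.0
    s2 = np.mean(residual ** 2)
    return np.sqrt(2.0 / s2)
```

The decoder uses alpha to weight how strongly it trusts each side-information pixel when computing soft inputs (bit metrics) for channel decoding.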
A Watermarking Scheme for High Efficiency Video Coding (HEVC)
Swati, Salahuddin; Hayat, Khizar; Shahid, Zafar
2014-01-01
This paper presents a high payload watermarking scheme for High Efficiency Video Coding (HEVC). HEVC is an emerging video compression standard that provides better compression performance as compared to its predecessor, i.e. H.264/AVC. Considering that HEVC may well be used in a variety of applications in the future, the proposed algorithm has high potential for use in applications involving broadcast and the hiding of metadata. The watermark is embedded into the Quantized Transform Coefficients (QTCs) during the encoding process. Later, during the decoding process, the embedded message can be detected and extracted completely. The experimental results show that the proposed algorithm does not significantly affect the video quality, nor does it escalate the bitrate. PMID:25144455
A robust low-rate coding scheme for packet video
NASA Technical Reports Server (NTRS)
Chen, Y. C.; Sayood, Khalid; Nelson, D. J.; Arikan, E. (Editor)
1991-01-01
Due to the rapidly evolving fields of image processing and networking, video information promises to be an important part of telecommunication systems. Although up to now video transmission has been carried mainly over circuit-switched networks, it is likely that packet-switched networks will dominate the communication world in the near future. Asynchronous transfer mode (ATM) techniques in broadband ISDN can provide a flexible, independent and high performance environment for video communication. In this work, the network simulator was used only as a channel. Mixture block coding with progressive transmission (MBCPT) has been investigated for use over packet networks and has been found to provide a high compression ratio with good visual performance, robustness to packet loss, tractable integration with network mechanics, and simplicity in parallel implementation.
TART97 a coupled neutron-photon 3-D, combinatorial geometry Monte Carlo transport code
Cullen, D.E.
1997-11-22
TART97 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.
Simulations of 3D LPI's relevant to IFE using the PIC code OSIRIS
NASA Astrophysics Data System (ADS)
Tsung, F. S.; Mori, W. B.; Winjum, B. J.
2014-10-01
We will study three-dimensional effects of laser plasma instabilities, including backward Raman scattering, the high frequency hybrid instability, and the two plasmon instability, using OSIRIS in 3D Cartesian geometry and 2D cylindrical OSIRIS with azimuthal mode decompositions. With our new capabilities we hope to demonstrate that we are capable of studying single speckle physics relevant to IFE in an efficient manner.
Recent Hydrodynamics Improvements to the RELAP5-3D Code
Richard A. Riemke; Cliff B. Davis; Richard.R. Schultz
2009-07-01
The hydrodynamics section of the RELAP5-3D computer program has been recently improved. Changes were made as follows: (1) improved turbine model, (2) spray model for the pressurizer model, (3) feedwater heater model, (4) radiological transport model, (5) improved pump model, and (6) compressor model.
Finite Element Code For 3D-Hydraulic Fracture Propagation Equations (3-layer).
1992-03-24
HYFRACP3D is a finite element program for simulation of pseudo-three-dimensional fracture geometries with a two-dimensional planar solution. The model predicts the height, width and wing length over time for a hydraulic fracture propagating in a three-layered system of rocks with variable rock mechanics properties.
Robust video transmission with distributed source coded auxiliary channel.
Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan
2009-12-01
We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints. PMID:19703801
Reactor Dosimetry Applications Using RAPTOR-M3G:. a New Parallel 3-D Radiation Transport Code
NASA Astrophysics Data System (ADS)
Longoni, Gianluca; Anderson, Stanwood L.
2009-08-01
The numerical solution of the Linearized Boltzmann Equation (LBE) via the Discrete Ordinates (SN) method requires extensive computational resources for large 3-D neutron and gamma transport applications, due to the concurrent discretization of the angular, spatial, and energy domains. This paper will discuss the development of RAPTOR-M3G (RApid Parallel Transport Of Radiation - Multiple 3D Geometries), a new 3-D parallel radiation transport code, and its application to the calculation of ex-vessel neutron dosimetry responses in the cavity of a commercial 2-loop Pressurized Water Reactor (PWR). RAPTOR-M3G is based on domain decomposition algorithms, where the spatial and angular domains are allocated and processed on multi-processor computer architectures. As compared to traditional single-processor applications, this approach reduces the computational load as well as the memory requirement per processor, yielding an efficient solution methodology for large 3-D problems. Measured neutron dosimetry responses in the reactor cavity air gap will be compared to the RAPTOR-M3G predictions. This paper is organized as follows: Section 1 discusses the RAPTOR-M3G methodology; Section 2 describes the 2-loop PWR model and the numerical results obtained; Section 3 addresses the parallel performance of the code; and Section 4 concludes the paper with final remarks and future work.
Motion Information Inferring Scheme for Multi-View Video Coding
NASA Astrophysics Data System (ADS)
Koo, Han-Suh; Jeon, Yong-Joon; Jeon, Byeong-Moon
This letter proposes a motion information inferring scheme for multi-view video coding, motivated by the observation that motion vectors at corresponding positions in a neighboring view pair are quite similar. The proposed method infers the motion information from the corresponding macroblock in the neighboring view, after RD optimization against the existing prediction modes. This letter presents an evaluation showing that the method significantly enhances coding efficiency, especially at high bit rates.
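A minimal sketch of the inference idea, under the assumption that the corresponding macroblock is located by a simple horizontal disparity offset; the names and the lookup scheme are hypothetical, not taken from the actual proposal:

```python
# Hypothetical sketch: borrow the motion vector of the corresponding
# macroblock in an already-coded neighboring view instead of signaling one.
# A single macroblock-level global disparity is an illustrative simplification.

def infer_motion_vector(neighbor_mvs, mb_x, mb_y, disparity_in_mbs):
    """neighbor_mvs maps (mb_x, mb_y) -> (dx, dy) in the neighboring view."""
    corresponding = (mb_x + disparity_in_mbs, mb_y)
    # None means no usable correspondence: fall back to normal RD-tested modes.
    return neighbor_mvs.get(corresponding)

neighbor_mvs = {(3, 2): (4, -1), (4, 2): (4, 0)}
mv = infer_motion_vector(neighbor_mvs, mb_x=1, mb_y=2, disparity_in_mbs=2)
# mv == (4, -1): motion borrowed from macroblock (3, 2) of the neighboring view
```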
Huang, Junhui; Xue, Qi; Wang, Zhao; Gao, Jianmin
2016-01-01
While color-coding methods have improved the measuring efficiency of structured light three-dimensional (3D) measurement systems, they decrease the measuring accuracy significantly due to lateral chromatic aberration (LCA). In this study, the LCA in a structured light measurement system is analyzed, and a method is proposed to compensate for the error caused by LCA. Firstly, based on the projective transformation, a 3D error map of LCA is constructed in the projector images by using a flat board and comparing the image coordinates of red, green and blue circles with the coordinates of white circles at preselected sample points within the measurement volume. The 3D map consists of the errors, which are the equivalent errors caused by LCA of the camera and projector. Then, in measurements, error values of LCA are calculated and compensated to correct the projector image coordinates through the 3D error map and a tri-linear interpolation method. Eventually, 3D coordinates with higher accuracy are re-calculated according to the compensated image coordinates. The effectiveness of the proposed method is verified experimentally. PMID:27598174
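The tri-linear interpolation step over the 3D error map can be sketched as follows; the grid layout and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Hedged sketch of tri-linear interpolation: given a regular 3-D grid of
# pre-measured LCA error values, interpolate the correction at an arbitrary
# point inside the measurement volume.

def trilinear(error_map, x, y, z):
    """error_map: array of shape (NX, NY, NZ); (x, y, z) in grid coordinates."""
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    dx, dy, dz = x - x0, y - y0, z - z0
    c = error_map[x0:x0 + 2, y0:y0 + 2, z0:z0 + 2]
    # Interpolate along x, then y, then z.
    c = c[0] * (1 - dx) + c[1] * dx
    c = c[0] * (1 - dy) + c[1] * dy
    return c[0] * (1 - dz) + c[1] * dz

grid = np.arange(27, dtype=float).reshape(3, 3, 3)  # toy "error map"
# At a grid node the interpolant reproduces the stored value exactly,
# and it is exact for any field that is linear in x, y, z.
```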
Preliminary Results from the Application of Automated Adjoint Code Generation to CFL3D
NASA Technical Reports Server (NTRS)
Carle, Alan; Fagan, Mike; Green, Lawrence L.
1998-01-01
This report describes preliminary results obtained using an automated adjoint code generator for Fortran to augment a widely-used computational fluid dynamics flow solver to compute derivatives. These preliminary results with this augmented code suggest that, even in its infancy, the automated adjoint code generator can accurately and efficiently deliver derivatives for use in transonic Euler-based aerodynamic shape optimization problems with hundreds to thousands of independent design variables.
NASA Astrophysics Data System (ADS)
Medley, S. S.; Liu, D.; Gorelenkova, M. V.; Heidbrink, W. W.; Stagner, L.
2016-02-01
A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly, since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint, as is actually the case. The 3D halo neutral code uses a ‘beam-in-a-box’ model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first-generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of atomic physics database.
Picturewise inter-view prediction selection for multiview video coding
NASA Astrophysics Data System (ADS)
Huo, Junyan; Chang, Yilin; Li, Ming; Yang, Haitao
2010-11-01
Inter-view prediction is introduced in multiview video coding (MVC) to exploit the inter-view correlation. Statistical analyses show that the coding gain from inter-view prediction is unequal among pictures. On the basis of this observation, a picturewise inter-view prediction selection scheme is proposed. This scheme employs a novel inter-view prediction selection criterion to determine whether it is necessary to apply inter-view prediction to the current coding picture. This criterion is derived from the available coding information of the temporal reference pictures. Experimental results show that the proposed scheme can improve the performance of MVC with a comprehensive consideration of compression efficiency, computational complexity, and random access ability.
Distributed Coding/Decoding Complexity in Video Sensor Networks
Cordeiro, Paulo J.; Assunção, Pedro
2012-01-01
Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972
Improving Intra Prediction in High-Efficiency Video Coding.
Chen, Haoming; Zhang, Tao; Sun, Ming-Ting; Saxena, Ankur; Budagavi, Madhukar
2016-08-01
Intra prediction is an important tool in intra-frame video coding to reduce spatial redundancy. In the current coding standard, H.265/high-efficiency video coding (HEVC), a copying-based method based on the boundary (or interpolated boundary) reference pixels is used to predict each pixel in the coding block to remove the spatial redundancy. We find that the conventional copying-based method can be further improved in two cases: 1) the boundary has an inhomogeneous region and 2) the predicted pixel is so far away from the boundary that the correlation between the predicted pixel and the reference pixels is relatively weak. This paper performs a theoretical analysis of the optimal weights based on a first-order Gaussian Markov model, covering the effects when the pixel values deviate from the model and when the predicted pixel is far away from the reference pixels. It also proposes a novel intra prediction scheme, based on this analysis, in which smoothing the copying-based prediction yields a better prediction block. Both the theoretical analysis and the experimental results show the effectiveness of the proposed intra prediction method. An average gain of 2.3% on all-intra coding can be achieved with the HEVC reference software. PMID:27249831
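As a rough illustration of the second observation (weakening correlation away from the boundary), the toy sketch below blends a copy-based vertical prediction toward the reference mean with a distance-decaying weight. The decay model and weights are assumptions for illustration, not the optimal weights derived in the paper:

```python
import numpy as np

# Illustrative sketch (not the paper's exact scheme): a copy-based vertical
# intra prediction blended toward the reference mean, with the copy weight
# decaying as the predicted row moves away from the boundary.

def smoothed_vertical_pred(top_refs, n_rows, rho=0.9):
    top_refs = np.asarray(top_refs, dtype=float)
    dc = top_refs.mean()                 # fallback value far from the boundary
    rows = []
    for r in range(1, n_rows + 1):
        w = rho ** r                     # correlation decays with distance
        rows.append(w * top_refs + (1 - w) * dc)
    return np.vstack(rows)

block = smoothed_vertical_pred([100, 120, 140, 160], n_rows=4)
# Rows near the boundary track the references; distant rows drift toward the mean.
```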
Implementation of a 3D mixing layer code on parallel computers
NASA Technical Reports Server (NTRS)
Roe, K.; Thakur, R.; Dang, T.; Bogucz, E.
1995-01-01
This paper summarizes our progress and experience in the development of a Computational Fluid Dynamics code on parallel computers to simulate three-dimensional spatially-developing mixing layers. In this initial study, the three-dimensional time-dependent Euler equations are solved using a finite-volume explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers and then converted for use on parallel computers using the conventional message-passing technique; however, we have not yet been able to compile the code with the present version of HPF compilers.
Assessment of 3D Codes for Predicting Liner Attenuation in Flow Ducts
NASA Technical Reports Server (NTRS)
Watson, W. R.; Nark, D. M.; Jones, M. G.
2008-01-01
This paper presents comparisons of seven propagation codes for predicting liner attenuation in ducts with flow. The selected codes span the spectrum of methods available (finite element, parabolic approximation, and pseudo-time domain) and are collectively representative of the state-of-art in the liner industry. These codes are included because they have two-dimensional and three-dimensional versions and can be exported to NASA's Columbia Supercomputer. The basic assumptions, governing differential equations, boundary conditions, and numerical methods underlying each code are briefly reviewed and an assessment is performed based on two predefined metrics. The two metrics used in the assessment are the accuracy of the predicted attenuation and the amount of wall clock time to predict the attenuation. The assessment is performed over a range of frequencies, mean flow rates, and grazing flow liner impedances commonly used in the liner industry. The primary conclusions of the study are (1) predicted attenuations are in good agreement for rigid wall ducts, (2) the majority of codes compare well to each other and to approximate results from mode theory for soft wall ducts, (3) most codes compare well to measured data on a statistical basis, (4) only the finite element codes with cubic Hermite polynomials capture extremely large attenuations, and (5) wall clock time increases by an order of magnitude or more are observed for a three-dimensional code relative to the corresponding two-dimensional version of the same code.
Gagner, Renata; Lafitte, Helene; Dormeau, Pascal; Stoudt, Roger H.
2004-07-01
Anticipated Transients Without Scram (ATWS) accident analyses form part of the Safety Analysis Report of the European Pressurized Water Reactor (EPR), covering Risk Reduction Category A (RRC-A, core melt prevention) events. This paper deals with three of the most penalizing RRC-A sequences of ATWS caused by mechanical blockage of the control/shutdown rods, regarding their consequences for Reactor Coolant System (RCS) and core integrity. A new 3D code internal coupling calculation method has been introduced. (authors)
Finite Orbit Width versions of the CQL3D code: Hybrid-FOW and Full-FOW
NASA Astrophysics Data System (ADS)
Petrov, Yu. V.; Harvey, R. W.
2012-10-01
Finite-Orbit-Width (FOW) effects are being added to the CQL3D bounce-averaged Fokker-Planck code [1] using two main options. In the Hybrid-FOW option, partial FOW capabilities are implemented which add FOW features to the particle source (NB) operator, the RF quasilinear operator, diagnostics, and guiding-center orbit losses with gyro-radius correction. Collisions remain Zero-Orbit-Width (ZOW). The Hybrid-FOW version provides greatly improved agreement with signals measured by the NSTX Fast Ion Diagnostic [2]. Its advantage is that run time increases by only a factor of two compared to ZOW runs. The Full-FOW option further adds neoclassical radial transport features into the Fokker-Planck coding. The collisional coefficients are averaged along guiding-center orbits, with a proper transformation matrix from local coordinates to the midplane coordinates, where the FP equation is solved. All radial terms are included. The computations are parallelized in the velocity-grid index, typically using 128 CPU cores. We emphasize that this treatment includes nonthermal, full-orbit neoclassical theory, not a first-order correction. [1] R.W. Harvey and M. McCoy, "The CQL3D Fokker Planck Code," www.compxco.com/cql3d [2] R.W. Harvey, Yu. Petrov, D. Liu, W. Heidbrink, P. Bonoli, this mtg (2012)
NASA Astrophysics Data System (ADS)
Woodbury, D.; Kubota, S.; Johnson, I.
2014-10-01
Computer simulations of electromagnetic wave propagation in magnetized plasmas are an important tool for both plasma heating and diagnostics. For active millimeter-wave and microwave diagnostics, accurately modeling the evolution of the beam parameters for launched, reflected or scattered waves in a toroidal plasma requires that calculations be done using the full 3-D geometry. Previously, we reported on the application of GPGPU (General-Purpose computing on Graphics Processing Units) to a 3-D vacuum Maxwell code using the FDTD (Finite-Difference Time-Domain) method. Tests were done for Gaussian beam propagation with a hard source antenna, utilizing the parallel processing capabilities of the NVIDIA K20M. In the current study, we have modified the 3-D code to include a soft source antenna and an induced current density based on the cold plasma approximation. Results from Gaussian beam propagation in an inhomogeneous anisotropic plasma, along with comparisons to ray- and beam-tracing calculations will be presented. Additional enhancements, such as advanced coding techniques for improved speedup, will also be investigated. Supported by U.S. DoE Grant DE-FG02-99-ER54527 and in part by the U.S. DoE, Office of Science, WDTS under the Science Undergraduate Laboratory Internship program.
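For readers unfamiliar with the method, the core of an FDTD solver is a leapfrog of interleaved electric and magnetic fields; a minimal 1-D vacuum analogue (normalized units, soft source) is sketched below. This is a generic textbook illustration, not code from the study:

```python
import numpy as np

# Minimal 1-D vacuum FDTD leapfrog in normalized units (Courant number 1),
# with a soft Gaussian source: a generic sketch of the update at the heart
# of such codes. The 3-D cold-plasma version adds the remaining field
# components and an induced current density term in the E update.

def fdtd_1d(n_cells=200, n_steps=180, src=5):
    ez = np.zeros(n_cells)   # electric field
    hy = np.zeros(n_cells)   # magnetic field, staggered half a cell
    for t in range(n_steps):
        hy[:-1] += ez[1:] - ez[:-1]                   # H half-step
        ez[1:] += hy[1:] - hy[:-1]                    # E half-step
        ez[src] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft source
    return ez

ez = fdtd_1d()  # a Gaussian pulse has propagated away from the source cell
```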
ERIC Educational Resources Information Center
Sack, Jacqueline J.
2013-01-01
This article explicates the development of top-view numeric coding of 3-D cube structures within a design research project focused on 3-D visualization skills for elementary grades children. It describes children's conceptual development of 3-D cube structures using concrete models, conventional 2-D pictures and abstract top-view numeric…
The Monte Carlo SRNA-VOX code for 3D proton dose distribution in voxelized geometry using CT data
NASA Astrophysics Data System (ADS)
Ilic, Radovan D.; Spasic-Jokic, Vesna; Belicev, Petar; Dragovic, Milos
2005-03-01
This paper describes the application of the SRNA Monte Carlo package for proton transport simulations in complex geometry and different material compositions. The SRNA package was developed for 3D dose distribution calculation in proton therapy and dosimetry and it was based on the theory of multiple scattering. The decay of proton induced compound nuclei was simulated by the Russian MSDM model and our own using ICRU 63 data. The developed package consists of two codes: the SRNA-2KG, which simulates proton transport in combinatorial geometry and the SRNA-VOX, which uses the voxelized geometry using the CT data and conversion of the Hounsfield's data to tissue elemental composition. Transition probabilities for both codes are prepared by the SRNADAT code. The simulation of the proton beam characterization by multi-layer Faraday cup, spatial distribution of positron emitters obtained by the SRNA-2KG code and intercomparison of computational codes in radiation dosimetry, indicate immediate application of the Monte Carlo techniques in clinical practice. In this paper, we briefly present the physical model implemented in the SRNA package, the ISTAR proton dose planning software, as well as the results of the numerical experiments with proton beams to obtain 3D dose distribution in the eye and breast tumour. PMID:15798273
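The Hounsfield-to-material conversion mentioned above is, in codes of this type, typically a piecewise mapping from CT number to density and tissue class. The sketch below is a generic, hedged illustration with invented breakpoints; it is not the actual SRNA-VOX calibration:

```python
# Hedged sketch of a Hounsfield-unit lookup: piecewise-linear mass density
# and a coarse tissue classification. All breakpoints are illustrative.

def hu_to_density(hu):
    """Rough mass density in g/cm^3 from a CT number."""
    if hu < -950:
        return 0.001                      # air
    if hu < 100:
        # soft-tissue range: linear around water (HU = 0 -> 1.0 g/cm^3)
        return 1.0 + hu / 1000.0
    # bone-like range: shallower slope, continuous at HU = 100
    return 1.1 + (hu - 100) / 1600.0

def hu_to_tissue(hu):
    if hu < -950:
        return "air"
    if hu < -100:
        return "lung"
    if hu < 100:
        return "soft tissue"
    return "bone"
```

In a voxelized transport code, each voxel's HU value would be mapped this way once, and the resulting density and elemental composition drive the transition probabilities.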
Heat Transfer Boundary Conditions in the RELAP5-3D Code
Richard A. Riemke; Cliff B. Davis; Richard R. Schultz
2008-05-01
The heat transfer boundary conditions used in the RELAP5-3D computer program have evolved over the years. Currently, RELAP5-3D has the following options for the heat transfer boundary conditions: (a) heat transfer correlation package option, (b) non-convective option (from radiation/conduction enclosure model or symmetry/insulated conditions), and (c) other options (setting the surface temperature to a volume fraction averaged fluid temperature of the boundary volume, obtaining the surface temperature from a control variable, obtaining the surface temperature from a time-dependent general table, obtaining the heat flux from a time-dependent general table, or obtaining heat transfer coefficients from either a time- or temperature-dependent general table). These options will be discussed, including the more recent ones.
Numerical Simulation of Two-grid Ion Optics Using a 3D Code
NASA Technical Reports Server (NTRS)
Anderson, John R.; Katz, Ira; Goebel, Dan
2004-01-01
A three-dimensional ion optics code has been developed under NASA's Project Prometheus to model two grid ion optics systems. The code computes the flow of positive ions from the discharge chamber through the ion optics and into the beam downstream of the thruster. The rate at which beam ions interact with background neutral gas to form charge exchange ions is also computed. Charge exchange ion trajectories are computed to determine where they strike the ion optics grid surfaces and to determine the extent of sputter erosion they cause. The code has been used to compute predictions of the erosion pattern and wear rate on the NSTAR ion optics system; the code predicts the shape of the eroded pattern but overestimates the initial wear rate by about 50%. An example of use of the code to estimate the NEXIS thruster accelerator grid life is also presented.
Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes
Frambati, S.; Frignani, M.
2012-07-01
We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open-source file formats. These tools are aimed at bridging the gap between trusted, widely used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes, and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool for computer-aided design for radiation transport code users in the nuclear field, in particular in core design and radiation analysis. (authors)
Development of a 3-D upwind PNS code for chemically reacting hypersonic flowfields
NASA Technical Reports Server (NTRS)
Tannehill, J. C.; Wadawadigi, G.
1992-01-01
Two new parabolized Navier-Stokes (PNS) codes were developed to compute the three-dimensional, viscous, chemically reacting flow of air around hypersonic vehicles such as the National Aero-Space Plane (NASP). The first code (TONIC) solves the gas dynamic and species conservation equations in a fully coupled manner using an implicit, approximately-factored, central-difference algorithm. This code was upgraded to include shock fitting and the capability of computing the flow around complex body shapes. The revised TONIC code was validated by computing the chemically-reacting (M(sub infinity) = 25.3) flow around a 10 deg half-angle cone at various angles of attack and the Ames All-Body model at 0 deg angle of attack. The results of these calculations were in good agreement with the results from the UPS code. One of the major drawbacks of the TONIC code is that the central-differencing of fluxes across interior flowfield discontinuities tends to introduce errors into the solution in the form of local flow property oscillations. The second code (UPS), originally developed for a perfect gas, has been extended to permit either perfect gas, equilibrium air, or nonequilibrium air computations. The code solves the PNS equations using a finite-volume, upwind TVD method based on Roe's approximate Riemann solver that was modified to account for real gas effects. The dissipation term associated with this algorithm is sufficiently adaptive to flow conditions that, even when attempting to capture very strong shock waves, no additional smoothing is required. For nonequilibrium calculations, the code solves the fluid dynamic and species continuity equations in a loosely-coupled manner. This code was used to calculate the hypersonic, laminar flow of chemically reacting air over cones at various angles of attack. In addition, the flow around the McDonnell Douglas generic option blended-wing-body was computed and comparisons were made between the perfect gas, equilibrium air, and the
NASA Astrophysics Data System (ADS)
Barreyre, T.; Escartin, J.; Cannat, M.; Garcia, R. A.
2011-12-01
Seafloor imagery provides detailed and accurate constraints on the distribution, geometry, and nature of hydrothermal outflow, and on its links to the ecosystems it sustains. Repeated surveys allow us to evaluate the temporal variability of these systems. We present geo-referenced and co-registered photomosaics of the Lucky Strike hydrothermal field (Mid-Atlantic Ridge, 37°N), derived from >60,000 seafloor images acquired in 1996, 2006, 2008 and 2009 using deep-towed and ROV vehicles. Newly developed image processing techniques, specifically tailored to generate giga-mosaics in the underwater environment, include correction of illumination artifacts and removal of the edges between individual images so as to obtain a single continuous mosaic image over a surface of up to ~800x800 m with a pixel resolution of 5-10 mm. Photomosaicing is complemented by 3D reconstruction of hydrothermal edifices from video imagery, with mapping of image texture over the 3D model surface. These image and video data can also be directly linked with high-resolution microbathymetry acquired with near-bottom acoustic systems. Preliminary analysis of these mosaics reveals the distribution of low-temperature hydrothermal outflow, recognizable owing to its association with bacterial mats and hydrothermal deposits easily identifiable in the imagery. These low-temperature venting areas, often associated with high-temperature hydrothermal vents, are irregularly distributed throughout the site, defining clusters. In detail, the outflow geometry is largely controlled by the nature of the substrate (e.g., cracks and fissures, diffuse flow patches, existing hydrothermal constructs). The spatial relationships between high-temperature and diffuse venting, as revealed by the imagery, provide constraints on the shallow plumbing structure throughout the site. Imagery also provides constraints on temporal variability at two time-scales. First, we can identify changes in the distribution and presence of actively venting
A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals
NASA Astrophysics Data System (ADS)
Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.
1994-01-01
Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids and to control the rotodynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven year effort was established in 1990 by NASA's Office of Aeronautics Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. Aerospace Industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinates (BFC) systems, high order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.
A Coupled Neutron-Photon 3-D Combinatorial Geometry Monte Carlo Transport Code
1998-06-12
TART97 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time-dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly fast: if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.
Users manual for CAFE-3D : a computational fluid dynamics fire code.
Khalil, Imane; Lopez, Carlos; Suo-Anttila, Ahti Jorma
2005-03-01
The Container Analysis Fire Environment (CAFE) computer code has been developed to model all relevant fire physics for predicting the thermal response of massive objects engulfed in large fires. It provides realistic fire thermal boundary conditions for use in design of radioactive material packages and in risk-based transportation studies. The CAFE code can be coupled to commercial finite-element codes such as MSC PATRAN/THERMAL and ANSYS. This coupled system of codes can be used to determine the internal thermal response of finite element models of packages to a range of fire environments. This document is a user manual describing how to use the three-dimensional version of CAFE, as well as a description of CAFE input and output parameters. Since this is a user manual, only a brief theoretical description of the equations and physical models is included.
Video coding for next-generation surveillance systems
NASA Astrophysics Data System (ADS)
Klasen, Lena M.; Fahlander, Olov
1997-02-01
Video is used as a recording medium in surveillance systems, and increasingly by the Swedish Police Force. Methods for analyzing video using an image processing system have recently been introduced at the Swedish National Laboratory of Forensic Science, and new methods are the focus of a research project at Linköping University, Image Coding Group. The accuracy of the results of those forensic investigations often depends on the quality of the video recordings, and one of the major problems when analyzing videos from crime scenes is the poor quality of the recordings. Enhancing poor image quality might add manipulative or subjective effects and does not seem to be the right way of obtaining reliable analysis results. The surveillance systems in use today are mainly based on video techniques, VHS or S-VHS, and the weakest link is the video cassette recorder (VCR). Multiplexers that select one of many camera outputs for recording are another problem, as they often filter the video signal, and recording is limited to only one of the available cameras connected to the VCR. A way to get around the problem of poor recording is to record all camera outputs digitally and simultaneously. It is also very important to build such a system bearing in mind that image processing analysis methods are becoming more important as a complement to the human eye. Using one or more cameras produces a large amount of data, and the need for data compression is more than obvious. Crime scenes often involve persons or moving objects, and the available coding techniques are more or less useful. Our goal is to propose a possible system that is the best compromise with respect to what needs to be recorded, movements in the recorded scene, loss of information and resolution, etc., to secure efficient recording of the crime and enable forensic analysis. The preventive effect of having a well-functioning surveillance system and well-established image analysis methods is not to be neglected. Aspects of
PORTA: A Massively Parallel Code for 3D Non-LTE Polarized Radiative Transfer
NASA Astrophysics Data System (ADS)
Štěpán, J.
2014-10-01
The interpretation of the Stokes profiles of the solar (stellar) spectral line radiation requires solving a non-LTE radiative transfer problem that can be very complex, especially when the main interest lies in modeling the linear polarization signals produced by scattering processes and their modification by the Hanle effect. One of the main difficulties is due to the fact that the plasma of a stellar atmosphere can be highly inhomogeneous and dynamic, which implies the need to solve the non-equilibrium problem of the generation and transfer of polarized radiation in realistic three-dimensional stellar atmospheric models. Here we present PORTA, a computer program we have developed for solving, in three-dimensional (3D) models of stellar atmospheres, the problem of the generation and transfer of spectral line polarization taking into account anisotropic radiation pumping and the Hanle and Zeeman effects in multilevel atoms. The numerical method of solution is based on a highly convergent iterative algorithm, whose convergence rate is insensitive to the grid size, and on an accurate short-characteristics formal solver of the Stokes-vector transfer equation which uses monotonic Bézier interpolation. In addition to the iterative method and the 3D formal solver, another important feature of PORTA is a novel parallelization strategy suitable for taking advantage of massively parallel computers. Linear scaling of the solution with the number of processors allows the solution time to be reduced by several orders of magnitude. We present useful benchmarks and a few illustrations of applications using a 3D model of the solar chromosphere resulting from MHD simulations. Finally, we present our conclusions with a view to future research. For more details see Štěpán & Trujillo Bueno (2013).
Ultrafast vectorized multispin coding algorithm for the Monte Carlo simulation of the 3D Ising model
NASA Astrophysics Data System (ADS)
Wansleben, Stephan
1987-02-01
A new Monte Carlo algorithm for the 3D Ising model and its implementation on a CDC CYBER 205 are presented. This approach is applicable to lattices with sizes between 3·3·3 and 192·192·192 with periodic boundary conditions, and is adjustable to various kinetic models. It simulates a canonical ensemble at a given temperature, generating a new random number for each spin flip. For the Metropolis transition probability the speed is 27 ns per update on a two-pipe CDC CYBER 205 with 2 million words of physical memory, i.e. 1.35 times the cycle time per update, or 38 million updates per second.
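For orientation, the update rule being accelerated can be sketched as a plain scalar Metropolis sweep for the 3D Ising model with periodic boundaries; this is a minimal sketch, not the paper's vectorized multispin coding, which would pack many spins into the bits of one machine word and update them with bitwise operations.

```python
import math
import random

def ising3d_sweep(spins, L, beta, rng):
    """One Metropolis sweep over an L*L*L lattice with periodic boundaries.
    spins is a flat list of +1/-1; index (x, y, z) -> x + L*(y + L*z)."""
    for _ in range(L * L * L):
        x, y, z = rng.randrange(L), rng.randrange(L), rng.randrange(L)
        i = x + L * (y + L * z)
        # Sum of the six nearest neighbours (periodic wrap-around).
        nb = (spins[(x + 1) % L + L * (y + L * z)] +
              spins[(x - 1) % L + L * (y + L * z)] +
              spins[x + L * ((y + 1) % L + L * z)] +
              spins[x + L * ((y - 1) % L + L * z)] +
              spins[x + L * (y + L * ((z + 1) % L))] +
              spins[x + L * (y + L * ((z - 1) % L))])
        dE = 2.0 * spins[i] * nb           # energy change if spin i flips
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]           # Metropolis acceptance step

def magnetization(spins):
    return sum(spins) / len(spins)
```

At large beta (low temperature) an ordered start stays ordered, since a flip against six aligned neighbours is accepted with probability exp(-12 beta).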
NASA Astrophysics Data System (ADS)
Guo, Lilin; Zhou, Lunan; Tian, Xiang; Chen, Yaowu
2016-05-01
The three-dimensional high-efficiency video coding (3-D-HEVC) is an emerging compression standard for multiview video plus depth data. In addition to the quad-tree coding structure inherited from HEVC, some tools are integrated which significantly improve the coding efficiency but also result in remarkably high computational complexity. We propose a fast coding unit (CU) size decision algorithm for both depth and texture components in dependent views, where hole-filling maps created through view synthesis are utilized. First, after coding the base view, it is warped onto each dependent view via depth-image-based rendering, during which hole-filling maps are generated. Then, for depth in dependent views, CU splitting can be terminated early using the disoccluded information from the hole-filling maps; for texture in dependent views, combining the disoccluded information and the inter-view correlations, the CU partitioning process can also be accelerated. Experimental results show that the proposed algorithm achieves on average 54.3% time reduction, with a negligible Bjøntegaard delta bitrate increase of 0.15% on synthesized views, and a 0.05% increase on all the coded plus synthesized views, compared with the original encoding scheme in a 3-D-HEVC test model.
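The early-termination idea can be illustrated as a toy quad-tree recursion over a binary disocclusion (hole-filling) map; the function name, recursion shape, and hole-ratio threshold are illustrative assumptions, not values from the paper.

```python
def decide_cu_split(hole_map, x, y, size, depth, max_depth=3, hole_thresh=0.02):
    """Toy early-termination rule for CU splitting in a dependent view.
    hole_map is a 2D list of 0/1 flags marking disoccluded pixels produced
    by warping the base view (depth-image-based rendering)."""
    holes = sum(hole_map[y + j][x + i] for j in range(size) for i in range(size))
    hole_ratio = holes / float(size * size)
    # Regions with no disocclusion are well predicted from the base view,
    # so further splitting is terminated early.
    if hole_ratio < hole_thresh or depth >= max_depth:
        return [(x, y, size)]              # keep this CU whole
    half = size // 2                       # otherwise recurse quad-tree style
    return (decide_cu_split(hole_map, x, y, half, depth + 1, max_depth, hole_thresh) +
            decide_cu_split(hole_map, x + half, y, half, depth + 1, max_depth, hole_thresh) +
            decide_cu_split(hole_map, x, y + half, half, depth + 1, max_depth, hole_thresh) +
            decide_cu_split(hole_map, x + half, y + half, half, depth + 1, max_depth, hole_thresh))
```

A fully hole-free region is returned as a single undivided CU, which is where the encoding time is saved.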
Comparison of a 3-D GPU-Assisted Maxwell Code and Ray Tracing for Reflectometry on ITER
NASA Astrophysics Data System (ADS)
Gady, Sarah; Kubota, Shigeyuki; Johnson, Irena
2015-11-01
Electromagnetic wave propagation and scattering in magnetized plasmas are important diagnostics for high temperature plasmas. 1-D and 2-D full-wave codes are standard tools for measurements of the electron density profile and fluctuations; however, ray tracing results have shown that beam propagation in tokamak plasmas is inherently a 3-D problem. The GPU-Assisted Maxwell Code utilizes the FDTD (Finite-Difference Time-Domain) method for solving the Maxwell equations with the cold plasma approximation in a 3-D geometry. Parallel processing with GPGPU (General-Purpose computing on Graphics Processing Units) is used to accelerate the computation. Previously, we reported on initial comparisons of the code results to 1-D numerical and analytical solutions, where the size of the computational grid was limited by the on-board memory of the GPU. In the current study, this limitation is overcome by using domain decomposition and an additional GPU. As a practical application, this code is used to study the current design of the ITER Low Field Side Reflectometer (LSFR) for the Equatorial Port Plug 11 (EPP11). A detailed examination of Gaussian beam propagation in the ITER edge plasma will be presented, as well as comparisons with ray tracing. This work was made possible by funding from the Department of Energy for the Summer Undergraduate Laboratory Internship (SULI) program. This work is supported by the US DOE Contract No.DE-AC02-09CH11466 and DE-FG02-99-ER54527.
Fullwave coupling to a 3D antenna code using Green's function formulation of wave-particle response.
NASA Astrophysics Data System (ADS)
Wright, John; Bonoli, Paul; Brambilla, Marco; Lancelloti, Vito; Maggiora, Riccardo; Carter, Mark
2006-04-01
Using the fullwave code TORIC and the 3D antenna code TOPICA, we construct a complete linear system for the RF-driven plasma. The 3D finite element antenna code TOPICA requires an admittance Y for the plasma, where B = YE. In this work TORIC was modified to allow excitation of the (Eη, Eζ) electric field components at the plasma surface, corresponding to a single poloidal and toroidal mode number combination (m, n). This leads to the tensor response Y = [ Yηη Yηζ ; Yζη Yζζ ], where each of the four submatrices is of size Nm. It is shown that the admittance matrix is equivalent to a Green's function calculation for the fullwave system and, in addition, that the net work done in the calculation is on the order of twice a single fullwave calculation. Once the admittance calculation is done, the response of the plasma to an antenna driven at a given frequency can be calculated by running only the TOPICA code for a new antenna geometry. In loading tests, TOPICA has been able to reproduce the loading of the Alcator D antenna (S12 coefficient) accurately.
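The block structure of the admittance relation B = YE can be sketched numerically; the matrices below are random complex stand-ins, not TORIC/TOPICA output, and Nm is a toy value.

```python
import numpy as np

Nm = 4                                    # number of poloidal modes (toy value)
rng = np.random.default_rng(0)
# Four random complex submatrices standing in for Y_etaeta, Y_etazeta, etc.
Yee, Yez, Yze, Yzz = [rng.standard_normal((Nm, Nm)) + 1j * rng.standard_normal((Nm, Nm))
                      for _ in range(4)]

# Assemble the full 2Nm x 2Nm admittance from its four submatrices and
# apply it to a surface electric field to obtain the magnetic response B = Y E.
Y = np.block([[Yee, Yez], [Yze, Yzz]])
E = rng.standard_normal(2 * Nm) + 1j * rng.standard_normal(2 * Nm)
B = Y @ E
```

Once Y is tabulated, new antenna geometries only require re-solving the antenna side, which is the point of the coupling scheme described in the abstract.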
Unsteady Analysis of Inlet-Compressor Acoustic Interactions Using Coupled 3-D and 1-D CFD Codes
NASA Technical Reports Server (NTRS)
Suresh, A.; Cole, G. L.
2000-01-01
It is well known that the dynamic response of a mixed-compression supersonic inlet is very sensitive to the boundary condition imposed at the subsonic exit (engine face) of the inlet. In previous work, a 3-D computational fluid dynamics (CFD) inlet code (NPARC) was coupled at the engine face to a 3-D turbomachinery code (ADPAC) simulating an isolated rotor, and the coupled simulation was used to study the unsteady response of the inlet. The main problem with this approach is that the high-fidelity turbomachinery simulation becomes prohibitively expensive as more stages are included in the simulation. In this paper, an alternative approach is explored, wherein the inlet code is coupled to a lower-fidelity 1-D transient compressor code (DYNTECC) which simulates the whole compressor. The specific application chosen for this evaluation is the collapsing-bump experiment performed at the University of Cincinnati, wherein reflections of a large-amplitude acoustic pulse from a compressor were measured. The metrics for comparison are the pulse strength (time integral of the pulse amplitude) and wave form (shape). When the compressor is modeled by stage characteristics, the computed strength is about ten percent greater than that for the experiment, but the wave shapes are in poor agreement. An alternate approach that uses a fixed rise in duct total pressure and temperature (a so-called 'lossy' duct) to simulate a compressor gives good pulse shapes, but the strength is about 30 percent low.
Enhanced view random access ability for multiview video coding
NASA Astrophysics Data System (ADS)
Elmesloul Nasri, Seif Allah; Khelil, Khaled; Doghmane, Noureddine
2016-03-01
Apart from the efficient compression, reducing the complexity of the view random access is one of the most important requirements that should be considered in multiview video coding. In order to obtain an efficient compression, both temporal and inter-view correlations are exploited in the multiview video coding schemes, introducing higher complexity in the temporal and view random access. We propose an inter-view prediction structure that aims to lower the cost of randomly accessing any picture at any position and instant, with respect to the multiview reference model JMVM and other recent relevant works. The proposed scheme is mainly based on the use of two base views (I-views) in the structure with selected positions instead of a single reference view as in the standard structures. This will, therefore, provide a direct inter-view prediction for all the remaining views and will ensure a low-delay view random access ability while maintaining a very competitive bit-rate performance with a similar video quality measured in peak signal-to-noise ratio. In addition to a new evaluation method of the random access ability, the obtained results show a significant improvement in the view random accessibility with respect to other reported works.
Block-based embedded color image and video coding
NASA Astrophysics Data System (ADS)
Nagaraj, Nithin; Pearlman, William A.; Islam, Asad
2004-01-01
The Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparably to the best-known still grayscale image coders such as EZW, SPIHT, and JPEG2000. In this paper, we first propose Color-SPECK (CSPECK), a natural extension of SPECK to handle color still images in the YUV 4:2:0 format. Extensions to other YUV formats are also possible. PSNR results indicate that CSPECK is among the best-known color coders, while the perceptual quality of reconstruction is superior to that of SPIHT and JPEG2000. We then propose a moving-picture-based coding system called Motion-SPECK with CSPECK as the core algorithm in an intra-based setting. Specifically, we demonstrate two modes of operation of Motion-SPECK, namely the constant-rate mode, where every frame is coded at the same bit-rate, and the constant-distortion mode, where we ensure the same quality for each frame. Results on well-known CIF sequences indicate that Motion-SPECK performs comparably to Motion-JPEG2000, while the visual quality of the sequence is in general superior. Both CSPECK and Motion-SPECK automatically inherit all the desirable features of SPECK, such as embeddedness, low computational complexity, highly efficient performance, fast decoding and low dynamic memory requirements. The intended applications of Motion-SPECK would be high-end and emerging video applications such as high-quality digital video recording systems, Internet video, and medical imaging.
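Embeddedness, the key property CSPECK and Motion-SPECK inherit, can be illustrated with a minimal bitplane coder: any prefix of the bit stream decodes to a coarser approximation. This is a toy stand-in to show the property only; it does not reproduce SPECK's set-partitioning of significance information.

```python
def encode_bitplanes(coeffs, nplanes):
    """Emit non-negative integer coefficients bitplane by bitplane,
    most significant plane first, producing an embedded bit stream."""
    bits = []
    for p in range(nplanes - 1, -1, -1):
        for c in coeffs:
            bits.append((c >> p) & 1)
    return bits

def decode_bitplanes(bits, n, nplanes, planes_used):
    """Reconstruct from a prefix of the stream: the more planes decoded,
    the finer the approximation (the embeddedness property)."""
    out = [0] * n
    for k in range(planes_used):
        p = nplanes - 1 - k
        for i in range(n):
            out[i] |= bits[k * n + i] << p
    return out
```

Truncating the stream after two of three planes yields each coefficient rounded down to a multiple of 2, i.e. a lower-rate, lower-fidelity decode from the same bit stream.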
NASA Astrophysics Data System (ADS)
Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey
2016-04-01
Tsunamis are huge waves with long wave periods and wave lengths that can cause great devastation and loss of life when they strike a coast. Interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes, FLOW 3D and NAMI DANCE, that analyze tsunami propagation and inundation patterns are considered. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving the three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite difference computational method to solve the 2D depth-averaged linear and nonlinear forms of the shallow water equations (NSWE) in long wave problems, specifically tsunamis. In order to validate these two codes and analyze the differences between the 3D-NS and 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach. The experimental setup is a 1:400 scale model of Monai Valley, located on the west coast of Okushiri Island, Japan. The other benchmark problem was discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) annual meeting in Portland, USA. It is a field dataset recording the 2011 Japan tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons show that both codes are in fairly good agreement with each other and with the benchmark data. The differences between the 3D-NS and 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons. Acknowledgements: Partial support by Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT
NASA Technical Reports Server (NTRS)
Meyer, Harold D.
1999-01-01
This report provides a study of rotor and stator scattering using the SOURCE3D Rotor Wake/Stator Interaction Code. SOURCE3D is a quasi-three-dimensional computer program that uses three-dimensional acoustics and two-dimensional cascade load response theory to calculate rotor and stator modal reflection and transmission (scattering) coefficients. SOURCE3D is at the core of the TFaNS (Theoretical Fan Noise Design/Prediction System), developed for NASA, which provides complete fully coupled (inlet, rotor, stator, exit) noise solutions for turbofan engines. The reason for studying scattering is that we must first understand the behavior of the individual scattering coefficients provided by SOURCE3D, before eventually understanding the more complicated predictions from TFaNS. To study scattering, we have derived a large number of scattering curves for vane and blade rows. The curves are plots of output wave power divided by input wave power (in dB units) versus vane/blade ratio. Some of these plots are shown in this report. All of the plots are provided in a separate volume. To assist in understanding the plots, formulas have been derived for special vane/blade ratios for which wavefronts are either parallel or normal to rotor or stator chords. From the plots, we have found that, for the most part, there was strong transmission and weak reflection over most of the vane/blade ratio range for the stator. For the rotor, there was little transmission loss.
3D Polarized Radiative Transfer for Solar System Applications Using the public-domain HYPERION Code
NASA Astrophysics Data System (ADS)
Wolff, M. J.; Robitaille, T.; Whitney, B. A.
2012-12-01
We present a public-domain radiative transfer tool that allows researchers to examine a wide range of interesting solar system applications. Hyperion is a new three-dimensional continuum Monte Carlo radiative transfer code that is designed to be as general as possible, allowing radiative transfer to be computed through a variety of three-dimensional grids (Robitaille, 2011, Astronomy & Astrophysics 536, A79). The main part of the code is problem-independent and only requires the user to define the three-dimensional density structure, the opacity, and the illumination properties (as well as a few parameters that control execution and output of the code). Hyperion is written in Fortran 90 and parallelized using the MPI-2 standard. It is bundled with Python libraries that enable very flexible pre- and post-processing options (arbitrary shapes, multiple aerosol components, etc.). These routines are very amenable to user extension. The package is currently distributed at www.hyperion-rt.org. Our presentation will feature (1) a brief overview of the code, including a description of the solar-system-specific modifications we have made beyond the capabilities of the original release; (2) several solar system applications (e.g., the Deep Impact plume, the Martian atmosphere); and (3) a discussion of the availability and distribution of code components via www.hyperion-rt.org.
Numerical simulation of jet aerodynamics using the three-dimensional Navier-Stokes code PAB3D
NASA Technical Reports Server (NTRS)
Pao, S. Paul; Abdol-Hamid, Khaled S.
1996-01-01
This report presents a unified method for subsonic and supersonic jet analysis using the three-dimensional Navier-Stokes code PAB3D. The Navier-Stokes code was used to obtain solutions for axisymmetric jets with on-design operating conditions at Mach numbers ranging from 0.6 to 3.0, supersonic jets containing weak shocks and Mach disks, and supersonic jets with nonaxisymmetric nozzle exit geometries. This report discusses computational methods, code implementation, computed results, and comparisons with available experimental data. Very good agreement is shown between the numerical solutions and available experimental data over a wide range of operating conditions. The Navier-Stokes method using the standard Jones-Launder two-equation kappa-epsilon turbulence model can accurately predict jet flow, and such predictions are made without any modification to the published constants for the turbulence model.
Complexity control for high-efficiency video coding by coding layers complexity allocations
NASA Astrophysics Data System (ADS)
Fang, Jiunn-Tsair; Liang, Kai-Wen; Chen, Zong-Yi; Hsieh, Wei; Chang, Pao-Chi
2016-03-01
The latest video compression standard, high-efficiency video coding (HEVC), provides quad-tree structures of coding units (CUs) and four coding tree depths to facilitate coding efficiency. The HEVC encoder considerably increases the computational complexity to levels inappropriate for video applications of power-constrained devices. This work, therefore, proposes a complexity control method for the low-delay P-frame configuration of the HEVC encoder. The complexity control mechanism spans the group-of-pictures layer, frame layer, and CU layer, and each coding layer provides a distinct method for complexity allocation. Furthermore, the steps in the prediction unit encoding procedure are reordered. By allocating the complexity to each coding layer of HEVC, the proposed method can simultaneously satisfy the entire complexity constraint (ECC) for entire-sequence encoding and the instant complexity constraint (ICC) for each frame during real-time encoding. Experimental results showed that as the target complexity under both the ECC and ICC was reduced to 80% and 60%, respectively, the decrease in the average Bjøntegaard delta peak signal-to-noise ratio was ~0.1 dB with an increase of 1.9% in the Bjøntegaard delta rate, and the complexity control error was ~4.3% under the ECC and 4.3% under the ICC.
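The layered budgeting idea, granting each frame a share of the remaining complexity so that savings from easy frames roll over to later ones, can be sketched as follows. This is an illustrative allocator under assumed rules, not the paper's actual allocation method.

```python
def allocate_budgets(total_budget, actual_costs):
    """For each frame, grant an even share of the remaining budget (ECC-style),
    then subtract what encoding actually consumed, capped at the share
    (an ICC-like per-frame limit), so unused complexity rolls over."""
    budgets, spent = [], 0.0
    n = len(actual_costs)
    for k, cost in enumerate(actual_costs):
        share = (total_budget - spent) / (n - k)   # even share of what is left
        budgets.append(share)
        spent += min(cost, share)                  # frames cannot exceed their share
    return budgets
```

With a total budget of 100 and four cheap frames costing 10 each, the granted shares grow as savings accumulate: 25, 30, 40, 70.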
A 3D-PNS computer code for the calculation of supersonic combusting flows
NASA Technical Reports Server (NTRS)
Chitsomboon, Tawit; Northam, G. Burton
1988-01-01
A computer code has been developed based on the three-dimensional parabolized Navier-Stokes (PNS) equations which govern the supersonic combusting flow of the hydrogen-air system. The finite difference algorithm employed was a hybrid of the Schiff-Steger algorithm and the Vigneron, et al., algorithm which is fully implicit and fully coupled. The combustion of hydrogen and air was modeled by the finite-rate two-step combustion model of Rogers-Chinitz. A new dependent variable vector was introduced to simplify the numerical algorithm. Robustness of the algorithm was considerably enhanced by introducing an adjustable parameter. The computer code was used to solve a premixed shock-induced combustion problem and the results were compared with those of a full Navier-Stokes code. Reasonably good agreement was obtained at a fraction of the cost of the full Navier-Stokes procedure.
3-D kinetics simulations of the NRU reactor using the DONJON code
Leung, T. C.; Atfield, M. D.; Koclas, J.
2006-07-01
The NRU reactor is highly heterogeneous, heavy-water cooled and moderated, with online refuelling capability. It is licensed to operate at a maximum power of 135 MW, with a peak thermal flux of approximately 4.0 × 10^18 n·m^-2·s^-1. In support of the safe operation of NRU, three-dimensional kinetics calculations for reactor transients have been performed using the DONJON code. The code was initially designed to perform space-time kinetics calculations for CANDU® power reactors. This paper describes how the DONJON code can be applied to perform neutronic simulations for the analysis of reactor transients in NRU, and presents calculation results for some transients. (authors)
Code System for 2-Group, 3D Neutronic Kinetics Calculations Coupled to Core Thermal Hydraulics.
2000-05-12
Version 00. QUARK is a combined computer program comprising a revised version of the QUANDRY three-dimensional, two-group neutron kinetics code and an upgraded version of the COBRA transient core analysis code (COBRA-EN). Starting from either a critical steady state (k-effective or critical dilute boron problem) or a subcritical steady state (fixed source problem) in a PWR plant, the code allows one to simulate the neutronic and thermal-hydraulic core transient response to reactivity accidents initiated both inside the vessel (such as a control rod ejection) and outside the vessel (such as a sudden change of the boron concentration in the coolant). QUARK output can be used as input to PSR-470/NORMA-FP to perform a subchannel analysis from converged coarse-mesh nodal solutions.
Fast coding unit selection method for high efficiency video coding intra prediction
NASA Astrophysics Data System (ADS)
Xiong, Jian
2013-07-01
The high-efficiency video coding (HEVC) standard under development can achieve higher compression performance than previous standards, such as MPEG-4, H.263, and H.264/AVC. To improve coding performance, a quad-tree coding structure and a robust rate-distortion (RD) optimization technique are used to select an optimum coding mode. Since the RD costs of all possible coding modes are computed to decide an optimum mode, high computational complexity is incurred in the encoder. A fast learning-based coding unit (CU) size selection method is presented for HEVC intra prediction. The proposed algorithm is based on theoretical analysis showing that the non-normalized histogram of oriented gradients (n-HOG) can help select the CU size. A codebook is constructed offline by clustering the n-HOGs of training sequences for each CU size. The optimum size is determined by comparing the n-HOG of the current CU with the learned codebooks. Experimental results show that the CU size selection scheme speeds up intra coding significantly with negligible loss of peak signal-to-noise ratio.
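The codebook-matching step can be sketched as follows, assuming a simple gradient-binning scheme and squared-distance matching against per-size codebooks; both the bin geometry and the distance measure are assumptions, not the paper's exact design.

```python
import math

def n_hog(block, nbins=8):
    """Non-normalized histogram of oriented gradients for a 2D block:
    central-difference gradients, magnitude-weighted orientation bins."""
    h = [0.0] * nbins
    for y in range(1, len(block) - 1):
        for x in range(1, len(block[0]) - 1):
            gx = block[y][x + 1] - block[y][x - 1]
            gy = block[y + 1][x] - block[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi        # orientation in [0, pi)
            h[min(int(ang / math.pi * nbins), nbins - 1)] += mag
    return h

def select_cu_size(block, codebooks):
    """Pick the CU size whose nearest learned codeword best matches the
    block's n-HOG. codebooks: {size: [codeword, ...]}, trained offline."""
    h = n_hog(block)
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(codebooks,
               key=lambda s: min(dist(h, w) for w in codebooks[s]))
```

A flat block has a zero histogram, so it matches whichever size was trained on smooth content, skipping the exhaustive RD search over all depths.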
Dependent video coding using a tree representation of pixel dependencies
NASA Astrophysics Data System (ADS)
Amati, Luca; Valenzise, Giuseppe; Ortega, Antonio; Tubaro, Stefano
2011-09-01
Motion-compensated prediction induces a chain of coding dependencies between pixels in video. In principle, an optimal selection of encoding parameters (motion vectors, quantization parameters, coding modes) should take into account the whole temporal horizon of a GOP. However, in practical coding schemes, these choices are made on a frame-by-frame basis, with a possible loss of performance. In this paper we describe a tree-based model for pixelwise coding dependencies: each pixel in a frame is the child of a pixel in a previous reference frame. We show that some tree structures are more favorable than others from a rate-distortion perspective, e.g., because they entail a large set of descendant pixels that are well predicted from a common ancestor. In those cases, a higher quality has to be assigned to pixels at the top of such trees. We promote the creation of these structures by adding a special discount term to the conventional Lagrangian cost adopted at the encoder. The proposed model can be implemented through a double-pass encoding procedure. Specifically, we devise heuristic cost functions to drive the selection of quantization parameters and of motion vectors, which can be readily implemented in a state-of-the-art H.264/AVC encoder. Our experiments demonstrate that coding efficiency is improved for video sequences with low motion, while there are no apparent gains for more complex motion. We argue that this is due both to the presence of complex encoder features not captured by the model and to the complexity of the source to be encoded.
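The discounted Lagrangian cost can be sketched as below; the discount weight gamma and the child-to-parent dictionary encoding of the dependency tree are illustrative assumptions, not values or structures from the paper.

```python
def count_descendants(tree, node):
    """tree maps child -> parent; count all pixels below `node` in the
    dependency tree (pixels predicted, directly or indirectly, from it)."""
    children = [c for c, p in tree.items() if p == node]
    return len(children) + sum(count_descendants(tree, c) for c in children)

def discounted_cost(distortion, rate, lam, descendants, gamma=0.1):
    """Conventional Lagrangian cost D + lambda*R, minus a discount that
    grows with the number of descendant pixels, so well-referenced pixels
    are encouraged to receive higher quality."""
    return distortion + lam * rate - gamma * descendants
```

A pixel with many descendants gets a lower effective cost for spending rate, which is exactly the bias toward high quality at the top of favorable trees that the abstract describes.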
3D particle simulation of beams using the WARP code: Transport around bends
Friedman, A.; Grote, D.P.; Callahan, D.A.; Langdon, A.B. ); Haber, I. )
1990-11-30
WARP is a discrete-particle simulation program which was developed for studies of space charge dominated ion beams. It combines features of an accelerator code and a particle-in-cell plasma simulation. The code architecture, and techniques employed to enhance efficiency, are briefly described. Current applications are reviewed. In this paper we emphasize the physics of transport of three-dimensional beams around bends. We present a simple bent-beam PIC algorithm. Using this model, we have followed a long, thin beam around a bend in a simple racetrack system (assuming straight-pipe self-fields). Results on beam dynamics are presented; no transverse emittance growth (at mid-pulse) is observed. 11 refs., 5 figs.
Modeling Star-Forming Regions using a 3D Molecular Transport Code
NASA Astrophysics Data System (ADS)
Loughnane, R. M.; Redman, M. P.; Keto, E. R.
2012-07-01
This paper presents the 3-dimensional non-LTE radiative transfer code MOLLIE (MOLecular LIne Explorer) for solving molecular and atomic excitation and radiation transfer in a molecular gas and predicting emergent spectra. The code implementation makes use of the Accelerated Lambda Iteration (ALI) method of Rybicki & Hummer (1991) to solve the radiative transfer equation along rays passing through a spherical model cloud. When convergence between the level populations, the radiation field, and the point separation has been obtained, the grid is ray-traced to produce images that can be readily compared to observations. The optimization technique, Fast Simulated Annealing (FSA), adopted by MOLLIE to increase the probability of arriving at a satisfactory output in a timely fashion, is briefly considered.
Applications of the 3-D Deterministic Transport Code Attila for Core Safety Analysis
D. S. Lucas
2004-10-01
An LDRD (Laboratory Directed Research and Development) project is ongoing at the Idaho National Engineering and Environmental Laboratory (INEEL) for applying the three-dimensional multi-group deterministic neutron transport code (Attila®) to criticality, flux and depletion calculations of the Advanced Test Reactor (ATR). This paper discusses the model development, the capabilities of Attila, the generation of the cross-section libraries, and comparisons to an ATR MCNP model and future work.
Enhancements, Parallelization and Future Directions of the V3FIT 3-D Equilibrium Reconstruction Code
NASA Astrophysics Data System (ADS)
Cianciosa, M. R.; Hanson, J. D.; Maurer, D. A.; Hartwell, G. J.; Archmiller, M. C.; Ma, X.; Herfindal, J.
2014-10-01
Three-dimensional equilibrium reconstruction is spreading beyond its original application to stellarators. Three-dimensional effects in nominally axisymmetric systems, including quasi-helical states in reversed field pinches and error fields in tokamaks, are becoming increasingly important. V3FIT is a fully three-dimensional equilibrium reconstruction code in widespread use throughout the fusion community. The code has recently undergone extensive revision to prepare for the next generation of equilibrium reconstruction problems. The most notable changes are the abstraction of the equilibrium model, the propagation of experimental errors to the reconstructed results, support for multicolor soft x-ray emissivity cameras, and recent efforts to add parallelization for efficient computation on multi-processor systems. Work presented will contain discussions on these new capabilities. We will compare probability distributions of reconstructed parameters with results from whole-shot reconstructions. We will show benchmarking and profiling results of initial performance improvements through the addition of OpenMP and MPI support. We will discuss future directions of the V3FIT code, including steps taken to support the W-7X stellarator. Work supported by U.S. Department of Energy Grant No. DEFG-0203-ER-54692B.
Drug-laden 3D biodegradable label using QR code for anti-counterfeiting of drugs.
Fei, Jie; Liu, Ran
2016-06-01
Wiping out counterfeit drugs is a great task for public health care around the world. The spread of these drugs makes treatment potentially harmful or even lethal. In this paper, a biodegradable drug-laden QR code label for anti-counterfeiting of drugs is proposed that can provide non-fluorescent recognition and high capacity. It is fabricated by laser cutting, which produces different roughness across the surface and hence different gray levels on the translucent material forming the QR code pattern, and by a micro-molding process to obtain the drug-laden biodegradable label. We screened biomaterials meeting the relevant conditions and the further requirements of the package. The drug-laden microlabel is placed on the surface of the troche or the bottom of the capsule and can be read by a simple smartphone QR code reader application. Labeling the pill directly and decoding the information successfully make the operation simpler and more convenient, with non-fluorescent recognition and high capacity, in contrast to traditional methods. PMID:27040262
Evaluation of 3D Inverse Code Using Rotor 67 as Test Case
NASA Technical Reports Server (NTRS)
Dang, T.
1998-01-01
A design modification of Rotor 67 is carried out with a full 3D inverse method. The blade camber surface is modified to produce a prescribed pressure loading distribution, with the blade tangential thickness distribution and the blade stacking line at midchord kept the same as the original Rotor 67 design. Because of the inviscid-flow assumption used in the current version of the method, Rotor 67 geometry is modified for use at a design point different from the original design value. A parametric study with the prescribed pressure loading distribution yields the following results. In the subsonic section, smooth pressure loading shapes generally produce blades with well-behaved blade surface pressure distributions. In the supersonic section, the study shows that the strength and position of the passage shock correlate with the characteristics of the blade pressure loading shape. In general, "smooth" prescribed blade pressure loading distributions generate blade designs with reverse cambers which have the effect of weakening the passage shock.
Fast motion prediction algorithm for multiview video coding
NASA Astrophysics Data System (ADS)
Abdelazim, Abdelrahman; Zhang, Guang Y.; Mein, Stephen J.; Varley, Martin R.; Ait-Boudaoud, Djamel
2011-06-01
Multiview Video Coding (MVC) is an extension to the H.264/MPEG-4 AVC video compression standard developed jointly by MPEG/VCEG to enable efficient encoding of sequences captured simultaneously from multiple cameras using a single video stream. The design is therefore aimed at exploiting inter-view dependencies in addition to reducing temporal redundancies. However, this further increases the overall encoding complexity. In this paper, the high correlation between a macroblock and its enclosed partitions is utilised to estimate motion homogeneity, and based on the result inter-view prediction is selectively enabled or disabled. Moreover, if MVC motion prediction is divided into three layers, the first being the full- and sub-pixel motion search, the second being the mode selection process, and the third being the repetition of the first and second for inter-view prediction, the proposed algorithm significantly reduces the complexity in all three layers. To assess the proposed algorithm, a comprehensive set of experiments was conducted. The results show that the proposed algorithm significantly reduces the motion estimation time whilst maintaining similar rate-distortion performance, when compared to both the H.264/MVC reference software and recently reported work.
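A toy version of the homogeneity test that gates inter-view prediction might look like the following; the deviation threshold and the exact criterion are assumptions, not the paper's algorithm.

```python
def is_motion_homogeneous(partition_mvs, thresh=1):
    """A macroblock is treated as 'homogeneous' when the motion vectors of
    its enclosed partitions deviate little from their mean (threshold in
    quarter-pel units is an assumed value)."""
    n = len(partition_mvs)
    mx = sum(v[0] for v in partition_mvs) / n
    my = sum(v[1] for v in partition_mvs) / n
    return all(abs(v[0] - mx) <= thresh and abs(v[1] - my) <= thresh
               for v in partition_mvs)

def enable_interview_prediction(partition_mvs):
    # Homogeneous motion suggests temporal prediction already works well,
    # so the costly inter-view search can be skipped for this macroblock.
    return not is_motion_homogeneous(partition_mvs)
```

Skipping the inter-view layer for homogeneous macroblocks is where the motion estimation time is saved; heterogeneous motion keeps the full search.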
Dong, Xiaoqing; Fang, Yiliang; Wang, Kejing; Zhu, Lijuan; Wang, Ke; Huang, Tao
2016-01-01
With the development of new technologies in transcriptomics and epigenetics, RNAs have been identified to play more and more important roles in life processes. Consequently, various methods have been proposed to assess the biological functions of RNAs and thus classify them functionally, among which comparative study of RNA structures is perhaps the most important one. To measure the structural similarity of RNAs and classify them, we propose a novel three-dimensional (3D) graphical representation of RNA secondary structure, in which an RNA secondary structure is first transformed into a characteristic sequence based on the chemical properties of nucleic acids; a dynamic 3D graph is then constructed for the characteristic sequence; and lastly a numerical characterization of the 3D graph is used to represent the RNA secondary structure. We tested our algorithm on three datasets: (1) Dataset I, consisting of nine RNA secondary structures of viruses; (2) Dataset II, consisting of complex RNA secondary structures including pseudoknots; and (3) Dataset III, consisting of 18 non-coding RNA families. We also compare our method with nine other existing methods using Datasets II and III. The results demonstrate that our method is better than the other methods in similarity measurement and classification of RNA secondary structures. PMID:27213271
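The three-step pipeline (characteristic sequence, 3D curve, numerical descriptor) can be sketched under simplified assumptions: a binary paired/unpaired mapping from dot-bracket notation and a centre-of-curve descriptor. Neither reproduces the paper's exact chemical-property mapping or characterization.

```python
def characteristic_sequence(dotbracket):
    """Map an RNA secondary structure in dot-bracket notation to a
    characteristic sequence (P = paired base, U = unpaired base)."""
    return ['P' if c in '()' else 'U' for c in dotbracket]

def curve_3d(chars):
    """Walk a 3D curve: x advances on paired bases, y on unpaired bases,
    z advances on every step (recording sequence position)."""
    step = {'P': (1, 0), 'U': (0, 1)}
    pts, pos = [(0, 0, 0)], (0, 0, 0)
    for c in chars:
        dx, dy = step[c]
        pos = (pos[0] + dx, pos[1] + dy, pos[2] + 1)
        pts.append(pos)
    return pts

def descriptor(pts):
    """A simple numerical characterization: the geometric centre of the
    curve. Structures are then compared by distance between descriptors."""
    n = len(pts)
    return tuple(sum(p[k] for p in pts) / n for k in range(3))
```

Identical structures map to identical descriptors, while differing pairing patterns shift the centre of the curve, giving a crude similarity measure in the spirit of the graphical-representation approach.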
Validation of a Node-Centered Wall Function Model for the Unstructured Flow Code FUN3D
NASA Technical Reports Server (NTRS)
Carlson, Jan-Renee; Vatsa, Veer N.; White, Jeffery
2015-01-01
In this paper, the implementation of two wall function models in the Reynolds-averaged Navier-Stokes (RANS) computational fluid dynamics (CFD) code FUN3D is described. FUN3D is a node-centered method for solving the three-dimensional Navier-Stokes equations on unstructured computational grids. The first wall function model, based on the work of Knopp et al., is used in conjunction with the one-equation turbulence model of Spalart-Allmaras. The second wall function model, also based on the work of Knopp, is used in conjunction with the two-equation k-ω turbulence model of Menter. The wall function models compute the wall momentum and energy flux, which are used to weakly enforce the wall velocity and pressure flux boundary conditions in the mean flow momentum and energy equations. These wall conditions are implemented in an implicit form where the contribution of the wall function model to the Jacobian is also included. The boundary conditions of the turbulence transport equations are enforced explicitly (strongly) on all solid boundaries. The use of the wall function models is demonstrated on four test cases: a flat plate boundary layer, a subsonic diffuser, a 2D airfoil, and a 3D semi-span wing. Where possible, different near-wall viscous spacing tactics are examined. Iterative residual convergence was obtained in most cases. Solution results are compared with theoretical and experimental data for several variations of grid spacing. In general, very good agreement with the data was achieved.
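As a rough illustration of what a wall function supplies, the sketch below evaluates a two-layer law of the wall (viscous sublayer plus log layer); this is a textbook simplification, not the Knopp formulation implemented in FUN3D:

```python
import math

KAPPA, B = 0.41, 5.0  # standard log-law constants

def wall_velocity(y, u_tau, nu):
    """Tangential velocity implied by a two-layer law of the wall,
    given wall distance y, friction velocity u_tau and viscosity nu."""
    y_plus = y * u_tau / nu
    if y_plus < 11.0:                    # viscous sublayer: u+ = y+
        u_plus = y_plus
    else:                                # log layer: u+ = ln(y+)/kappa + B
        u_plus = math.log(y_plus) / KAPPA + B
    return u_plus * u_tau
```

In a wall-function boundary condition, a relation of this kind is inverted to obtain the wall momentum flux from the velocity at the first grid point off the wall.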
CFD code calibration and inlet-fairing effects on a 3D hypersonic powered-simulation model
NASA Technical Reports Server (NTRS)
Huebner, Lawrence D.; Tatum, Kenneth E.
1993-01-01
A three-dimensional (3D) computational study has been performed addressing issues related to the wind tunnel testing of a hypersonic powered-simulation model. The study consisted of three objectives. The first objective was to calibrate a state-of-the-art computational fluid dynamics (CFD) code in its ability to predict hypersonic powered-simulation flows by comparing CFD solutions with experimental surface pressure data. Aftbody lower surface pressures were well predicted, but lower surface wing pressures were less accurately predicted. The second objective was to determine the 3D effects on the aftbody created by fairing over the inlet; this was accomplished by comparing the CFD solutions of two closed-inlet powered configurations with a flowing-inlet powered configuration. Although results at four freestream Mach numbers indicate that the exhaust plume tends to isolate the aftbody surface from most forebody flowfield differences, a smooth inlet fairing provides the least aftbody force and moment variation compared to a flowing inlet. The final objective was to predict and understand the 3D characteristics of exhaust plume development at selected points on a representative flight path. Results showed a dramatic effect of plume expansion onto the wings as the freestream Mach number and corresponding nozzle pressure ratio are increased.
Predictions of bubbly flows in vertical pipes using two-fluid models in CFDS-FLOW3D code
Banas, A.O.; Carver, M.B.; Unrau, D.
1995-09-01
This paper reports the results of a preliminary study exploring the performance of two sets of two-fluid closure relationships applied to the simulation of turbulent air-water bubbly upflows through vertical pipes. Predictions obtained with the default CFDS-FLOW3D model for dispersed flows were compared with the predictions of a new model (based on the work of Lee), and with the experimental data of Liu. The new model, implemented in the CFDS-FLOW3D code, included additional source terms in the "standard" κ-ε transport equations for the liquid phase, as well as modified model coefficients and wall functions. All simulations were carried out in a 2-D axisymmetric format, collapsing the general multifluid framework of CFDS-FLOW3D to the two-fluid (air-water) case. The newly implemented model consistently improved predictions of radial-velocity profiles of both phases, but failed to accurately reproduce the experimental phase-distribution data. This shortcoming was traced to the neglect of anisotropic effects in the modelling of liquid-phase turbulence. In this sense, the present investigation should be considered as the first step toward the ultimate goal of developing a theoretically sound and universal CFD-type two-fluid model for bubbly flows in channels.
Embedded morphological dilation coding for 2D and 3D images
NASA Astrophysics Data System (ADS)
Lazzaroni, Fabio; Signoroni, Alberto; Leonardi, Riccardo
2002-01-01
Current wavelet-based image coders obtain high performance thanks to the identification and exploitation of the statistical properties of natural images in the transformed domain. Zerotree-based algorithms, such as Embedded Zerotree Wavelets (EZW) and Set Partitioning In Hierarchical Trees (SPIHT), offer high rate-distortion (RD) coding performance and low computational complexity by exploiting statistical dependencies among insignificant coefficients on hierarchical subband structures. Another possible approach tries to predict the clusters of significant coefficients by means of some form of morphological dilation. An example of a morphology-based coder is the Significance-Linked Connected Component Analysis (SLCCA), which has shown performance comparable to that of the zerotree-based coders but is not embedded. A new embedded bit-plane coder is proposed here, based on morphological dilation of significant coefficients and context-based arithmetic coding. The algorithm is able to exploit both intra-band and inter-band statistical dependencies among significant wavelet coefficients. Moreover, the same approach is used for both two- and three-dimensional wavelet-based image compression. Finally, the algorithms are tested on some 2D images and on a medical volume, comparing the RD results to those obtained with state-of-the-art wavelet-based coders.
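The dilation-driven significance scan can be sketched on a single bit-plane as follows; the 4-neighbour structuring element and breadth-first order are illustrative choices, not the paper's exact procedure:

```python
# Toy sketch of morphology-driven significance scanning on one bit-plane.
# From each already-significant coefficient, its 4-neighbours are tested
# first (dilation), exploiting intra-band clustering of significance.

def dilation_scan(plane, seeds):
    """Return coefficients found significant, in dilation-driven order."""
    rows, cols = len(plane), len(plane[0])
    found, frontier, visited = [], list(seeds), set(seeds)
    while frontier:
        r, c = frontier.pop(0)
        if plane[r][c]:
            found.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # dilate
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited:
                    visited.add((nr, nc))
                    frontier.append((nr, nc))
    return found
```

Because clustering makes most new significant coefficients reachable by dilation, their positions cost few bits under context-based arithmetic coding.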
An Adaptive Motion Estimation Scheme for Video Coding
Gao, Yuan; Jia, Kebin
2014-01-01
The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block-matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to the statistical results of motion vector (MV) distribution information. Then, an MV distribution prediction method is designed, covering prediction of both the size and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is achieved with the new search patterns. Experimental results show that more than 50% of total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can reduce ME time by up to 20.86% while the rate-distortion performance is not compromised. PMID:24672313
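For reference, the baseline that UMHexagonS and the proposed scheme accelerate is plain block matching; a minimal exhaustive-search sketch with an illustrative block size and search window:

```python
# Minimal block-matching sketch: SAD cost minimized over a small window.
# UMHexagonS replaces this exhaustive scan with hybrid hexagon/cross
# patterns and MV predictors; block size and radius here are illustrative.

def sad(cur, ref, bx, by, dx, dy, n=4):
    """Sum of absolute differences for an n x n block at (bx, by)."""
    return sum(abs(cur[by + j][bx + i] - ref[by + dy + j][bx + dx + i])
               for j in range(n) for i in range(n))

def full_search(cur, ref, bx, by, radius=2, n=4):
    best = (0, 0)
    best_cost = sad(cur, ref, bx, by, 0, 0, n)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            c = sad(cur, ref, bx, by, dx, dy, n)
            if c < best_cost:
                best_cost, best = c, (dx, dy)
    return best, best_cost
```

Fast ME algorithms keep the same SAD cost function but visit far fewer (dx, dy) candidates by using predictors and shaped search patterns.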
Interpretation of 3D void measurements with Tripoli4.6/JEFF3.1.1 Monte Carlo code
Blaise, P.; Colomba, A.
2012-07-01
The present work details the first analysis of the 3D void phase conducted during the EPICURE/UM17x17/7% mixed UOX/MOX configuration. This configuration is composed of a homogeneous central 17x17 MOX-7% assembly, surrounded by portions of 17x17 UO2 assemblies with guide tubes. The void bubble is modelled by a small waterproof 5x5-fuel-pin parallelepiped box of 11 cm height, placed in the centre of the MOX assembly. This bubble, initially placed at the core mid-plane, is then moved to different axial positions to study the evolution of the axial perturbation in the core. Then, to simulate the growth of this bubble in order to understand the effects of increased void fraction along the fuel pin, 3 and 5 bubbles have been stacked axially, starting from the core mid-plane. The C/E comparisons obtained with the Monte Carlo code Tripoli4 for both radial and axial fission rate distributions, and in particular for the reproduction of the very important flux gradients at the void/water interfaces as the bubble is displaced along the z-axis, are very satisfactory. This demonstrates both the capability of the code and its library to reproduce this kind of situation, and the very good quality of the experimental results, confirming the UM-17x17 as an excellent experimental benchmark for 3D code validation. This work has been performed within the frame of the V&V program for APOLLO3, the future deterministic code of CEA starting in 2012, and its V&V benchmarking database. (authors)
Code and Solution Verification of 3D Numerical Modeling of Flow in the Gust Erosion Chamber
NASA Astrophysics Data System (ADS)
Yuen, A.; Bombardelli, F. A.
2014-12-01
Erosion microcosms are devices commonly used to investigate the erosion and transport characteristics of sediments at the bed of rivers, lakes, or estuaries. In order to understand the results these devices provide, the bed shear stress and flow field need to be accurately described. In this research, the UMCES Gust Erosion Microcosm System (U-GEMS) is numerically modeled using the finite volume method. The primary aims are to simulate the bed shear stress distribution at the surface of the sediment core/bottom of the microcosm, and to validate that the U-GEMS produces uniform bed shear stress at the bottom of the microcosm. The mathematical model equations are solved on a non-uniform Cartesian grid. Multiple numerical runs were performed with different input conditions and configurations. Prior to developing the U-GEMS model, the General Moving Objects (GMO) model and different momentum algorithms in the code were verified. Code verification of these solvers was done by simulating the flow inside a top-wall-driven square cavity on different mesh sizes to obtain the order of convergence. The GMO model was used to simulate the moving top wall in the driven square cavity as well as the rotating disk in the U-GEMS. Components simulated with the GMO model were rigid bodies that could have any type of motion. In addition, cross-verification was conducted: results were compared with the numerical results of Ghia et al. (1982), and good agreement was found. Next, CFD results were validated by simulating the flow within the conventional microcosm system without suction and injection; good agreement was found with the experimental results of Khalili et al. (2008). After the ability of the CFD solver was established through the above code verification steps, the model was utilized to simulate the U-GEMS. The solution was verified via a classic mesh convergence study on four consecutive mesh sizes; in addition, the Grid Convergence Index (GCI) was calculated and based on
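The mesh-convergence metrics mentioned above follow Roache's standard procedure; a small sketch with the conventional formulas (the safety factor 1.25 is the usual recommendation, assumed here):

```python
import math

def observed_order(f1, f2, f3, r):
    """Observed order of convergence from three grid levels
    (f1 finest, f3 coarsest), with constant refinement ratio r."""
    return math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)

def gci_fine(f1, f2, r, p, Fs=1.25):
    """Grid Convergence Index on the fine grid, safety factor Fs."""
    return Fs * abs((f2 - f1) / f1) / (r ** p - 1.0)
```

For a solution behaving as f(h) = f_exact + C h^p, the observed order recovers p and the GCI bounds the fine-grid discretization error band.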
Vdovin V.L.
2005-08-15
In this report we describe the theory and the 3D full-wave code for wave excitation, propagation and absorption in three-dimensional (3D) stellarator equilibrium high-beta plasma in the ion cyclotron range of frequencies (ICRF). This theory forms the basis for the creation of a 3D code, urgently needed for the development of ICRF heating scenarios for the operating LHD, the W7-X under construction, NCSX and the projected CSX3 stellarators, as well as for re-evaluation of ICRF scenarios in operating tokamaks and in ITER. The theory solves the 3D Maxwell-Vlasov antenna-plasma-conducting-shell boundary value problem in the non-orthogonal flux coordinates (Ψ, θ, φ), where Ψ is the magnetic flux function and θ and φ are the poloidal and toroidal angles, respectively. All basic physics, like wave refraction, reflection and diffraction, is self-consistently included, along with the fundamental ion and ion minority cyclotron resonances, the two-ion hybrid resonance, electron Landau and TTMP absorption. Antenna reactive impedance and loading resistance are also calculated, as urgently needed for antenna-generator matching. This is accomplished in a real confining magnetic field varying in the plasma major-radius direction as well as in the toroidal and poloidal directions, through use of the wave-induced currents of the hot dense plasma with account taken of finite-Larmor-radius effects. We expand the solution in Fourier series over the toroidal (φ) and poloidal (θ) angles and solve the resulting ordinary differential equations in a radius-like Ψ-coordinate by a finite difference method. The constructed discretization scheme is divergence-free, thus retaining the basic properties of the original equations. The Fourier expansion over the angle coordinates makes it possible to correctly construct the "parallel" wave number k∥, and thereby to correctly describe ICRF wave absorption by a hot plasma. The toroidal harmonics are tightly coupled with each
DISCO: A 3D Moving-mesh Magnetohydrodynamics Code Designed for the Study of Astrophysical Disks
NASA Astrophysics Data System (ADS)
Duffell, Paul C.
2016-09-01
This work presents the publicly available moving-mesh magnetohydrodynamics (MHD) code DISCO. DISCO is efficient and accurate at evolving orbital fluid motion in two and three dimensions, especially at high Mach numbers. DISCO employs a moving-mesh approach utilizing a dynamic cylindrical mesh that can shear azimuthally to follow the orbital motion of the gas. The moving mesh removes diffusive advection errors and allows for longer time-steps than a static grid. MHD is implemented in DISCO using an HLLD Riemann solver and a novel constrained transport (CT) scheme that is compatible with the mesh motion. DISCO is tested against a wide variety of problems, which are designed to test its stability, accuracy, and scalability. In addition, several MHD tests are performed which demonstrate the accuracy and stability of the new CT approach, including two tests of the magneto-rotational instability, one testing the linear growth rate and the other following the instability into the fully turbulent regime.
Extension of a three-dimensional viscous wing flow analysis user's manual: VISTA 3-D code
NASA Technical Reports Server (NTRS)
Weinberg, Bernard C.; Chen, Shyi-Yaung; Thoren, Stephen J.; Shamroth, Stephen J.
1990-01-01
Three-dimensional unsteady viscous effects can significantly influence the performance of fixed- and rotary-wing aircraft. These effects are important both in flows about helicopter rotors in forward flight and in flows about three-dimensional (swept and tapered) supercritical wings. A computational procedure for calculating such flow fields was developed. The procedure is based upon an alternating direction technique employing the Linearized Block Implicit method for solving three-dimensional viscous flow problems. In order to demonstrate the viability of this method, two- and three-dimensional problems are computed. These include the flow over a two-dimensional NACA 0012 airfoil under steady and oscillating conditions, and the steady, skewed, three-dimensional flow on a flat plate. Although actual three-dimensional flows over wings were not obtained, the groundwork was laid for considering such flows. In this report a description of the computer code is given.
Parametric Analysis of a Turbine Trip Event in a BWR Using a 3D Nodal Code
Gorzel, A.
2006-07-01
Two essential thermal-hydraulic safety criteria concerning the reactor core are that, even during operational transients, there is no fuel melting and impermissible cladding temperatures are avoided. A common concept for boiling water reactors is to establish a minimum critical power ratio (MCPR) for steady-state operation. For this MCPR it is shown that only a very small number of fuel rods suffers a short-term dryout during the transient. It is known from experience that the limiting transient for the determination of the MCPR is the turbine trip with blocked bypass system. This fast transient was simulated for a German BWR by use of the three-dimensional reactor analysis transient code SIMULATE-3K. The transient behaviour of the hot channels was used as input for the dryout calculation with the transient thermal-hydraulics code FRANCESCA. In this way the maximum reduction of the CPR during the transient could be calculated. The fast increase in reactor power due to the pressure increase and to an increased core inlet flow is limited mainly by the Doppler effect, but automatically triggered operational measures can also contribute to the mitigation of the turbine trip. One very important measure is the short-term fast reduction of the recirculation pump speed, which is initiated, e.g., by a pressure increase in front of the turbine. The large impacts of the starting time and of the rate of the pump speed reduction on the power progression, and hence on the deterioration of the CPR, are presented. Another important procedure to limit the effects of the transient is the fast shutdown of the reactor, triggered when the reactor power reaches the limit value. It is shown that the SCRAM is not fast enough to reduce the first power maximum, but is able to prevent the appearance of a second, much smaller, maximum that would occur around one second after the first one in the absence of a SCRAM. (author)
3D relaxation MHD modeling with FOI-PERFECT code for electromagnetically driven HED systems
NASA Astrophysics Data System (ADS)
Wang, Ganghua; Duan, Shuchao; Xie, Weiping; Kan, Mingxian; Institute of Fluid Physics Collaboration
2015-11-01
One of the challenges in numerical simulations of electromagnetically driven high-energy-density (HED) systems is the existence of a vacuum region. The electromagnetic part of the conventional model adopts the magnetic diffusion approximation (magnetic induction model), and the vacuum region is approximated by artificially increasing the resistivity. On the one hand, the phase/group velocity is then superluminal and hence non-physical in the vacuum region; on the other hand, a diffusion equation with a large diffusion coefficient can only be solved by an implicit scheme, and implicit methods are usually difficult to parallelize and to converge. A better alternative is to solve the full electromagnetic equations for the electromagnetic part. Maxwell's equations coupled with the constitutive equation, the generalized Ohm's law, constitute a relaxation model. The dispersion relation is given to show its transition from electromagnetic propagation in vacuum to resistive MHD in plasma in a natural way. The phase and group velocities are finite for this system. An improved time stepping is adopted to give full third-order convergence in the time domain without the stiff-relaxation-term restriction, making the scheme convenient for explicit and parallel computations. Some numerical results of the FOI-PERFECT code are also given. Project supported by the National Natural Science Foundation of China (Grant Nos. 11172277 and 11205145).
Selective video encryption of a distributed coded bitstream using LDPC codes
NASA Astrophysics Data System (ADS)
Um, Hwayoung; Delp, Edward J.
2006-02-01
Selective encryption is a technique used to minimize computational complexity or enable system functionality by encrypting only a portion of a compressed bitstream while still achieving reasonable security. For selective encryption to work, we need to rely not only on the beneficial effects of redundancy reduction, but also on the characteristics of the compression algorithm to concentrate important data representing the source in a relatively small fraction of the compressed bitstream. These important elements of the compressed data become candidates for selective encryption. In this paper, we combine encryption and distributed video source coding to consider which types of bits are most effective for selective encryption of a video sequence that has been compressed using a distributed source coding method based on LDPC codes. Instead of encrypting the entire video stream bit by bit, we encrypt only the highly sensitive bits. By combining the compression and encryption tasks and thus reducing the number of bits encrypted, we can achieve a reduction in system complexity.
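A toy sketch of the selective-encryption idea: only a sensitive subset of the data (here, coefficient signs) is masked with a keystream, while the bulk of the bitstream stays in the clear. The SHA-256-based keystream is for illustration only; a real system would use a standard cipher such as AES in CTR mode:

```python
import hashlib

def keystream(key, n):
    """Illustrative counter-mode keystream of n bytes (not a real cipher)."""
    out = b''
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, 'big')).digest()
        counter += 1
    return out[:n]

def encrypt_signs(coeffs, key):
    """Flip each coefficient's sign where the keystream bit is set."""
    ks = keystream(key, len(coeffs))
    return [-c if (ks[i] & 1) else c for i, c in enumerate(coeffs)]

def decrypt_signs(coeffs, key):
    return encrypt_signs(coeffs, key)  # the sign-flip mask is its own inverse
```

Because the mask is an involution, the same routine serves for decryption; only the selected (sign) bits ever pass through the cipher.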
Improved video coding efficiency exploiting tree-based pixelwise coding dependencies
NASA Astrophysics Data System (ADS)
Valenzise, Giuseppe; Ortega, Antonio
2010-01-01
In a conventional hybrid video coding scheme, the choice of encoding parameters (motion vectors, quantization parameters, etc.) is carried out by optimizing, frame by frame, the output distortion for a given rate budget. While it is well known that motion estimation naturally induces a chain of dependencies among pixels, this is usually not explicitly exploited in the coding process in order to improve overall coding efficiency. Specifically, when considering a group of pictures with an IPPP... structure, each pixel of the first frame can be thought of as the root of a tree whose children are the pixels of the subsequent frames predicted by it. In this work, we demonstrate the advantages of such a representation by showing that, in some situations, the best motion vector is not the one that minimizes the energy of the prediction residual, but the one that produces a better tree structure, e.g., one that can be globally more favorable from a rate-distortion perspective. In this new structure, pixels with a larger descendance are allocated extra rate to produce higher-quality predictors. As a proof of concept, we verify this assertion by assigning the quantization parameter in a video sequence in such a way that pixels with a larger number of descendants are coded with a higher quality. In this way we are able to improve RD performance by nearly 1 dB. Our preliminary results suggest that a deeper understanding of these temporal dependencies can potentially lead to substantial gains in coding performance.
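The dependency-tree bookkeeping can be sketched as follows: motion vectors chain each predicted pixel back to a root pixel in the first frame, and roots with many descendants are the ones worth extra rate. The frame/MV layout here is an illustrative assumption:

```python
# Sketch of pixelwise dependency counting for an IPPP... group of pictures.
# frames_mvs[t][(x, y)] = (dx, dy) means pixel (x, y) of frame t+1 is
# predicted from pixel (x+dx, y+dy) of frame t.

def count_descendants(frames_mvs, width, height):
    """Return, for each frame-0 root pixel, its number of descendants."""
    root = {(x, y): (x, y) for y in range(height) for x in range(width)}
    counts = {p: 0 for p in root}
    for mvs in frames_mvs:
        new_root = {}
        for (x, y), (dx, dy) in mvs.items():
            r = root.get((x + dx, y + dy))
            if r is not None:           # chain back to a frame-0 root
                new_root[(x, y)] = r
                counts[r] += 1
        root = new_root
    return counts
```

A QP-assignment pass would then lower the quantization parameter for blocks containing high-count roots, as in the paper's proof of concept.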
Coding tools investigation for next generation video coding based on HEVC
NASA Astrophysics Data System (ADS)
Chen, Jianle; Chen, Ying; Karczewicz, Marta; Li, Xiang; Liu, Hongbin; Zhang, Li; Zhao, Xin
2015-09-01
The new state-of-the-art video coding standard, H.265/HEVC, was finalized in 2013 and achieves roughly 50% bit rate saving compared to its predecessor, H.264/MPEG-4 AVC. This paper provides evidence that there is still potential for further coding efficiency improvements. A brief overview of HEVC is first given in the paper. Then, our improvements to each main module of HEVC are presented. For instance, the recursive quadtree block structure is extended to support larger coding units and transform units. The motion information prediction scheme is improved by advanced temporal motion vector prediction, which inherits the motion information of each small block within a large block from a temporal reference picture. Cross-component prediction with a linear prediction model improves intra prediction, and overlapped block motion compensation improves the efficiency of inter prediction. Furthermore, coding of both intra and inter prediction residuals is improved by an adaptive multiple transform technique. Finally, in addition to the deblocking filter and SAO, an adaptive loop filter is applied to further enhance the reconstructed picture quality. This paper describes the above-mentioned techniques in detail and evaluates their coding performance benefits based on the common test conditions used during HEVC development. The simulation results show that significant performance improvement over the HEVC standard can be achieved, especially for high-resolution video materials.
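The recursive quadtree block structure mentioned above can be sketched with a simple variance-driven split rule; the split criterion is an illustrative stand-in for a real encoder's rate-distortion decision:

```python
# Illustrative quadtree partitioning of a coding block: split while the
# block's sample variance exceeds a threshold and the block is larger
# than the minimum size. Threshold and sizes are assumed for the demo.

def split_quadtree(block, x, y, size, min_size=8, thresh=100.0):
    vals = [block[y + j][x + i] for j in range(size) for i in range(size)]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    if size <= min_size or var <= thresh:
        return [(x, y, size)]           # leaf coding unit
    half = size // 2
    leaves = []
    for oy in (0, half):                # recurse into the four quadrants
        for ox in (0, half):
            leaves += split_quadtree(block, x + ox, y + oy, half,
                                     min_size, thresh)
    return leaves
```

Extending the maximum coding-unit size, as the paper proposes, simply lets this recursion start from a larger root block.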
Quantization table design revisited for image/video coding.
Yang, En-Hui; Sun, Chang; Meng, Jin
2014-11-01
Quantization table design is revisited for image/video coding where soft decision quantization (SDQ) is considered. Unlike conventional approaches, where quantization table design is bundled with a specific encoding method, we assume optimal SDQ encoding and design a quantization table for the purpose of reconstruction. Under this assumption, we model transform coefficients across different frequencies as independently distributed random sources and apply the Shannon lower bound to approximate the rate distortion function of each source. We then show that a quantization table can be optimized in a way that the resulting distortion complies with certain behavior. Guided by this new design principle, we propose an efficient statistical-model-based algorithm using the Laplacian model to design quantization tables for DCT-based image coding. When applied to standard JPEG encoding, it provides more than 1.5-dB performance gain in PSNR, with almost no extra burden on complexity. Compared with the state-of-the-art JPEG quantization table optimizer, the proposed algorithm offers an average 0.5-dB gain in PSNR with computational complexity reduced by a factor of more than 2000 when SDQ is OFF, and a 0.2-dB performance gain or more with 85% of the complexity reduced when SDQ is ON. Significant compression performance improvement is also seen when the algorithm is applied to other image coding systems proposed in the literature. PMID:25248184
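The modelling step can be sketched for one frequency band: assuming a Laplacian source with scale b, the Shannon lower bound gives the minimum rate achievable at a target distortion. This is a simplified illustration of the paper's design principle, not its algorithm:

```python
import math

def slb_rate(b, D):
    """Shannon lower bound (in bits) for a Laplacian source with scale b
    at mean-squared distortion D: R(D) >= h(X) - 0.5*ln(2*pi*e*D)."""
    h = 1.0 + math.log(2.0 * b)                       # differential entropy, nats
    r = h - 0.5 * math.log(2.0 * math.pi * math.e * D)
    return max(r, 0.0) / math.log(2.0)                # nats -> bits, clamp at 0
```

Sweeping D per band under a total-rate constraint yields a per-band distortion profile, from which quantization step sizes can then be chosen.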
Dynamic 3D shape of the plantar surface of the foot using coded structured light: a technical report
2014-01-01
Background The foot provides a crucial contribution to the balance and stability of the musculoskeletal system, and accurate foot measurements are important in applications such as designing custom insoles/footwear. With better understanding of the dynamic behavior of the foot, dynamic foot reconstruction techniques are surfacing as useful ways to properly measure the shape of the foot. This paper presents a novel design and implementation of a structured-light prototype system providing dense three-dimensional (3D) measurements of the foot in motion. The input to the system is a video sequence of a foot during a single step; the output is a 3D reconstruction of the plantar surface of the foot for each frame of the input. Methods Engineering and clinical tests were carried out to test the accuracy and repeatability of the system. Accuracy experiments involved imaging a planar surface from different orientations and elevations and measuring the fitting errors of the data to a plane. Repeatability experiments were done using reconstructions from 27 different subjects, where for each one both right and left feet were reconstructed in static and dynamic conditions over two different days. Results The static accuracy of the system was found to be 0.3 mm with planar test objects. In tests with real feet, the system proved repeatable, with reconstruction differences between trials one week apart averaging 2.4 mm (static case) and 2.8 mm (dynamic case). Conclusion The results obtained in the experiments show positive accuracy and repeatability when compared to the current literature. The design is also shown to be superior to the systems available in the literature in several respects. Further studies need to be done to quantify the reliability of the system in clinical environments. PMID:24456711
NASA Astrophysics Data System (ADS)
Mertes, J.; Thomsen, T.; Gulley, J.
2014-12-01
Here we demonstrate the ability to use archived video surveys to create photorealistic 3D models of submerged archeological sites. We created 3D models of two nineteenth-century Great Lakes shipwrecks using diver-acquired video surveys and Structure from Motion (SfM) software. Models were georeferenced using archived hand survey data. Comparison of hand survey measurements and digital measurements made using the models demonstrates that spatial analysis produces results with reasonable accuracy when wreck maps are available. Error associated with digital measurements displayed an inverse relationship to object size: measurement error ranged from a maximum of 18% (on a 0.37 m object) to a minimum of 0.56% (on a 4.21 m object). Our results demonstrate that SfM can generate models of large maritime archaeological sites that can serve research, education and outreach purposes. Where site maps are available, these 3D models can be georeferenced to allow additional spatial analysis long after on-site data collection.
Self-derivation of motion estimation techniques to improve video coding efficiency
NASA Astrophysics Data System (ADS)
Chiu, Yi-jen; Xu, Lidong; Zhang, Wenhao; Jiang, Hong
2010-08-01
This paper presents techniques to self-derive motion vectors (MVs) at the video decoder side to improve the coding efficiency of B pictures. With the MV information self-derived at the video decoder side, the transmission of these MVs from encoder to decoder is skipped, and better coding efficiency can thus be achieved. Our proposed techniques derive block-based MVs at the video decoder side by exploiting the temporal correlation among the available pixels in the previously decoded reference pictures. The use of decoder-side derived MVs can be added as one of the coding mode candidates at the video encoder, which can select this new mode during the mode decision phase to better trade off rate-distortion performance and improve coding efficiency. Experiments have demonstrated an overall BD-rate improvement of about 7% on top of the ITU-T/VCEG Key Technology Area (KTA) reference software platform, using the hierarchical IbBbBbBbP coding structure under the common test conditions of the joint call for proposals for new video coding technology issued by the ISO/MPEG and ITU-T committees in January 2010.
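The decoder-side derivation can be sketched with template matching: encoder and decoder run the identical search using only already-decoded pixels (an inverse-L template above and left of the block), so the resulting MV never needs to be transmitted. Template shape and sizes are illustrative assumptions:

```python
# Sketch of decoder-side MV self-derivation by template matching over
# previously decoded pixels. Block size, template thickness and search
# radius are illustrative; the paper's actual derivation differs in detail.

def template_pixels(bx, by, n, t=2):
    """Inverse-L template: t rows above and t columns left of the block."""
    above = [(bx + i, by - j) for j in range(1, t + 1) for i in range(n)]
    left = [(bx - j, by + i) for j in range(1, t + 1) for i in range(n)]
    return above + left

def derive_mv(decoded, ref, bx, by, n=4, radius=2):
    """Both sides run this identical search, so no MV is signalled."""
    tpl = template_pixels(bx, by, n)
    best, best_cost = (0, 0), float('inf')
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cost = sum(abs(decoded[y][x] - ref[y + dy][x + dx])
                       for x, y in tpl)
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best
```

Since the template is causal (already reconstructed at both ends), the encoder can offer this derived MV as an extra coding mode at zero MV signalling cost.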
Li, Shengtai; Li, Hui
2012-06-14
We develop a 3D simulation code for the interaction between a protoplanetary disk and embedded protoplanets. The protoplanetary disk is treated as a three-dimensional (3D), self-gravitating gas whose motion is described by the locally isothermal Navier-Stokes equations in spherical coordinates centered on the star. The differential equations for the disk are similar to those given in Kley et al. (2009), with a different gravitational potential as defined in Nelson et al. (2000). The equations are solved by a directionally split Godunov method for the inviscid Euler equations plus an operator-split method for the viscous source terms. We use a sub-cycling technique for the azimuthal sweep to alleviate the time-step restriction. We also extend the FARGO scheme of Masset (2000), as modified in Li et al. (2001), to our 3D code to accelerate the transport in the azimuthal direction. Furthermore, we have implemented a reduced 2D (r, θ) and a fully 3D self-gravity solver on our uniform disk grid, which extends our 2D method (Li, Buoni, & Li 2008) to 3D. This solver uses a mode cut-off strategy and combines FFT in the azimuthal direction with direct summation in the radial and meridional directions. An initial axisymmetric equilibrium disk is generated by iterating between the disk density profile and the 2D disk self-gravity. We do not need any softening in the disk self-gravity calculation, as we use a shifted-grid method (Li et al. 2008) to calculate the potential. The motion of the planet is restricted to the mid-plane, and its equations are the same as those given in D'Angelo et al. (2005), which we adapted to polar coordinates with a fourth-order Runge-Kutta solver. The disk's gravitational force on the planet is assumed to evolve linearly with time between two hydrodynamic time steps. The planetary potential acting on the disk is calculated accurately with a small softening given by a cubic-spline form (Kley et al. 2009). Since the torque is extremely sensitive to
NASA Astrophysics Data System (ADS)
Yan, X.; Cai, D.; Nishikawa, K.; Lembege, B.
2004-12-01
Over the past several years we have parallelized the global 3D HPF electromagnetic particle model (EMPM) and have reported simulation results that reveal the essential physics of the interaction of the solar wind with the Earth's magnetosphere, obtained with this EMPM on our PC cluster and supercomputers (Nishikawa et al., 1995; Nishikawa, 1997, 1998a, b, 2001, 2002; D.S. Cai et al., 2001, 2003). Sash patterns and related phenomena have been observed and reported in satellite observations (Fujimoto et al., 1997; Maynard, 2001) and have motivated 3D MHD simulations (White et al., 1998). We also investigated them with our global 3D parallelized HPF EMPM with dawnward IMF By (K.-I. Nishikawa, 1998), and recently a new simulation with duskward IMF By was completed on the new VPP5000 supercomputer of Tsukuba University. In the new simulations we used a larger domain, 305×205×205 cells, a smaller grid spacing (Δ = 0.5 R_E, where R_E is the radius of the Earth), and a larger total particle number, 220,000,000 (about 8 pairs per cell). We first run the code until a quasi-stationary state is reached; once this state is established, we apply a northward IMF (Bz = 0.2) and wait until the IMF arrives at the magnetopause. After its arrival, we rotate the IMF from northward to duskward (IMF By = -0.2). The results reveal that a groove structure forms at the dayside magnetopause, causing particle entry into the inner magnetosphere, and that a cross structure, or S-structure, forms in the near magnetotail. Moreover, in contrast with MHD simulations, the kinetic character of this event is analyzed self-consistently within the simulation. The new simulation provides new and more detailed insights into the observed sash event.
Adaptive distributed video coding with correlation estimation using expectation propagation
NASA Astrophysics Data System (ADS)
Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel
2012-10-01
Distributed video coding (DVC) is rapidly gaining popularity because it shifts complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. The ability to obtain a good correlation estimate is therefore increasingly important in practical DVC implementations. Existing correlation estimation methods in DVC fall into two main types: pre-estimation, where estimation is completed before decoding, and on-the-fly (OTF) estimation, where the estimate is refined iteratively during decoding. Because changes between frames can be unpredictable or dynamic, OTF methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, jointly with decoding of the factor-graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and variants without correlation tracking, and achieves decoding performance comparable to sampling methods at significantly lower complexity.
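The on-the-fly idea can be illustrated with a simple sketch (not the paper's EP algorithm): the WZ-SI residual is commonly modeled as Laplacian, and its scale parameter can be re-estimated from whatever portion of the WZ frame has been decoded so far, so that soft inputs for the remaining bit-planes use an up-to-date correlation model:

```python
import numpy as np

def laplacian_alpha(residuals):
    """Maximum-likelihood estimate of alpha for the Laplacian residual
    model p(r) = (alpha / 2) * exp(-alpha * |r|), a common model for the
    correlation between Wyner-Ziv and side-information frames."""
    b = np.mean(np.abs(residuals))   # ML estimate of the scale b = 1/alpha
    return 1.0 / max(b, 1e-9)        # guard against an all-zero residual

def refine_on_the_fly(si_pixels, wz_pixels_decoded):
    """After each partial decoding pass, update alpha from the residual
    of the WZ pixels reconstructed so far against the side information."""
    return laplacian_alpha(wz_pixels_decoded - si_pixels[:wz_pixels_decoded.size])
```

In a full decoder this refreshed alpha would feed the log-likelihood ratios of the syndrome decoder on the next iteration; here it is shown only as the statistical update itself.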
NASA Astrophysics Data System (ADS)
Gillespie, K. M.; Speirs, D. C.; Ronald, K.; McConville, S. L.; Phelps, A. D. R.; Bingham, R.; Cross, A. W.; Robertson, C. W.; Whyte, C. G.; He, W.; Vorgul, I.; Cairns, R. A.; Kellett, B. J.
2008-12-01
Auroral Kilometric Radiation (AKR) occurs naturally in the polar regions of the Earth's magnetosphere, where electrons are accelerated by electric fields into the increasing planetary magnetic dipole. Here, conservation of the magnetic moment converts axial to rotational momentum, forming a horseshoe distribution in velocity phase space. This distribution is unstable to cyclotron emission, with radiation emitted in the X-mode. In a scaled laboratory reproduction of this process, a 75-85 keV electron beam of 5-40 A was magnetically compressed by a system of solenoids, and emissions were observed at cyclotron frequencies of 4.42 GHz and 11.7 GHz, resonating with the near-cutoff TE0,1 and TE0,3 modes, respectively. Here we compare these measurements with numerical predictions from the 3D PiC code KARAT. The 3D simulations accurately predicted the radiation modes and frequencies produced by the experiment. The predicted conversion efficiency of around 1% between electron kinetic energy and wave field energy is close to the experimental measurements and broadly consistent with quasi-linear theoretical analysis and geophysical observations.
Koniges, A; Eder, E; Liu, W; Barnard, J; Friedman, A; Logan, G; Fisher, A; Masers, N; Bertozzi, A
2011-11-04
The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial commissioning in 2012. The final design calls for a 3 MeV Li+ ion beam, delivered in a bunch with a characteristic pulse duration of 1 ns and a transverse dimension of order 1 mm. NDCX II will be used in studies of material in the warm dense matter (WDM) regime and in ion beam/hydrodynamic coupling experiments relevant to heavy-ion-based inertial fusion energy. We discuss recent efforts to adapt the 3D ALE-AMR code to model WDM experiments on NDCX II. The code, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR), has physics models that include ion deposition, radiation hydrodynamics, thermal diffusion, anisotropic material strength with material time history, and advanced models for fragmentation. Experiments at NDCX-II will explore the process of bubble and droplet formation (two-phase expansion) of superheated metal solids using ion beams. Experiments at higher temperatures will explore the equation of state and heavy-ion-fusion beam-to-target energy coupling efficiency. Ion beams allow precise control of local beam energy deposition, providing uniform volumetric heating on a timescale shorter than that of hydrodynamic expansion. The ALE-AMR code has no export control restrictions, is currently running at the National Energy Research Scientific Computing Center (NERSC) at LBNL, and has been shown to scale well to thousands of CPUs. New surface tension models are being implemented and applied to WDM experiments. One approach uses a diffuse-interface surface tension model based on the advective Cahn-Hilliard equations, which allows droplet breakup in divergent velocity fields without the need for imposed perturbations; other methods require seeding or similar mechanisms for droplet breakup. We also briefly discuss the effects of the move to exascale computing and related
A study of the earth radiation budget using a 3D Monte-Carlo radiative transfer code
NASA Astrophysics Data System (ADS)
Okata, M.; Nakajima, T.; Sato, Y.; Inoue, T.; Donovan, D. P.
2013-12-01
The purpose of this study is to evaluate the earth's radiation budget when data are available from satellite-borne active sensors, i.e. cloud profiling radar (CPR) and lidar, and a multi-spectral imager (MSI), in the Earth Explorer/EarthCARE mission. For this purpose, we first developed forward and backward 3D Monte Carlo radiative transfer codes that can perform a broadband solar flux calculation, including thermal infrared emission, using the k-distribution parameters of Sekiguchi and Nakajima (2008). To construct the 3D cloud field, we tried the following three methods: 1) stochastic clouds generated from randomized per-layer optical thickness distributions and regularly distributed tilted clouds, 2) numerical simulations by a non-hydrostatic model with a bin cloud microphysics model, and 3) the Minimum cloud Information Deviation Profiling Method (MIDPM), explained later. For method 2 (the numerical modeling method), we employed simulations of Californian summer stratus clouds by a non-hydrostatic atmospheric model with a bin-type cloud microphysics model based on the JMA NHM model (Iguchi et al., 2008; Sato et al., 2009, 2012), with horizontal (vertical) grid spacings of 100 m (20 m) and 300 m (20 m) in a domain of 30 km (x), 30 km (y), 1.5 km (z), and with a horizontally periodic lateral boundary condition. Two different cell systems were simulated depending on the cloud condensation nuclei (CCN) concentration. In the case of 100 m horizontal resolution, regionally averaged cloud optical thickness,
Spatio-temporal correlation-based fast coding unit depth decision for high efficiency video coding
NASA Astrophysics Data System (ADS)
Zhou, Chengtao; Zhou, Fan; Chen, Yaowu
2013-10-01
The exhaustive block-partition search in high efficiency video coding (HEVC) imposes very high computational complexity on the HEVC test model encoder (HM). A fast coding unit (CU) depth algorithm is proposed that uses the spatio-temporal correlation of depth information to speed up the search. The depth of the coding tree unit (CTU) is first predicted from the depth information of spatio-temporally neighboring CTUs. Then, the depth information of adjacent CUs is used to skip specific depths when encoding the sub-CTU. Compared with the original HM encoder, experimental results show that the proposed algorithm saves more than 20% encoding time on average for the intra-only, low-delay, low-delay P slice, and random access cases, with almost the same rate-distortion performance.
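The prediction step can be sketched as restricting the CU depth search to the range spanned by the spatio-temporal neighbor CTUs; this is a simplified illustration, and the skip rules of the actual algorithm are more involved:

```python
def predicted_depth_range(neighbor_depths, full_range=(0, 3)):
    """Restrict the CU depth search for the current CTU to the range
    spanned by the depths of spatio-temporally neighboring CTUs
    (e.g. left, above, above-left, above-right and the co-located CTU
    in the previous frame), instead of trying all depths 0..3.
    None marks an unavailable neighbor (picture border, first frame)."""
    known = [d for d in neighbor_depths if d is not None]
    if not known:          # no neighbors available: fall back to full search
        return full_range
    return (min(known), max(known))

def depths_to_test(neighbor_depths):
    """List of CU depths the encoder actually evaluates for this CTU."""
    lo, hi = predicted_depth_range(neighbor_depths)
    return list(range(lo, hi + 1))
```

When all neighbors agree on a shallow depth (large, homogeneous blocks), the deepest partitions are never evaluated, which is where the encoding-time saving comes from.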
NASA Astrophysics Data System (ADS)
Osorio, Angel; Galan, Juan-Antonio; Nauroy, Julien; Donars, Patricia
2010-02-01
When performing laparoscopies and punctures, precise anatomic localization is required. Current techniques very often rely on mapping between the real situation and preoperative images. The PC-based software we present performs 3D segmentation of regions of interest from CT or MR slices. It allows the planning of puncture or trocar insertion trajectories, taking anatomical constraints into account. Geometrical transformations allow the projection of realistically reconstructed organ and lesion shapes over the patient's body, using a standard video projector in the operating room. We developed specific image-processing software that automatically segments and registers images from a webcam used in the operating room to give feedback to the user.
Holford, D.J.
1994-01-01
This document is a user's manual for the Rn3D finite element code. Rn3D was developed to simulate gas flow and radon transport in variably saturated, nonisothermal porous media. The Rn3D model is applicable to a wide range of problems involving radon transport in soil because it can simulate either steady-state or transient flow and transport in one, two or three dimensions (including radially symmetric two-dimensional problems). The porous materials may be heterogeneous and anisotropic. This manual describes all pertinent mathematics related to the governing, boundary, and constitutive equations of the model, as well as the development of the finite element equations used in the code. Instructions are given for constructing Rn3D input files and executing the code, as well as a description of all output files generated by the code. Five verification problems are given that test various aspects of code operation, complete with example input files, FORTRAN programs for the respective analytical solutions, and plots of model results. An example simulation is presented to illustrate the type of problem Rn3D is designed to solve. Finally, instructions are given on how to convert Rn3D to simulate systems other than radon, air, and water.
Low Complexity Mode Decision for 3D-HEVC
Li, Nana; Gan, Yong
2014-01-01
High efficiency video coding- (HEVC-) based 3D video coding (3D-HEVC), developed by the joint collaborative team on 3D video coding (JCT-3V) for multiview video and depth maps, is an extension of the HEVC standard. In the 3D-HEVC test model, variable coding unit (CU) size decision and disparity estimation (DE) are introduced to achieve the highest coding efficiency, at the cost of very high computational complexity. In this paper, a fast mode decision algorithm based on variable-size CU and DE is proposed to reduce 3D-HEVC computational complexity. The basic idea is to exploit the correlation between the depth map and motion activity to identify the regions where variable-size CU and DE are needed in prediction, and to enable them only in those regions. Experimental results show that the proposed algorithm saves about 43% of the computational complexity of 3D-HEVC on average while maintaining almost the same rate-distortion (RD) performance. PMID:25254237
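A minimal sketch of the region-gating idea, with illustrative thresholds that are assumptions rather than values from the 3D-HEVC test model:

```python
import numpy as np

def enable_full_search(depth_block, motion_activity,
                       var_thresh=100.0, mot_thresh=1.0):
    """Decide whether to enable variable CU-size splitting and disparity
    estimation (DE) for a block: a homogeneous, static depth region keeps
    the largest CU and skips DE, while a region containing depth edges or
    motion gets the full search. The variance and motion-activity
    thresholds here are purely illustrative."""
    depth_is_flat = np.var(depth_block) <= var_thresh
    is_static = motion_activity <= mot_thresh
    return not (depth_is_flat and is_static)
```

Gating the expensive tools this way trades a small RD loss in misclassified blocks for skipping most of the partition and disparity search.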
3D-Reconstruction of recent volcanic activity from ROV-video, Charles Darwin Seamounts, Cape Verdes
NASA Astrophysics Data System (ADS)
Kwasnitschka, T.; Hansteen, T. H.; Kutterolf, S.; Freundt, A.; Devey, C. W.
2011-12-01
As well as providing well-localized samples, Remotely Operated Vehicles (ROVs) produce huge quantities of visual data whose potential for geological data mining has seldom, if ever, been fully realized. We present a new workflow to derive essential results of field geology, such as quantitative stratigraphy and tectonic surveying, from ROV-based photo and video material. We demonstrate the procedure on the Charles Darwin Seamounts, a field of small hot-spot volcanoes recently identified at a depth of ca. 3500 m southwest of the island of Santo Antao in the Cape Verdes. The Charles Darwin Seamounts feature a wide spectrum of volcanic edifices with forms suggestive of scoria cones, lava domes, tuff rings and maar-type depressions, all of comparable dimensions. These forms, coupled with the highly fragmented volcaniclastic samples recovered by dredging, motivated surveying parts of some edifices down to centimeter scale. ROV-based surveys yielded volcaniclastic samples of key structures, linked by extensive coverage of stereoscopic photographs and high-resolution video. Based upon the latter, we present our workflow to derive three-dimensional models of outcrops from a single-camera video sequence, allowing quantitative measurement of fault orientation, bedding structure and grain size distribution, as well as photo mosaicking, within a geo-referenced framework. With this information we can identify episodes of repetitive eruptive activity at individual volcanic centers and trace changes in eruptive style over time, which is highly variable despite the centers' proximity to one another.
NASA Astrophysics Data System (ADS)
Pletinckx, D.
2011-09-01
The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, our offices, our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn and communicate? The CARARE project is currently realising this in the domain of monuments and archaeology, so that real 3D representations of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge of how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects into physical or virtual museums. Nevertheless, Europeana has tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video and sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user, and what metadata is needed to support the semantic linking.
MiR-10a* up-regulates coxsackievirus B3 biosynthesis by targeting the 3D-coding sequence
Tong, Lei; Lin, Lexun; Wu, Shuo; Guo, Zhiwei; Wang, Tianying; Qin, Ying; Wang, Ruixue; Zhong, Xiaoyan; Wu, Xia; Wang, Yan; Luan, Tian; Wang, Qiang; Li, Yunxia; Chen, Xiaofeng; Zhang, Fengmin; Zhao, Wenran; Zhong, Zhaohua
2013-01-01
MicroRNAs (miRNAs) are small non-coding RNAs that can posttranscriptionally regulate gene expression by targeting messenger RNAs. During miRNA biogenesis, the star strand (miRNA*) is generally degraded to a low level in the cells. However, certain miRNA*s are expressed abundantly and can be recruited into the silencing complex to regulate gene expression. Most miRNAs function as suppressive regulators of gene expression. Group B coxsackieviruses (CVB) are the major pathogens of human viral myocarditis and dilated cardiomyopathy. The CVB genome is a positive-sense, single-stranded RNA. Our previous study showed that miR-342-5p can suppress CVB biogenesis by targeting its 2C-coding sequence. In this study, we found that the miR-10a duplex could significantly up-regulate the biosynthesis of CVB type 3 (CVB3). Further study showed that it was the miR-10a star strand (miR-10a*) that augmented CVB3 biosynthesis. Site-directed mutagenesis showed that the miR-10a* target was located in the nt6818–nt6941 sequence of the viral 3D-coding region. MiR-10a* was detectable in the cardiac tissues of suckling Balb/c mice, suggesting that miR-10a* may impact CVB3 replication during cardiac infection. Taken together, these data show for the first time that a miRNA* can positively modulate gene expression. MiR-10a* might be involved in CVB3 cardiac pathogenesis. PMID:23389951
NASA Astrophysics Data System (ADS)
Ghosh, Shila; Chatterji, B. N.
2007-09-01
A theoretical investigation evaluating the performance of optical code division multiple access (OCDMA) for compressed video transmission is presented. OCDMA has many advantages over a typical synchronous protocol such as time division multiple access (TDMA). Pulsed-laser transmission of multichannel digital video can be implemented using various techniques, depending on whether the multichannel data are synchronous or asynchronous. A typical form of asynchronous digital operation is wavelength division multiplexing (WDM), in which the digital data of each video source are assigned a specific, separate wavelength; this requires sophisticated hardware, such as accurate wavelength control of all lasers and tunable narrow-band optical filters at the receivers. A major disadvantage of CDMA is the reduction in per-channel data rate (relative to the speeds available from the laser itself) incurred by inserting the code addressing. Optical CDMA for video transmission is therefore practical when individual channel video bit rates can be significantly reduced, which can be done by compressing the video data. In our work the JPEG standard is used for video image compression, yielding a compression ratio of about 60% without noticeable image degradation; compared to other existing techniques, the JPEG standard achieves a higher compression ratio with a high S/N ratio. We demonstrate the auto- and cross-correlation properties of the codes, and show the implementation of bipolar Walsh coding in an OCDMA system and its use in the transmission of images and video.
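Bipolar Walsh codes and their zero-shift correlation properties can be sketched as follows (a generic Sylvester construction, not the specific code length used in the study):

```python
import numpy as np

def walsh_codes(n):
    """Bipolar (+1/-1) Walsh codes of length n (n a power of two), built
    with the Sylvester recursion H_2n = [[H, H], [H, -H]]. Each row of
    the returned matrix is one user's spreading code."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

W = walsh_codes(8)
# Zero-shift correlations: each code correlates with itself with peak n,
# and with every other code with exactly 0 (rows are mutually orthogonal),
# which is what lets synchronized OCDMA users share the channel.
```

The `W @ W.T` product makes the property explicit: it is `n` times the identity matrix, i.e. autocorrelation peak `n` on the diagonal and zero cross-correlation elsewhere.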
NASA Astrophysics Data System (ADS)
Tsujimura, T., Ii; Kubo, S.; Takahashi, H.; Makino, R.; Seki, R.; Yoshimura, Y.; Igami, H.; Shimozuma, T.; Ida, K.; Suzuki, C.; Emoto, M.; Yokoyama, M.; Kobayashi, T.; Moon, C.; Nagaoka, K.; Osakabe, M.; Kobayashi, S.; Ito, S.; Mizuno, Y.; Okada, K.; Ejiri, A.; Mutoh, T.
2015-11-01
The central electron temperature has successfully reached up to 7.5 keV in large helical device (LHD) plasmas with a high central ion temperature of 5 keV and a central electron density of 1.3 × 10^19 m^-3. This result was obtained by heating with a newly installed 154 GHz gyrotron and by optimising the injection geometry in electron cyclotron heating (ECH). The optimisation was carried out using the ray-tracing code 'LHDGauss', which was upgraded to include rapid post-processing of the three-dimensional (3D) equilibrium mapping obtained from experiments. For ray-tracing calculations, LHDGauss can automatically read the relevant data registered in the LHD database after a discharge, such as the ECH injection settings (e.g. Gaussian beam parameters, target positions, polarisation and ECH power) and Thomson scattering diagnostic data, along with the 3D equilibrium mapping data. The equilibrium-mapped electron density and temperature profiles are then extrapolated into the region outside the last closed flux surface. Mode purity, i.e. the ratio between the ordinary mode and the extraordinary mode, is obtained by solving the 1D full-wave equation along the direction of the rays from the antenna to the absorption target point. Using the virtual magnetic flux surfaces, the effects of the modelled density profiles and the magnetic shear in the peripheral region for a given polarisation are taken into account. Power deposition profiles calculated at each Thomson scattering measurement timing are registered in the LHD database. Adjusting the injection settings toward the desired deposition profile, with feedback provided on a shot-by-shot basis, resulted in an effective experimental procedure.
Design and implementation of H.264 based embedded video coding technology
NASA Astrophysics Data System (ADS)
Mao, Jian; Liu, Jinming; Zhang, Jiemin
2016-03-01
In this paper, an embedded system for remote online video monitoring is designed and developed to capture and record real-time conditions inside an elevator. To improve the efficiency of video acquisition and processing, the system uses the Samsung S5PV210 chip, which integrates a graphics processing unit, as its core processor, and the video is encoded in the H.264 format for efficient storage and transmission. Based on the S5PV210 chip, hardware video coding technology was investigated, which is more efficient than software coding. Running tests proved that hardware video coding can markedly reduce the cost of the system and produce smoother video display. It can be widely applied to security supervision [1].
NASA Astrophysics Data System (ADS)
Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi
2005-10-01
MPEG-4 treats a scene as a composition of several objects, or so-called video object planes (VOPs), that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects at different distortion scales. It is necessary to prioritize the video objects according to their semantic importance, intrinsic properties and psycho-visual characteristics, so that the bit budget can be distributed properly among the objects to improve the perceptual quality of the compressed video. This paper provides an automatic video object priority definition method based on an object-level visual attention model and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of each video object is obtained automatically, instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with the traditional verification model bit allocation and with optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of higher-priority objects is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.
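The priority-weighted allocation idea can be sketched as follows; the simple proportional rule and the weights are an illustrative simplification, not the paper's attention model or optimization framework:

```python
def allocate_bits(frame_budget, priorities):
    """Distribute a frame's bit budget across video object planes (VOPs)
    in proportion to their priority weights, e.g. weights produced by an
    object-level visual attention model. Higher-priority objects receive
    a larger share of the budget and hence lower distortion."""
    total = sum(priorities)
    return [frame_budget * p / total for p in priorities]
```

For example, with weights 3, 1, 1 (a salient foreground object and two background objects) and a 1000-bit budget, the foreground VOP receives 600 bits and each background VOP 200.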
NASA Astrophysics Data System (ADS)
Dolgoff, Eugene
1997-05-01
Floating Images, Inc. has developed the software and hardware for a new, patent-pending, 'floating 3-D, off-the-screen experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3-D graphics' technologies are actually flat on screen. Floating Images™ technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3-D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax (the ability to look around foreground objects to see previously hidden background objects, with each eye seeing a different view at all times) and accommodation (the need to re-focus one's eyes when shifting attention from a near object to a distant object), which coincides with convergence (the need to re-aim one's eyes when shifting attention from a near object to a distant object). Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye-strain, regardless of how long they are viewed (unlike stereoscopic and autostereoscopic displays). The imagery (video or computer generated) must either be formatted for the Floating Images™ platform when written, or existing software can be re-formatted without much difficulty.
Semi-automatic 2D-to-3D conversion of human-centered videos enhanced by age and gender estimation
NASA Astrophysics Data System (ADS)
Fard, Mani B.; Bayazit, Ulug
2014-01-01
In this work, we propose a feasible 3D video generation method that enables high-quality visual perception using a monocular uncalibrated camera. Anthropometric distances between standard facial landmarks are approximated based on the person's age and gender. These measurements are used in a two-stage approach to facilitate the construction of binocular stereo images. Specifically, one view of the background is registered in the initial stage of video shooting, followed by an automatically guided displacement of the camera toward its secondary position. At the secondary position, real-time capturing starts and the foreground (viewed person) region is extracted in each frame. After an accurate parallax estimation, the extracted foreground is placed in front of the background image captured at the initial position. The constructed full view of the initial position, combined with the view of the secondary (current) position, thus forms the complete binocular pair during real-time video shooting. Subjective evaluation results indicate competent depth-perception quality for the proposed system.
NASA Astrophysics Data System (ADS)
Cai, D.; Yan, X.; Lembege, B.; Nishikawa, K.
2003-12-01
We report new progress in the long-term effort to represent the global interaction of the solar wind with the Earth's magnetosphere using a three-dimensional electromagnetic particle code with improved resolution, based on the HPF Tristan code. After a quasi-steady state is established with an unmagnetized solar wind, we gradually switch on a northward interplanetary magnetic field (IMF), which causes magnetic reconnection at the nightside cusps and dipolarization of the magnetosphere. When the northward IMF is switched gradually to dawnward, there is no signature of reconnection in the near-Earth magnetotail such as occurs with southward turning. Analysis of the magnetic fields at the magnetopause, however, confirms a signature of magnetic reconnection on both the dawnside and duskside, and the plasma sheet in the near-Earth magnetotail clearly thins, as in the case of southward turning. Arrival of the dawnward IMF at the magnetopause creates a reconnection groove, which causes particle entry into the deep regions of the magnetosphere via field lines that pass near the magnetopause. This deep entry is more fully developed tailward of Earth. The flank weak-field fan joins onto the plasma sheet and the current sheet to form a geometrical feature called the cross-tail S that structurally integrates the magnetopause and the tail interior. This structure contributes to direct plasma entry from the magnetosheath to the inner magnetosphere and plasma sheet, a process that heats the magnetosheath plasma to plasma-sheet temperatures. These phenomena have been found in Cluster observations. Further investigation with Cluster observations will provide new insights into unsolved problems such as hot flow anomalies (HFAs), substorms, and the storm-substorm relationship. 3-D movies of the sash structure will be presented at the meeting.
Mahe, Charly; Chabal, Caroline
2013-07-01
The CEA has developed many compact characterization tools to follow sensitive operations in a nuclear environment. Usually, these devices are used to carry out radiological inventories, to prepare nuclear interventions or to supervise special operations. These in situ measurement techniques mainly take place at different stages of clean-up operations and decommissioning projects, but they are also used to supervise sensitive operations while the nuclear plant is still operating. In addition, such tools are often associated with robots to access very highly radioactive areas, and thus can be used in accident situations. Last but not least, the radiological data collected can be entered into 3D calculation codes used to simulate the doses absorbed by workers in real time during operations in a nuclear environment. Faced with these ever-greater needs, nuclear measurement instrumentation must undergo continual improvement. Firstly, this paper describes the latest developments and results obtained with both gamma and alpha imaging techniques. The gamma camera has been used by the CEA since the 1990s, and several changes have made this device more sensitive, more compact and more competitive for nuclear plant operations. It is used to quickly identify hot spots, locating irradiating sources from 50 keV to 1500 keV. Several examples from a wide field of applications are presented, together with the very latest developments. The alpha camera is a new camera used to see invisible alpha contamination on several kinds of surfaces. The latest results allow real-time supervision of a glove-box cleaning operation (for ²⁴¹Am contamination). The detection principle as well as the main trials and results obtained are presented. Secondly, this paper focuses on in situ gamma spectrometry methods developed by the CEA with compact gamma spectrometry probes (CdZnTe, LaBr₃, NaI, etc.). The radiological data collected is used
NASA Astrophysics Data System (ADS)
Davis, A. B.; Cahalan, R. F.
2001-05-01
The Intercomparison of 3D Radiation Codes (I3RC) is an ongoing initiative involving an international group of over 30 researchers engaged in the numerical modeling of three-dimensional radiative transfer as applied to clouds. Because of their strong variability and extreme opacity, clouds are a major source of uncertainty in the Earth's local radiation budget (at GCM grid scales). Moreover, 3D effects (at satellite pixel scales) invalidate the standard plane-parallel assumption made in routine cloud-property remote sensing at NASA and NOAA. Accordingly, the test cases used in I3RC are based on inputs and outputs that relate to cloud effects on atmospheric heating rates and to real-world remote-sensing geometries. The main objectives of I3RC are to (1) enable participants to improve their models, (2) publish results as a community, (3) archive source code, and (4) educate. We survey the status of I3RC and its plans for the near future, with special emphasis on the mathematical models and computational approaches. We also describe some of the prime applications of I3RC's efforts in climate models, cloud-resolving models, and remote-sensing observations of clouds, or of the surface in their presence. In all these application areas, computational efficiency, not accuracy, is the main concern. One of I3RC's main goals is to document the performance of as wide a variety as possible of three-dimensional radiative transfer models for a small but representative number of ``cases.'' However, the project is dominated by modelers working at the level of linear transport theory (i.e., they solve the radiative transfer equation), and an overwhelming majority of these participants use slow-but-robust Monte Carlo techniques. This means that only a small portion of the efficiency vs. accuracy vs. flexibility domain is currently populated by I3RC participants. To balance this natural clustering the present authors have organized a systematic outreach towards
Application of the Finite Orbit Width Version of the CQL3D Code to NBI +RF Heating of NSTX Plasma
NASA Astrophysics Data System (ADS)
Petrov, Yu. V.; Harvey, R. W.
2015-11-01
The CQL3D bounce-averaged Fokker-Planck (FP) code has been upgraded to include Finite-Orbit-Width (FOW) effects. The calculations can be done either with a fast Hybrid-FOW option or with a slower but neoclassically complete full-FOW option. The banana regime neoclassical radial transport appears naturally in the full-FOW version by averaging the local collision coefficients along guiding center orbits, with a proper transformation matrix from local (R, Z) coordinates to the midplane computational coordinates, where the FP equation is solved. In a similar way, the local quasilinear rf diffusion terms give rise to additional radial transport of orbits. The full-FOW version is applied to simulation of ion heating in NSTX plasma. It is demonstrated that it can describe the physics of transport phenomena in plasma with auxiliary heating, in particular, the enhancement of the radial transport of ions by RF heating and the occurrence of the bootstrap current. Because of the bounce-averaging on the FPE, the results are obtained in a relatively short computational time. A typical full-FOW run time is 30 min using 140 MPI cores. Due to an implicit solver, calculations with a large time step (tested up to dt = 0.5 sec) remain stable. Supported by USDOE grants SC0006614, ER54744, and ER44649.
Lucas, Joseph S.; Zhang, Yaojun; Dudko, Olga K.; Murre, Cornelis
2014-01-01
SUMMARY During B lymphocyte development, immunoglobulin heavy chain variable (VH), diversity (DH) and joining (JH) segments assemble to generate a diverse antigen receptor repertoire. Here we have marked the distal VH and DH-JH-Eμ regions with Tet-operator binding sites and traced their 3D-trajectories in pro-B cells transduced with a retrovirus encoding Tet-repressor-EGFP. We found that these elements displayed fractional Langevin motion (fLm) due to the viscoelastic hindrance from the surrounding network of proteins and chromatin fibers. Using fractional Langevin dynamics modeling, we found that, with high probability, DHJH elements reach a VH element within minutes. Spatial confinement emerged as the dominant parameter that determined the frequency of such encounters. We propose that the viscoelastic nature of the nuclear environment causes coding elements and regulatory elements to bounce back and forth in a spring-like fashion until specific genomic interactions are established and that spatial confinement of topological domains largely controls first-passage times for genomic interactions. PMID:24998931
Whirley, R.G.; Engelmann, B.E.
1993-11-01
This report is the User Manual for the 1993 version of DYNA3D, and also serves as a User Guide. DYNA3D is a nonlinear, explicit, finite element code for analyzing the transient dynamic response of three-dimensional solids and structures. The code is fully vectorized and is available on several computer platforms. DYNA3D includes solid, shell, beam, and truss elements to allow maximum flexibility in modeling physical problems. Many material models are available to represent a wide range of material behavior, including elasticity, plasticity, composites, thermal effects, and rate dependence. In addition, DYNA3D has a sophisticated contact interface capability, including frictional sliding and single surface contact. Rigid materials provide added modeling flexibility. A material model driver with interactive graphics display is incorporated into DYNA3D to permit accurate modeling of complex material response based on experimental data. Along with the DYNA3D Example Problem Manual, this document provides the information necessary to apply DYNA3D to solve a wide range of engineering analysis problems.
Scholten, Hanneke; Malmberg, Monique; Lobel, Adam; Engels, Rutger C. M. E.; Granic, Isabela
2016-01-01
Adolescent anxiety is debilitating, the most frequently diagnosed adolescent mental health problem, and leads to substantial long-term problems. A randomized controlled trial (n = 138) was conducted to test the effectiveness of a biofeedback video game (Dojo) for adolescents with elevated levels of anxiety. Adolescents (11–15 years old) were randomly assigned to play Dojo or a control game (Rayman 2: The Great Escape). Initial screening for anxiety was done on 1,347 adolescents in five high schools; only adolescents who scored above the “at-risk” cut-off on the Spence Children Anxiety Survey were eligible. Adolescents’ anxiety levels were assessed at pre-test, post-test, and at three month follow-up to examine the extent to which playing Dojo decreased adolescents’ anxiety. The present study revealed equal improvements in anxiety symptoms in both conditions at follow-up and no differences between Dojo and the closely matched control game condition. Latent growth curve models did reveal a steeper decrease of personalized anxiety symptoms (not of total anxiety symptoms) in the Dojo condition compared to the control condition. Moderation analyses did not show any differences in outcomes between boys and girls nor did age differentiate outcomes. The present results are of importance for prevention science, as this was the first full-scale randomized controlled trial testing indicated prevention effects of a video game aimed at reducing anxiety. Future research should carefully consider the choice of control condition and outcome measurements, address the potentially high impact of participants’ expectations, and take critical design issues into consideration, such as individual- versus group-based intervention and contamination issues. PMID:26816292
NASA Astrophysics Data System (ADS)
Karwowski, Damian; Domański, Marek
2016-01-01
An improved context-based adaptive binary arithmetic coding (CABAC) is presented. The idea behind the improvement is to use a more accurate mechanism for estimating symbol probabilities in the standard CABAC algorithm. The authors' proposed mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate savings compared to the original CABAC algorithm. The proposed algorithm only marginally affects the complexity of the HEVC video encoder, but the complexity of the video decoder increases by 32% to 38%. To decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives a 5% to 7.5% reduction in decoding time while still maintaining high efficiency in data compression.
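The probability-estimation idea behind this abstract can be illustrated with a toy adaptive binary model. This is a sketch only, not the HEVC CABAC state machine or the paper's context-tree-weighting estimator: a simple exponential estimator shows how the ideal arithmetic-coding cost of a bin, -log2 of its estimated probability, drops as the estimate adapts to a skewed source.

```python
import math

class AdaptiveBinaryModel:
    """Toy per-context probability estimator (illustrative only)."""

    def __init__(self, p_one=0.5, rate=0.05):
        self.p_one = p_one  # current estimate of P(bin == 1)
        self.rate = rate    # adaptation speed

    def cost_bits(self, bit):
        # Ideal arithmetic-coding cost of this bin: -log2(estimated p).
        p = self.p_one if bit else 1.0 - self.p_one
        return -math.log2(p)

    def update(self, bit):
        # Move the estimate toward the observed bin value.
        self.p_one += self.rate * ((1.0 if bit else 0.0) - self.p_one)

model = AdaptiveBinaryModel()
stream = [1, 1, 1, 0, 1, 1, 1, 1]  # skewed binary source
total = 0.0
for b in stream:
    total += model.cost_bits(b)
    model.update(b)
# The adaptive model codes these 8 bins in fewer than 8 ideal bits;
# a better estimator (e.g. context-tree weighting) lowers the cost further.
print(round(total, 2))
```

A real CABAC additionally maintains one such model per context and turns the per-bin cost into actual bits with an interval-subdivision arithmetic coder.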
Comparison of Video Coding Methods to Pay Attention in Anchoring Effect
NASA Astrophysics Data System (ADS)
Imaizumi, Keisuke; Sugiura, Akihiko
In this study, we propose exploiting the anchoring effect, a well-known cognitive bias, as a new approach to video encoding, and we suggest a technique for applying it. Our experiments showed that displaying a high-definition segment in the early part of a video makes the remainder appear clearer than the original video. We also found that the anchoring effect appears most strongly in low-rate video coding, and that when the video rate is changed smoothly, the effect yields perceived clearness in a video with a high average rate.
3D Hydrodynamic Simulations with Yguazú-A Code to Model a Jet in a Galaxy Cluster
NASA Astrophysics Data System (ADS)
Haro-Corzo, S. A. R.; Velazquez, P.; Diaz, A.
2009-05-01
We present preliminary results for a galaxy's jet expanding into an intra-cluster medium (ICM). We model the jet-gas interaction and the evolution of an extragalactic collimated jet placed at the center of the computational grid; the jet is modeled as a cylinder ejecting gas in the z-axis direction at fixed velocity. It has a precession motion around the z-axis (period of 10^5 s) and an orbital motion in the XY-plane (period of 500 yr). The jet is embedded in the ICM, which is modeled as a surrounding wind in the XZ-plane. We carried out 3D hydrodynamical simulations using the Yguazú-A code; these simulations do not include radiative losses. In order to compare the numerical results with observations, we generated synthetic X-ray emission images. High-resolution X-ray observations of rich clusters of galaxies show diffuse emission with filamentary structure (sometimes called a cooling flow or X-ray filament), while radio observations show jet-like emission from the central region of the cluster. Combining these observations, we explore the possibility that the jet-ambient gas interaction leads to a filamentary morphology in the X-ray domain. We have found that the simulation including orbital motion offers a possible explanation of the diffuse emission observed in the X-ray domain. The circular orbital motion, in addition to the precession, contributes to dispersing the shocked gas, and the X-ray appearance of the 3D simulation reproduces some important details of the Abell 1795 X-ray emission (Rodriguez-Martinez et al. 2006, A&A, 448, 15): a bright bow shock (spot) at the north, where the jet interacts directly with the ICM and which is observed in the X-ray image. Meanwhile, on the south side there is no bow-shock X-ray emission, but the wake appears as an X-ray source. This wake is part of the diffuse shocked ambient-gas region.
NASA Technical Reports Server (NTRS)
Topol, David A.
1999-01-01
TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: (1) the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; (2) CUP3D, the Fan Noise Coupling Code, which reads these files, solves the coupling problem, and outputs the desired noise predictions; and (3) AWAKEN, the CFD/Measured Wake Postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This volume of the report provides technical background for TFaNS, including the organization of the system and CUP3D technical documentation. This document also provides information for code developers who must write Acoustic Property Files in the CUP3D format. The report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFaNS Vers. 1.4; Volume III: Evaluation of System Codes.
Test Problems for Reactive Flow HE Model in the ALE3D Code and Limited Sensitivity Study
Gerassimenko, M.
2000-03-01
We document quick-running test problems for a reactive flow model of HE initiation incorporated into ALE3D. A quarter-percent change in projectile velocity changes the outcome from detonation to an HE burn that dies down. We study the sensitivity of the calculated HE behavior to several parameters of practical interest when modeling HE initiation with ALE3D.
Chroma sampling and modulation techniques in high dynamic range video coding
NASA Astrophysics Data System (ADS)
Dai, Wei; Krishnan, Madhu; Topiwala, Pankaj
2015-09-01
High Dynamic Range and Wide Color Gamut (HDR/WCG) Video Coding is an area of intense research interest in the engineering community, for potential near-term deployment in the marketplace. HDR greatly enhances the dynamic range of video content (up to 10,000 nits), as well as broadens the chroma representation (BT.2020). The resulting content offers new challenges in its coding and transmission. The Moving Picture Experts Group (MPEG) of the International Organization for Standardization (ISO) is currently exploring coding efficiency and/or functionality enhancements of the recently developed HEVC video standard for HDR and WCG content. FastVDO has developed an advanced approach to coding HDR video, based on splitting the HDR signal into a smoothed luminance (SL) signal and an associated base signal (B). Both signals are then chroma-downsampled to YFbFr 4:2:0 signals, using advanced resampling filters, and coded using the Main10 High Efficiency Video Coding (HEVC) standard, which has been developed jointly by ISO/IEC MPEG and ITU-T WP3/16 (VCEG). Our proposal offers both efficient coding and backwards compatibility with the existing HEVC Main10 Profile. That is, an existing Main10 decoder can produce a viewable standard dynamic range video, suitable for existing screens. Subjective tests show visible improvement over the anchors. Objective tests show a sizable gain of over 25% in PSNR (RGB domain) on average, for a key set of test clips selected by the ISO/MPEG committee.
DCT/DST-based transform coding for intra prediction in image/video coding.
Saxena, Ankur; Fernandes, Felix C
2013-10-01
In this paper, we present a DCT/DST-based transform scheme that applies either the conventional DCT or the type-7 DST for all the video-coding intra-prediction modes: vertical, horizontal, and oblique. Our approach is applicable to any block-based intra-prediction scheme in a codec that applies transforms separably along the horizontal and vertical directions. Previously, Han, Saxena, and Rose showed that for the intra-predicted residuals of the horizontal and vertical modes, the DST is the optimal transform, with performance close to the KLT. Here, we prove that this is indeed the case for the other, oblique modes. The optimal choice of DCT or DST is based on the intra-prediction mode and requires no additional signaling information or rate-distortion search. The DCT/DST scheme presented in this paper was adopted in the HEVC standardization in March 2011. Further simplifications, especially to reduce implementation complexity, which remove the mode dependency between DCT and DST and simply always use the DST for 4 × 4 intra luma blocks, were adopted in the HEVC standard in July 2012. Simulations of the DCT/DST algorithm were conducted in the reference software for the ongoing HEVC standardization. Our results show that the DCT/DST scheme provides significant BD-rate improvement over the conventional DCT-based scheme for intra prediction in video sequences. PMID:23744679
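The mode-dependent transform choice this abstract describes can be sketched with the textbook real-valued DCT-II and DST-VII definitions; the HEVC standard uses integer approximations of these matrices, and the mode labels below are simplified placeholders, not the standard's mode numbering.

```python
import math

N = 4  # 4x4 block, the size for which HEVC ultimately kept the DST

def dst7_matrix(n=N):
    # Orthonormal DST-VII: rows rise from zero at the boundary index.
    return [[2.0 / math.sqrt(2 * n + 1) *
             math.sin(math.pi * (2 * i + 1) * (j + 1) / (2 * n + 1))
             for j in range(n)] for i in range(n)]

def dct2_matrix(n=N):
    # Orthonormal DCT-II.
    return [[(math.sqrt(1.0 / n) if i == 0 else math.sqrt(2.0 / n)) *
             math.cos(math.pi * (2 * j + 1) * i / (2 * n))
             for j in range(n)] for i in range(n)]

def pick_transforms(intra_mode):
    # Returns (vertical transform, horizontal transform): the DST suits the
    # direction along which prediction error grows away from the references.
    if intra_mode == "vertical":    # predicted from the row above
        return dst7_matrix(), dct2_matrix()
    if intra_mode == "horizontal":  # predicted from the column on the left
        return dct2_matrix(), dst7_matrix()
    return dst7_matrix(), dst7_matrix()  # oblique modes

v, h = pick_transforms("vertical")
# Orthonormality check for the DST-VII choice: M * M^T should equal I.
ident = [[sum(v[i][k] * v[j][k] for k in range(N)) for j in range(N)]
         for i in range(N)]
print(all(abs(ident[i][j] - (1.0 if i == j else 0.0)) < 1e-9
          for i in range(N) for j in range(N)))
```

The DST-VII basis functions start near zero at the boundary closest to the reference samples and grow away from it, matching how intra-prediction residual energy behaves; that is the intuition behind the optimality results the abstract cites.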
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.
1990-01-01
The development and applications of multiblock/multizone and adaptive grid methodologies for solving the three-dimensional simplified Navier-Stokes equations are described. Adaptive grid and multiblock/multizone approaches are introduced and applied to external and internal flow problems. These new implementations increase the capabilities and flexibility of the PAB3D code in solving flow problems associated with complex geometry.
Numerical model of water flow and solute accumulation in vertisols using HYDRUS 2D/3D code
NASA Astrophysics Data System (ADS)
Weiss, Tomáš; Dahan, Ofer; Turkeltaub, Tuvia
2015-04-01
boundary to the wall of the crack (so that the solute can accumulate due to evaporation on the crack block wall, and infiltrating fresh water can push the solute further down); to do so, the HYDRUS 2D/3D code had to be modified by its developers. Unconventionally, the main fitting parameters were the parameters a and n of the soil water retention curve and the saturated hydraulic conductivity. The amount of infiltrated water (within a reasonable range), the infiltration function in the crack, and the actual evaporation from the crack were also used as secondary fitting parameters. The model supports the previous finding that a significant amount (~90%) of the water from rain events must infiltrate through the crack. It was also noted that infiltration from the crack has to increase with depth and that the highest infiltration rate should occur somewhere between 1 and 3 m depth. This paper suggests a new way to model vertisols in semi-arid regions. It also supports previous findings about vertisols, especially the utmost importance of soil cracks as preferential pathways for water and contaminants and as deep evaporators.
Semi-fixed-length motion vector coding for H.263-based low bit rate video compression.
Côté, G; Gallant, M; Kossentini, F
1999-01-01
We present a semi-fixed-length motion vector coding method for H.263-based low bit rate video compression. The method exploits structural constraints within the motion field. The motion vectors are encoded using semi-fixed-length codes, yielding essentially the same levels of rate-distortion performance and subjective quality achieved by H.263's Huffman-based variable length codes in a noiseless environment. However, such codes provide substantially higher error resilience in a noisy environment. PMID:18267417
Tradeoff between picture resolution and quantization precision in video coding for embedded systems
NASA Astrophysics Data System (ADS)
Yuan, Yu; Feng, David; Zhong, Yuzhuo
2004-01-01
In embedded multimedia applications, improving video quality under bandwidth and storage constraints is an important problem. In this paper, we discuss the relationship among picture resolution, quantization precision, and subjective quality in video coding for embedded systems. We then propose a principle for trading off picture resolution against quantization precision. Video coding based on this tradeoff principle can achieve higher subjective quality at low bitrates and significantly reduce the burden on decoders. Experimental results on both MPEG-2 and H.264 codecs show that the tradeoff principle is valuable and feasible for embedded systems.
NASA Astrophysics Data System (ADS)
Tyldesley, Katherine S.; Abousleman, Glen P.; Karam, Lina J.
2003-08-01
This paper presents an error-resilient wavelet-based multiple description video coding scheme for the transmission of video over wireless channels. The proposed video coding scheme has been implemented and successfully tested over the wireless Iridium satellite communication network. As a test bed for the developed codec, we also present an inverse multiplexing unit that simultaneously combines several Iridium channels to form an effective higher-rate channel, whose total bandwidth is directly proportional to the number of channels combined. The developed unit can be integrated into a variety of systems such as ISR sensors, aircraft, vehicles, ships, and end-user terminals (EUTs), or can operate as a standalone device. Combining the multi-channel unit with our proposed multi-channel video codec facilitates global and on-the-move video communications without reliance on any terrestrial or airborne infrastructure whatsoever.
Low-Complexity Saliency Detection Algorithm for Fast Perceptual Video Coding
Liu, Pengyu; Jia, Kebin
2013-01-01
A low-complexity saliency detection algorithm for perceptual video coding is proposed in which low-level encoding information is adopted as the basis for visual perception analysis. First, the algorithm uses motion vectors (MVs) to extract the temporal saliency region through fast MV noise filtering and a translational-MV checking procedure. Second, the spatial saliency region is detected based on the distributions of optimal prediction modes in I-frames and P-frames. The algorithm then combines the spatiotemporal saliency detection results to define the video region of interest (VROI). Simulation results show that, compared with other existing algorithms, the proposed algorithm avoids a large amount of computation in the visual-perception analysis, performs well in saliency detection for video, and achieves fast saliency detection. It can be used as part of a standard video codec at medium-to-low bit rates or combined with other algorithms in fast video coding. PMID:24489495
Instantly decodable network coding for real-time scalable video broadcast over wireless networks
NASA Astrophysics Data System (ADS)
Karim, Mohammad S.; Sadeghi, Parastoo; Sorour, Sameh; Aboutorab, Neda
2016-01-01
In this paper, we study real-time scalable video broadcast over wireless networks using instantly decodable network coding (IDNC). Such real-time scalable videos have a hard deadline and impose a decoding order on the video layers. We first derive an upper bound on the probability that the individual completion times of all receivers meet the deadline. Using this probability, we design two prioritized IDNC algorithms: the expanding-window IDNC (EW-IDNC) algorithm and the non-overlapping-window IDNC (NOW-IDNC) algorithm. These algorithms provide a high level of protection to the most important video layer, the base layer, before considering the additional video layers, the enhancement layers, in coding decisions. Moreover, in these algorithms, we select an appropriate packet combination over a given number of video layers so that those video layers are decoded by the maximum number of receivers before the deadline. We formulate this packet selection problem as a two-stage maximal clique selection problem over an IDNC graph. Simulation results over a real scalable video sequence show that our proposed EW-IDNC and NOW-IDNC algorithms improve the received video quality compared to existing IDNC algorithms.
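The "instantly decodable" property this abstract relies on can be seen in a toy example: the sender XORs a set of packets chosen so that each targeted receiver is missing exactly one of them, so every receiver recovers its missing packet with a single XOR against data it already holds. Packet contents and receiver states below are invented for illustration.

```python
from functools import reduce

def xor(a, b):
    # Bytewise XOR of two equal-length packets.
    return bytes(x ^ y for x, y in zip(a, b))

packets = {1: b"AA", 2: b"BB", 3: b"CC"}

# Receiver -> set of packets it has already received.
has = {"r1": {2, 3}, "r2": {1, 3}, "r3": {1, 2}}

# The combination {1, 2, 3} is instantly decodable for all three receivers:
# each one lacks exactly one of the XORed packets.
coded = reduce(xor, packets.values())

def decode(receiver):
    missing = set(packets) - has[receiver]
    assert len(missing) == 1  # instant-decodability condition
    known = reduce(xor, (packets[i] for i in has[receiver]))
    return xor(coded, known)

print(decode("r1"))  # b'AA'
print(decode("r2"))  # b'BB'
```

The algorithmic work in IDNC schemes like EW-IDNC/NOW-IDNC lies in choosing which packet combination to send, which the paper formulates as a maximal clique selection over an IDNC graph.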
Cataldi, Pasquale; Grangetto, Marco; Tillo, Tammam; Magli, Enrico; Olmo, Gabriella
2010-06-01
Digital fountain codes have emerged as a low-complexity alternative to Reed-Solomon codes for erasure correction. The applications of these codes are relevant especially in the field of wireless video, where low encoding and decoding complexity is crucial. In this paper, we introduce a new class of digital fountain codes based on a sliding-window approach applied to Raptor codes. These codes have several properties useful for video applications, and provide better performance than classical digital fountains. Then, we propose an application of sliding-window Raptor codes to wireless video broadcasting using scalable video coding. The rates of the base and enhancement layers, as well as the number of coded packets generated for each layer, are optimized so as to yield the best possible expected quality at the receiver side, and providing unequal loss protection to the different layers according to their importance. The proposed system has been validated in a UMTS broadcast scenario, showing that it improves the end-to-end quality, and is robust towards fluctuations in the packet loss rate. PMID:20215084
A low complexity prioritized bit-plane coding for SNR scalability in MPEG-21 scalable video coding
NASA Astrophysics Data System (ADS)
Peng, Wen-Hsiao; Chiang, Tihao; Hang, Hsueh-Ming
2005-07-01
In this paper, we propose a low-complexity prioritized bit-plane coding scheme to improve the rate-distortion performance of cyclical block coding in MPEG-21 scalable video coding. Specifically, we use a block priority assignment algorithm to transmit first the symbols and the blocks with potentially better rate-distortion performance. Different blocks are allowed to be coded unequally within a coding cycle. To avoid transmitting priority overhead, the encoder and the decoder refer to the same context to assign priority. Furthermore, to reduce complexity, the priority assignment is done by a look-up table, and the coding of each block is controlled by a simple threshold-comparison mechanism. Experimental results show that our prioritized bit-plane coding scheme can offer up to 0.5 dB PSNR improvement over the cyclical block coding described in the joint scalable verification model (JSVM).
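The SNR-scalability principle underlying bit-plane coding, independent of the paper's priority mechanism, is that planes are sent most-significant first, so any prefix of the stream decodes to a coarser version of the same coefficients. A minimal sketch with invented coefficient values:

```python
def encode_bitplanes(coeffs, planes=8):
    # One list of bits per plane, most-significant plane first.
    return [[(c >> p) & 1 for c in coeffs]
            for p in range(planes - 1, -1, -1)]

def decode_bitplanes(stream, planes_received):
    # Reconstruct from however many planes arrived before truncation.
    n = len(stream[0])
    planes = len(stream)
    out = [0] * n
    for idx in range(planes_received):
        p = planes - 1 - idx
        for i, bit in enumerate(stream[idx]):
            out[i] |= bit << p
    return out

coeffs = [200, 13, 97, 5]
stream = encode_bitplanes(coeffs)
print(decode_bitplanes(stream, 8))  # [200, 13, 97, 5] -- exact
print(decode_bitplanes(stream, 4))  # [192, 0, 96, 0]  -- coarse, top 4 planes
```

Prioritized schemes like the one in the abstract reorder symbols *within* this framework so that the bits with the best rate-distortion payoff arrive earliest.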
Wyner-Ziv video coding based on a new hierarchical block matching algorithm
NASA Astrophysics Data System (ADS)
Liu, Rong Ke; Zhao, Hong Bo; Yue, Zhi
2008-02-01
Distributed video coding (DVC) is a new video coding paradigm that shifts complexity from the encoder to the decoder. One particular case of DVC, the Wyner-Ziv coding scheme, encodes each video frame separately and decodes the video sequence jointly with side information. This paper presents a new Wyner-Ziv video coding scheme based on a hierarchical block matching algorithm (HBMA). In the proposed scheme, the side information is greatly refined to assist the reconstruction of the Wyner-Ziv frames. Bidirectional motion estimation and forward motion estimation are combined to generate the interpolated frame from temporally adjacent key frames, yielding high-fidelity side information. During the bidirectional motion estimation, the size of the block and the search area vary at different levels of the hierarchy. In addition, the motion vectors are inherited from big blocks to small blocks by choosing the smallest mean-absolute-difference value among neighboring blocks. Preliminary experimental results show that the proposed scheme achieves better rate-distortion performance, by 0.5-1 dB, compared to existing Wyner-Ziv video coding, with slightly increased decoding complexity.
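A hierarchical block matching search of the kind this abstract builds on can be sketched in one dimension: a full search at a downsampled level yields a coarse displacement, which the full-resolution level inherits (scaled) and refines over a small window. Signal values, block size, and search radii below are arbitrary illustration choices, not the paper's settings.

```python
def sad(a, b):
    # Sum of absolute differences between two equal-length blocks.
    return sum(abs(x - y) for x, y in zip(a, b))

def full_search(ref, cur, pos, size, center, radius):
    # Best displacement d (within center +/- radius) of cur's block in ref.
    block = cur[pos:pos + size]
    best, best_cost = 0, None
    for d in range(center - radius, center + radius + 1):
        s = pos + d
        if 0 <= s and s + size <= len(ref):
            cost = sad(ref[s:s + size], block)
            if best_cost is None or cost < best_cost:
                best, best_cost = d, cost
    return best

def downsample(x):
    # Half-resolution signal by pairwise averaging.
    return [(x[i] + x[i + 1]) // 2 for i in range(0, len(x) - 1, 2)]

def hbma(ref, cur, pos, size):
    # Coarse level: wide full search on half-resolution signals.
    d_coarse = full_search(downsample(ref), downsample(cur),
                           pos // 2, size // 2, 0, 8)
    # Fine level: inherit the scaled vector, refine in a small window.
    return full_search(ref, cur, pos, size, 2 * d_coarse, 2)

ref = [0] * 40 + [10, 40, 90, 40, 10] + [0] * 40
cur = [0] * 46 + [10, 40, 90, 40, 10] + [0] * 34  # feature shifted by +6
print(hbma(ref, cur, 44, 8))  # -6: the block's match lies 6 samples back
```

The inheritance step is what keeps the cost low: the expensive wide search happens only on the small downsampled signal, and each finer level searches only a narrow window around the inherited vector.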
Hallquist, J.O.
1981-01-01
A user's manual is provided for NIKE3D, a fully implicit three-dimensional finite element code for analyzing the large deformation static and dynamic response of inelastic solids. A contact-impact algorithm permits gaps and sliding along material interfaces. By a specialization of this algorithm, such interfaces can be rigidly tied to admit variable zoning without the need of transition regions. Spatial discretization is achieved by the use of 8-node constant pressure solid elements. Bandwidth minimization is optional. Post-processors for NIKE3D include GRAPE for plotting deformed shapes and stress contours and DYNAP for plotting time histories.
NASA Astrophysics Data System (ADS)
Ongaro, T. E.; Clarke, A.; Neri, A.; Voight, B.; Widiwijayanti, C.
2005-12-01
For the first time, the dynamics of directed blasts from explosive lava-dome decompression have been investigated by means of transient, multiphase flow simulations in 2D and 3D. Multiphase flow models developed for the analysis of pyroclastic dispersal from explosive eruptions have so far been limited to 2D axisymmetric or Cartesian formulations, which cannot properly account for important 3D features of the volcanic system such as complex morphology and fluid turbulence. Here we use a new parallel multiphase flow code, named PDAC (Pyroclastic Dispersal Analysis Code) (Esposti Ongaro et al., 2005), able to simulate the transient 3D thermofluid-dynamics of pyroclastic dispersal produced by collapsing columns and volcanic blasts. The code solves the equations of the multiparticle flow model of Neri et al. (2003) on 3D domains extending up to several kilometres and includes a new description of the boundary conditions over topography, which is automatically acquired from a DEM. The initial conditions are represented by a compact volume of gas and pyroclasts, with clasts of different sizes and densities, at high temperature and pressure. Different dome porosities and pressurization models were tested in 2D to assess the sensitivity of the results to the distribution of initial gas pressure and to the total mass and energy stored in the dome, prior to 3D modeling. The simulations used topographies appropriate for the 1997 Boxing Day directed blast on Montserrat, which eradicated the village of St. Patrick's. Some simulations tested the runout of pyroclastic density currents over the ocean surface, corresponding to observations of over-water surges to distances of several km at both locations. The PDAC code was used to perform 3D simulations of the explosive event on the actual volcano topography. The results highlight the strong topographic control on the propagation of the dense pyroclastic flows, the triggering of thermal instabilities, and the elutriation
Real-time transmission of digital video using variable-length coding
NASA Technical Reports Server (NTRS)
Bizon, Thomas P.; Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.
1993-01-01
Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
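The mechanism this abstract describes (short codewords for frequent values, applied to a differentially coded signal) can be illustrated with a minimal Huffman codebook builder. This is a generic textbook sketch, not the hardware codec the paper develops; the sample signal and symbols are invented for illustration.

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a Huffman codebook from a {symbol: frequency} map.
    Frequent symbols receive short codewords, rare symbols longer ones."""
    # Heap entries: (frequency, unique tiebreak, {symbol: partial codeword}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f0, _, c0 = heapq.heappop(heap)   # two least-frequent subtrees
        f1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (f0 + f1, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Differential coding of a predictable signal concentrates probability
# near zero, which is exactly where Huffman coding pays off.
signal = [10, 11, 11, 12, 12, 12, 12, 13, 20, 12]
diffs = [b - a for a, b in zip(signal, signal[1:])]
book = huffman_code(Counter(diffs))
```

The most frequent difference (0) ends up with a one-bit codeword, while the rare outliers (7, -8) get three bits, and the resulting code is prefix-free so the decoder can resynchronize symbol boundaries.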
VTLOGANL: A Computer Program for Coding and Analyzing Data Gathered on Video Tape.
ERIC Educational Resources Information Center
Hecht, Jeffrey B.; And Others
To code and analyze research data on videotape, a methodology is needed that allows the researcher to code directly and then analyze the observed degree of intensity of the observed events. The establishment of such a methodology is the next logical step in the development of the use of video recorded data in research. The Technological…
Modeling of tungsten transport in the linear plasma device PSI-2 with the 3D Monte-Carlo code ERO
NASA Astrophysics Data System (ADS)
Marenkov, E.; Eksaeva, A.; Borodin, D.; Kirschner, A.; Laengner, M.; Kurnaev, V.; Kreter, A.; Coenen, J. W.; Rasinski, M.
2015-08-01
The ERO code was modified for modeling plasma-surface interactions and impurity transport in the PSI-2 installation. Results of experiments on tungsten target irradiation with argon plasma were taken as a benchmark for the new version of the code. Spectroscopy data modeled with the code are in good agreement with the experimental data. The main factors contributing to the observed discrepancies are discussed.
NASA Astrophysics Data System (ADS)
Zhang, M. Q.
1989-09-01
A new Monte Carlo algorithm for 3D Kawasaki spin-exchange simulations and its implementation on a CDC CYBER 205 is presented. This approach is applicable to lattices with sizes between 4×4×4 and 256×L2×L3 (with (L2+2)(L3+4)/4 ⩽ 65535) and periodic boundary conditions. It is adjustable to various kinetic models in which the total magnetization is conserved. A maximum speed of 10 million steps per second can be reached for the 3-D Ising model with the Metropolis rate.
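The conserved-magnetization dynamics this abstract refers to can be sketched with a minimal, unvectorized Kawasaki step: instead of flipping single spins, a Metropolis-accepted exchange of two antiparallel neighboring spins keeps the total magnetization fixed. The lattice size and temperature below are illustrative placeholders, not the paper's vectorized CYBER 205 algorithm.

```python
import math
import random

def kawasaki_sweep(lattice, L, beta, rng):
    """One Monte Carlo sweep of Kawasaki (spin-exchange) dynamics on an
    L x L x L periodic Ising lattice stored as {(x, y, z): +/-1}.
    Exchanging two antiparallel neighbor spins conserves the total
    magnetization, unlike single-spin-flip Metropolis dynamics."""
    def nbrs(p):
        x, y, z = p
        return [((x + 1) % L, y, z), ((x - 1) % L, y, z),
                (x, (y + 1) % L, z), (x, (y - 1) % L, z),
                (x, y, (z + 1) % L), (x, y, (z - 1) % L)]

    for _ in range(L ** 3):
        a = (rng.randrange(L), rng.randrange(L), rng.randrange(L))
        b = rng.choice(nbrs(a))
        if lattice[a] == lattice[b]:
            continue  # swapping equal spins changes nothing
        # Energy change of the swap with J = 1; the a-b bond itself is
        # unchanged by the swap, so it is excluded from both sums.
        dE = 0
        for p, q in ((a, b), (b, a)):
            local = sum(lattice[n] for n in nbrs(p) if n != q)
            dE += 2 * lattice[p] * local
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            lattice[a], lattice[b] = lattice[b], lattice[a]

rng = random.Random(1)
L = 4  # tiny lattice for illustration (the paper's range starts at 4x4x4)
lat = {(x, y, z): rng.choice([-1, 1])
      for x in range(L) for y in range(L) for z in range(L)}
m0 = sum(lat.values())
kawasaki_sweep(lat, L, beta=0.3, rng=rng)
m1 = sum(lat.values())
```

After any number of sweeps the magnetization is exactly what it was initially, which is the defining property of this class of kinetic models.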
NASA Astrophysics Data System (ADS)
Honda, M.; Satake, S.; Suzuki, Y.; Yoshida, M.; Hayashi, N.; Kamiya, K.; Matsuyama, A.; Shinohara, K.; Matsunaga, G.; Nakata, M.; Ide, S.; Urano, H.
2015-07-01
The integrated simulation framework for toroidal momentum transport is developed, which self-consistently calculates the neoclassical toroidal viscosity (NTV), the radial electric field E_r and the resultant toroidal rotation V_φ together with the scrape-off-layer (SOL) physics-based boundary model. The coupling of three codes, the 1.5D transport code TOPICS, the three-dimensional (3D) equilibrium code VMEC and the 3D δf drift-kinetic equation solver FORTEC-3D, makes it possible to calculate the NTV due to the non-axisymmetric perturbed magnetic field caused by toroidal field coils. Analyses reveal that the NTV significantly influences V_φ in JT-60U and that E_r holds the key to determining the NTV profile. The sensitivity of the V_φ profile to the boundary rotation necessitates boundary-condition modelling for toroidal momentum. Owing to the high-resolution measurement system in JT-60U, the E_r gradient is found to be virtually zero at the separatrix regardless of toroidal rotation velocities. Focusing on E_r, the boundary model of toroidal momentum is developed in conjunction with the SOL/divertor plasma code D5PM. This modelling realizes self-consistent predictive simulations for operation scenario development in ITER.
Spatial resampling of IDR frames for low bitrate video coding with HEVC
NASA Astrophysics Data System (ADS)
Hosking, Brett; Agrafiotis, Dimitris; Bull, David; Easton, Nick
2015-03-01
As the demand for higher quality and higher resolution video increases, many applications fail to meet this demand due to low bandwidth restrictions. One factor contributing to this problem is the high bitrate requirement of the intra-coded Instantaneous Decoding Refresh (IDR) frames featuring in all video coding standards. Frequent coding of IDR frames is essential for error resilience in order to prevent the occurrence of error propagation. However, as each one consumes a huge portion of the available bitrate, the quality of future coded frames is hindered by high levels of compression. This work presents a new technique, known as Spatial Resampling of IDR Frames (SRIF), and shows how it can increase the rate distortion performance by providing a higher and more consistent level of video quality at low bitrates.
Cullen, D E
1998-11-22
TART98 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART98 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART98 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART98 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART98 and its data files.
Bischof, C.H.; Mauer, A.; Jones, W.T.
1995-12-31
Automatic differentiation (AD) is a methodology for developing reliable sensitivity-enhanced versions of arbitrary computer programs with little human effort. It can vastly accelerate the use of advanced simulation codes in multidisciplinary design optimization, since the time for generating and verifying derivative codes is greatly reduced. In this paper, we report on the application of the recently developed ADIC automatic differentiation tool for ANSI C programs to the CSCMDO multiblock three-dimensional volume grid generator. The ADIC-generated code can easily be interfaced with Fortran derivative codes generated with ADIFOR, the AD tool for FORTRAN 77 programs, thus providing efficient sensitivity-enhancement techniques for multilanguage, multidiscipline problems.
NASA Astrophysics Data System (ADS)
Atzeni, Stefano; Marocchino, Alberto; Schiavi, Angelo
2016-03-01
Accurate descriptions of laser power coupling to the plasma and electron energy transport are crucial for designing shock-ignition targets and assessing their robustness (in particular with regard to laser and positioning errors). To this purpose, the 2D DUED laser fusion code has been improved with the inclusion of a 3D laser ray-tracing scheme and a model for non-local electron transport. 2D simulations with the upgraded code are presented; the dependence of the fusion yield vs target displacement is studied. Two different irradiation configurations are considered.
S. Ethier; Z. Lin
2003-09-15
Several years of optimization for super-scalar architectures have made it more difficult to port the current version of the 3D particle-in-cell code GTC to the CRAY/NEC SX-6 vector architecture. This paper explains the initial work that has been done to port this code to the SX-6 computer and to optimize the most time-consuming parts. Early performance results are shown and compared to the same tests run on the IBM SP Power 3 and Power 4 machines.
NASA Astrophysics Data System (ADS)
Yoshida, Hiroyuki; Misawa, Takeharu; Takase, Kazuyuki
The two-fluid model can simulate two-phase flow at a lower computational cost than detailed two-phase flow simulation methods such as interface tracking or particle interaction methods. Therefore, the two-fluid model is useful for thermal-hydraulic analysis in large-scale domains such as a rod bundle. The Japan Atomic Energy Agency (JAEA) develops the three-dimensional two-fluid model analysis code ACE-3D, which adopts a boundary-fitted coordinate system in order to simulate complex-shaped flow channels. In this paper, a boiling two-phase flow analysis in a tight-lattice rod bundle was performed with the ACE-3D code. Parallel computation using 126 CPUs was applied to this analysis. In the results, the void fraction in the outermost region of the rod bundle is lower than that in the center region. The tendency of the void fraction distribution agreed qualitatively with measurement results obtained by neutron radiography. To evaluate the effects of the two-phase flow models used in the ACE-3D code, a numerical simulation of boiling two-phase flow in a tight-lattice rod bundle with no lift force model was also performed. From the comparison of calculated results, it was concluded that the effect of the lift force model is not large for the overall void fraction distribution of the tight-lattice rod bundle. However, the lift force model is important for the local void fraction distribution of fuel bundles.
McBride, Cory L. (Elemental Technologies, American Fort, UT); Yarberry, Victor R.; Schmidt, Rodney Cannon; Meyers, Ray J.
2006-11-01
This report describes the SummitView 1.0 computer code developed at Sandia National Laboratories. SummitView is designed to generate a 3D solid model, amenable to visualization and meshing, that represents the end state of a microsystem fabrication process such as the SUMMiT (Sandia Ultra-Planar Multilevel MEMS Technology) V process. Functionally, SummitView performs essentially the same computational task as an earlier code called the 3D Geometry modeler [1]. However, because SummitView is based on 2D instead of 3D data structures and operations, it has significant speed and robustness advantages. As input it requires a definition of both the process itself and the collection of individual 2D masks created by the designer and associated with each of the process steps. The definition of the process is contained in a special process definition file [2] and the 2D masks are contained in MEM format files [3]. The code is written in C++ and consists of a set of classes and routines. The classes represent the geometric data and the SUMMiT V process steps. Classes are provided for the following process steps: Planar Deposition, Planar Etch, Conformal Deposition, Dry Etch, Wet Etch and Release Etch. SummitView is built upon the 2D Boolean library GBL-2D [4], and thus contains all of that library's functionality.
NASA Astrophysics Data System (ADS)
Reiman, A.; Ferraro, N. M.; Turnbull, A.; Park, J. K.; Cerfon, A.; Evans, T. E.; Lanctot, M. J.; Lazarus, E. A.; Liu, Y.; McFadden, G.; Monticello, D.; Suzuki, Y.
2015-06-01
In comparing equilibrium solutions for a DIII-D shot that is amenable to analysis by both stellarator and tokamak three-dimensional (3D) equilibrium codes, a significant disagreement has been seen between solutions of the VMEC stellarator equilibrium code and solutions of tokamak perturbative 3D equilibrium codes. The source of that disagreement has been investigated, and that investigation has led to new insights into the domain of validity of the different equilibrium calculations, and to a finding that the manner in which localized screening currents at low order rational surfaces are handled can affect global properties of the equilibrium solution. The perturbative treatment has been found to break down at surprisingly small perturbation amplitudes due to overlap of the calculated perturbed flux surfaces, and that treatment is not valid in the pedestal region of the DIII-D shot studied. The perturbative treatment is valid, however, further into the interior of the plasma, and flux surface overlap does not account for the disagreement investigated here. Calculated equilibrium solutions for simple model cases and comparison of the 3D equilibrium solutions with those of other codes indicate that the disagreement arises from a difference in handling of localized currents at low order rational surfaces, with such currents being absent in VMEC and present in the perturbative codes. The significant differences in the global equilibrium solutions associated with the presence or absence of very localized screening currents at rational surfaces suggest that it may be possible to extract information about localized currents from appropriate measurements of global equilibrium plasma properties. That would require improved diagnostic capability on the high field side of the tokamak plasma, a region difficult to access with diagnostics.
NASA Astrophysics Data System (ADS)
Li, Li; Hu, Xiao; Zeng, Rui
2007-11-01
The development of practical distributed video coding schemes builds on the information-theoretic bounds established in the 1970s by Slepian and Wolf for distributed lossless coding, and by Wyner and Ziv for lossy coding with decoder side information. In distributed video compression applications, it is hard to accurately describe the non-stationary behavior of the virtual correlation channel between X and the side information Y, although it plays a very important role in overall system performance. In this paper, we implement a practical Slepian-Wolf asymmetric distributed video compression system using irregular LDPC codes. Moreover, by exploiting the dependencies of previously decoded bit planes from the video frame X and the side information Y, we present improvement schemes that divide the bit planes into regions of different reliability. Our simulation results show that the improvement schemes exploiting the dependencies between previously decoded bit planes achieve better overall encoding rate performance as the BER approaches zero. We also show that, compared with the BSC model, the BC channel model is more suitable for the distributed video compression scenario because of the non-stationary properties of the virtual correlation channel, and that adaptively estimating channel model parameters from previously decoded adjacent bit planes provides more accurate initial belief messages to the LDPC decoder.
Solwnd: A 3D Compressible MHD Code for Solar Wind Studies. Version 1.0: Cartesian Coordinates
NASA Technical Reports Server (NTRS)
Deane, Anil E.
1996-01-01
Solwnd 1.0 is a three-dimensional compressible MHD code written in Fortran for studying the solar wind. Time-dependent boundary conditions are available. The computational algorithm is based on Flux Corrected Transport and the code is based on the existing code of Zalesak and Spicer. The flow considered is that of shear flow with incoming flow that perturbs this base flow. Several test cases corresponding to pressure balanced magnetic structures with velocity shear flow and various inflows including Alfven waves are presented. Version 1.0 of solwnd considers a rectangular Cartesian geometry. Future versions of solwnd will consider a spherical geometry. Some discussion of this issue is presented.
Error-resilient video coding performance analysis of motion JPEG2000 and MPEG-4
NASA Astrophysics Data System (ADS)
Dufaux, Frederic; Ebrahimi, Touradj
2004-01-01
The new Motion JPEG 2000 standard provides some compelling features. It is based on intra-frame wavelet coding, which makes it very well suited for wireless applications. Indeed, its state-of-the-art wavelet coding scheme achieves very high coding efficiency. In addition, Motion JPEG 2000 is very resilient to transmission errors, as frames are coded independently (intra coding). Furthermore, it requires low complexity and introduces minimal coding delay. Finally, it supports very efficient scalability. In this paper, we analyze the performance of Motion JPEG 2000 under error-prone transmission. We compare it to the well-known MPEG-4 video coding scheme in terms of coding efficiency, error resilience and complexity. We present experimental results which show that Motion JPEG 2000 outperforms MPEG-4 in the presence of transmission errors.
Analysis of the beam halo in negative ion sources by using 3D3V PIC code
NASA Astrophysics Data System (ADS)
Miyamoto, K.; Nishioka, S.; Goto, I.; Hatayama, A.; Hanada, M.; Kojima, A.; Hiratsuka, J.
2016-02-01
The physical mechanism of the formation of the negative ion beam halo and the heat loads of the multi-stage acceleration grids are investigated with the 3D PIC (particle in cell) simulation. The following physical mechanism of the beam halo formation is verified: The beam core and the halo consist of the negative ions extracted from the center and the periphery of the meniscus, respectively. This difference of negative ion extraction location results in a geometrical aberration. Furthermore, it is shown that the heat loads on the first acceleration grid and the second acceleration grid are quantitatively improved compared with those for the 2D PIC simulation result.
NASA Astrophysics Data System (ADS)
Liang, Liang; Martin, Caitlin; Wang, Qian; Sun, Wei; Duncan, James
2016-03-01
Aortic valve (AV) disease is a significant cause of morbidity and mortality. The preferred treatment modality for severe AV disease is surgical resection and replacement of the native valve with either a mechanical or tissue prosthetic. In order to develop effective and long-lasting treatment methods, computational analyses, e.g., structural finite element (FE) and computational fluid dynamic simulations, are very effective for studying valve biomechanics. These computational analyses are based on mesh models of the aortic valve, which are usually constructed from 3D CT images through many hours of manual annotation, and therefore an automatic valve shape reconstruction method is desired. In this paper, we present a method for estimating the aortic valve shape from 3D cardiac CT images, which is represented by triangle meshes. We propose a pipeline for aortic valve shape estimation which includes novel algorithms for building local shape dictionaries and for building landmark detectors and curve detectors using local shape dictionaries. The method is evaluated on a real patient image dataset using a leave-one-out approach and achieves an average accuracy of 0.69 mm. The work will facilitate automatic patient-specific computational modeling of the aortic valve.
Puso, M; Maker, B N; Ferencz, R M; Hallquist, J O
2000-03-24
This report provides the NIKE3D user's manual update summary for changes made from version 3.0.0 (April 24, 1995) to version 3.3.6 (March 24, 2000). The updates are excerpted directly from the code printed output file (hence the Courier font and formatting), are presented in chronological order and delineated by NIKE3D version number. NIKE3D is a fully implicit three-dimensional finite element code for analyzing the finite strain static and dynamic response of inelastic solids, shells, and beams. Spatial discretization is achieved by the use of 8-node solid elements, 2-node truss and beam elements, and 4-node membrane and shell elements. Thirty constitutive models are available for representing a wide range of elastic, plastic, viscous, and thermally dependent material behavior. Contact-impact algorithms permit gaps, frictional sliding, and mesh discontinuities along material interfaces. Several nonlinear solution strategies are available, including Full-, Modified-, and Quasi-Newton methods. The resulting system of simultaneous linear equations is either solved iteratively by an element-by-element method, or directly by a direct factorization method.
34/45-Mbps 3D HDTV digital coding scheme using modified motion compensation with disparity vectors
NASA Astrophysics Data System (ADS)
Naito, Sei; Matsumoto, Shuichi
1998-12-01
This paper describes a digital compression coding scheme for transmitting three-dimensional stereo HDTV signals with full resolution at bit-rates around 30 to 40 Mbps, adapted for PDH networks of the CCITT 3rd digital hierarchy (34 Mbps and 45 Mbps), SDH networks of 52 Mbps and ATM networks. In order to achieve satisfactory quality for stereo HDTV pictures, three advanced key technologies are introduced into the MPEG-2 Multi-View Profile: a modified motion compensation using disparity vectors estimated between the left and right pictures, an adaptive rate control using a common buffer memory for left and right picture encoding, and a discriminatory bit allocation which improves left-picture quality without any degradation of the right pictures. From the results of a coding experiment conducted to evaluate the picture quality achieved by this scheme, it is confirmed that our coding scheme gives satisfactory picture quality even at 34 Mbps including audio and FEC data.
Crandall, K.R.
1987-08-01
TRACE 3-D is an interactive beam-dynamics program that calculates the envelopes of a bunched beam, including linear space-charge forces, through a user-defined transport system. TRACE 3-D provides an immediate graphics display of the envelopes and the phase-space ellipses and allows nine types of beam-matching options. This report describes the beam-dynamics calculations and gives detailed instruction for using the code. Several examples are described in detail.
Shapiro, A.B.
1983-08-01
The computer code FACET calculates the radiation geometric view factor (alternatively called shape factor, angle factor, or configuration factor) between surfaces for axisymmetric, two-dimensional planar and three-dimensional geometries with interposed third surface obstructions. FACET was developed to calculate view factors for input to finite-element heat-transfer analysis codes. The first section of this report is a brief review of previous radiation-view-factor computer codes. The second section presents the defining integral equation for the geometric view factor between two surfaces and the assumptions made in its derivation. Also in this section are the numerical algorithms used to integrate this equation for the various geometries. The third section presents the algorithms used to detect self-shadowing and third-surface shadowing between the two surfaces for which a view factor is being calculated. The fourth section provides a user's input guide followed by several example problems.
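As an illustration of the quantity FACET computes, the view factor from a differential surface element to a parallel, coaxial disk of radius r at distance h has the closed form F = r²/(r² + h²), which a cosine-weighted Monte Carlo ray sample reproduces. This is a generic textbook check, not FACET's numerical integration algorithm, and it ignores the shadowing logic the report describes.

```python
import math
import random

def viewfactor_mc(r, h, n, rng):
    """Monte Carlo view factor from a differential element at the origin
    (facing +z) to a parallel coaxial disk of radius r at height h.
    Directions are sampled cosine-weighted, so the view factor is simply
    the fraction of rays that hit the disk."""
    hits = 0
    for _ in range(n):
        u, v = rng.random(), rng.random()
        sin_t = math.sqrt(u)           # cosine-weighted: P(theta<t) = sin^2 t
        cos_t = math.sqrt(1.0 - u)
        # The ray reaches the plane z = h at radius rho = h * tan(theta);
        # the azimuth angle does not matter for a coaxial disk.
        rho = h * sin_t / cos_t
        if rho <= r:
            hits += 1
    return hits / n

rng = random.Random(0)
f_mc = viewfactor_mc(r=1.0, h=1.0, n=200_000, rng=rng)
f_exact = 1.0 ** 2 / (1.0 ** 2 + 1.0 ** 2)   # analytic value, here 0.5
```

With r = h the element sees the disk under exactly half of its cosine-weighted hemisphere, so the estimate converges to 0.5.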
Just noticeable disparity error-based depth coding for three-dimensional video
NASA Astrophysics Data System (ADS)
Luo, Lei; Tian, Xiang; Chen, Yaowu
2014-07-01
A just noticeable disparity error (JNDE) measurement to describe the maximum tolerated error of depth maps is proposed. Any error of depth value inside the JNDE range would not cause a noticeable distortion observed by human eyes. The JNDE values are used to preprocess the original depth map in the prediction process during the depth coding and to adjust the prediction residues for further improvement of the coding quality. The proposed scheme can be incorporated in any standardized video coding algorithm based on prediction and transform. The experimental results show that the proposed method can achieve a 34% bit rate saving for depth video coding. Moreover, the perceptual quality of the synthesized view is also improved by the proposed method.
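The preprocessing idea in this abstract can be sketched as follows: if a predicted depth value is already within the JNDE range of the original, the residue is zeroed, since the error it leaves is imperceptible. The fixed per-pixel thresholds below are placeholders; the paper derives JNDE values from the viewing geometry.

```python
def jnde_residues(original, predicted, jnd):
    """Zero out prediction residues whose magnitude is within the
    per-pixel just-noticeable disparity error (JND) threshold, so
    imperceptible depth errors cost no bits. Illustrative sketch only."""
    residues = []
    for orig, pred, t in zip(original, predicted, jnd):
        r = orig - pred
        residues.append(0 if abs(r) <= t else r)
    return residues

depth      = [120, 121, 130, 90, 60]   # original depth samples (toy values)
prediction = [119, 124, 130, 80, 61]   # predictor output
jnd_map    = [2, 2, 2, 2, 2]           # placeholder JNDE thresholds
res = jnde_residues(depth, prediction, jnd_map)
```

Only the two pixels whose prediction error exceeds the threshold keep nonzero residues; the rest are coded as zero, which is where the reported bit-rate saving comes from.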
NASA Astrophysics Data System (ADS)
Kaprykowsky, Hagen; Doshkov, Dimitar; Hoffmann, Christoph; Ndjiki-Nya, Patrick; Wiegand, Thomas
2011-09-01
Recent investigations have shown that one of the most beneficial elements for higher compression performance in high-resolution video is the incorporation of larger block structures. In this work, we address the question of how to incorporate perceptual aspects into new video coding schemes based on large block structures. This is rooted in the fact that especially high-frequency regions such as textures yield high coding costs when using classical prediction modes and encoder control based on the mean squared error. To overcome this problem, we investigate the incorporation of novel intra predictors based on image completion methods. Furthermore, the integration of a perceptual-based encoder control using the well-known structural similarity index is analyzed. A major aspect of this article is the evaluation of the coding results in a quantitative (i.e. statistical analysis of changes in mode decisions) as well as qualitative (i.e. coding efficiency) manner.
Investigating the structure preserving encryption of high efficiency video coding (HEVC)
NASA Astrophysics Data System (ADS)
Shahid, Zafar; Puech, William
2013-02-01
This paper presents a novel method for the real-time protection of the emerging High Efficiency Video Coding (HEVC) standard. Structure-preserving selective encryption is performed in the CABAC entropy coding module of HEVC, which is significantly different from the CABAC entropy coding of H.264/AVC. In the CABAC of HEVC, exponential Golomb coding is replaced by truncated Rice (TR) coding up to a specific value for binarization of transform coefficients. Selective encryption is performed using the AES cipher in cipher feedback mode on a plaintext of binstrings in a context-aware manner. The encrypted bitstream has exactly the same bit-rate and is format compliant. Experimental evaluation and security analysis of the proposed algorithm are performed on several benchmark video sequences containing different combinations of motion, texture and objects.
Development of a 3D FEL code for the simulation of a high-gain harmonic generation experiment.
Biedron, S. G.
1999-02-26
Over the last few years, there has been a growing interest in self-amplified spontaneous emission (SASE) free-electron lasers (FELs) as a means for achieving a fourth-generation light source. In order to correctly and easily simulate the many configurations that have been suggested, such as multi-segmented wigglers and the method of high-gain harmonic generation, we have developed a robust three-dimensional code. The specifics of the code, the comparison to linear theory, and future plans will be presented.
Experimental design and analysis of JND test on coded image/video
NASA Astrophysics Data System (ADS)
Lin, Joe Yuchieh; Jin, Lina; Hu, Sudeng; Katsavounidis, Ioannis; Li, Zhi; Aaron, Anne; Kuo, C.-C. Jay
2015-09-01
The visual Just-Noticeable-Difference (JND) metric is characterized by the minimum detectable difference between two visual stimuli. Conducting the subjective JND test is a labor-intensive task. In this work, we present a novel interactive method for performing the visual JND test on compressed image/video. JND has been used to enhance perceptual visual quality in the context of image/video compression. Given a set of coding parameters, a JND test is designed to determine the distinguishable quality level against a reference image/video, which is called the anchor. The JND metric can be used to save coding bitrates by exploiting the special characteristics of the human visual system. The proposed JND test is conducted using a binary forced choice, which is often adopted to discriminate the difference in perception in a psychophysical experiment. The assessors are asked to compare coded image/video pairs and determine whether they are of the same quality or not. A bisection procedure is designed to find the JND locations so as to reduce the required number of comparisons over a wide range of bitrates. We demonstrate the efficiency of the proposed JND test and report experimental results on both image and video JND tests.
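The bisection procedure mentioned here can be sketched: given quality levels ordered from best to worst and an oracle answering "is this distinguishable from the anchor?", binary search finds the first noticeable level in O(log n) comparisons instead of n. The oracle below is a simulated stand-in for the human assessor, and the quality ladder is invented for illustration.

```python
def find_jnd(levels, distinguishable):
    """Bisection search for the JND location: the index of the first
    quality level (levels ordered from highest to lowest quality) that
    an assessor can distinguish from the anchor. Returns len(levels)
    if no level is distinguishable. Assumes the assessor's responses
    are monotone along the ladder."""
    lo, hi = 0, len(levels)
    while lo < hi:
        mid = (lo + hi) // 2
        if distinguishable(levels[mid]):
            hi = mid           # threshold is here or at a better level
        else:
            lo = mid + 1       # still indistinguishable, go lower in quality
    return lo

# Simulated assessor: coded versions below quality 40 look different
# from the anchor (a toy monotone response, not real subjective data).
qualities = [50, 46, 42, 38, 34, 30]
idx = find_jnd(qualities, lambda q: q < 40)
```

Six levels are resolved in three comparisons rather than six, which is the saving the paper's procedure exploits across many bitrate anchors.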
Motion estimation optimization tools for the emerging high efficiency video coding (HEVC)
NASA Astrophysics Data System (ADS)
Abdelazim, Abdelrahman; Masri, Wassim; Noaman, Bassam
2014-02-01
Recent developments in hardware and software have allowed a new generation of video quality. However, development in networking and digital communication is lagging behind. This prompted the establishment of the Joint Collaborative Team on Video Coding (JCT-VC), with the objective of developing a new high-performance video coding standard. A primary reason for developing the HEVC was to enable efficient processing and transmission of HD videos, which normally contain large smooth areas; therefore, the HEVC utilizes larger encoding blocks than the previous standard to enable more effective encoding, while smaller blocks are still exploited to encode fast/complex areas of video more efficiently. Hence, the implementation of the encoder investigates all possible block sizes. This and many added features of the new standard have led to a significant increase in the complexity of the encoding process. Furthermore, there is no automated process to decide when large blocks or small blocks should be exploited. To overcome this problem, this research proposes a set of optimization tools to reduce the encoding complexity while maintaining the same quality and compression rate. The method automates this process through a set of hierarchical steps, yet uses the standard refined coding tools.
Modeling the physical structure of star-forming regions with LIME, a 3D radiative transfer code
NASA Astrophysics Data System (ADS)
Quénard, D.; Bottinelli, S.; Caux, E.
2016-05-01
The ability to predict line emission is crucial in order to make a comparison with observations. From LTE to full radiative transfer codes, the goal is always to derive the physical properties of the source as accurately as possible. Non-LTE calculations can be very time consuming but are needed in most cases, since many studied regions are far from LTE.
Compact all-CMOS spatiotemporal compressive sensing video camera with pixel-wise coded exposure.
Zhang, Jie; Xiong, Tao; Tran, Trac; Chin, Sang; Etienne-Cummings, Ralph
2016-04-18
We present a low power all-CMOS implementation of temporal compressive sensing with pixel-wise coded exposure. This image sensor can increase video pixel resolution and frame rate simultaneously while reducing data readout speed. Compared to previous architectures, this system modulates exposure electronically at each individual photodiode without external optical components. Thus, the system provides a reduction in size and power compared to previous optics-based implementations. The prototype image sensor (127 × 90 pixels) can reconstruct 100 fps video from coded images sampled at 5 fps. With a 20× reduction in readout speed, our CMOS image sensor consumes only 14 μW to provide 100 fps video. PMID:27137331
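The forward model of pixel-wise coded exposure can be sketched as follows (illustrative dimensions and a contiguous random exposure window per pixel — assumptions for the sketch, not the sensor's actual parameters): each pixel integrates light only during its own window, so one low-rate readout encodes many high-rate sub-frames.

```python
import numpy as np

rng = np.random.default_rng(1)
T, H, W = 20, 8, 8                     # 20 sub-frames per readout
video = rng.random((T, H, W))          # latent high-speed scene

# one contiguous exposure window of length L per pixel
L = 5
start = rng.integers(0, T - L + 1, (H, W))
t = np.arange(T)[:, None, None]
mask = (t >= start) & (t < start + L)  # binary mask, shape (T, H, W)

coded = (mask * video).sum(axis=0)     # single coded-exposure image
print(coded.shape)                     # (8, 8)
```

Recovering the T sub-frames from `coded` requires a sparse reconstruction (e.g., a dictionary or total-variation prior) and is omitted here; the sketch only shows why the readout rate drops by a factor of T/1 while temporal information is preserved in the per-pixel coding.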
Region-of-interest based rate control for UAV video coding
NASA Astrophysics Data System (ADS)
Zhao, Chun-lei; Dai, Ming; Xiong, Jing-ying
2016-05-01
To meet the requirement of high-quality transmission of videos captured by unmanned aerial vehicles (UAV) over low bandwidth, a novel rate control (RC) scheme based on region-of-interest (ROI) is proposed. First, the ROI information is sent to the encoder, based on the latest High Efficiency Video Coding (HEVC) standard, to generate an ROI map. Then, using the ROI map, bit allocation methods are developed at the frame level and the large coding unit (LCU) level to avoid the inaccurate bit allocation produced by camera movement. Finally, the quantization parameter (QP) for each LCU is calculated using a more robust R-λ model. The experimental results show that the proposed RC method achieves a lower bitrate error and higher quality for the reconstructed video on the HEVC platform by choosing appropriate pixel weights.
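The R-λ step can be sketched as below (the α/β starting values and the bit budgets are illustrative assumptions; the λ-to-QP constants are those commonly cited for the HM reference rate control, and a real encoder updates α and β after coding each unit):

```python
import math

def qp_from_bits(bits, num_pixels, alpha=3.2, beta=-1.367,
                 c1=4.2005, c2=13.7122):
    """R-λ rate-control sketch: map a bit budget to a QP via
        λ  = α · bpp^β
        QP = c1 · ln(λ) + c2
    clipped to HEVC's valid QP range [0, 51]."""
    bpp = bits / num_pixels
    lam = alpha * bpp ** beta
    qp = c1 * math.log(lam) + c2
    return max(0, min(51, round(qp)))

bg_qp  = qp_from_bits(400, 64 * 64)   # background LCU budget
roi_qp = qp_from_bits(800, 64 * 64)   # ROI LCU gets twice the bits
print(roi_qp < bg_qp)                 # True: ROI is coded more finely
```

ROI weighting then reduces to giving ROI LCUs a larger share of the frame budget, which the model converts into a lower QP for those LCUs.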
2013-10-30
This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.
Self-images in the video monitor coded by monkey intraparietal neurons.
Iriki, A; Tanaka, M; Obayashi, S; Iwamura, Y
2001-06-01
When playing a video game, or using a teleoperator system, we feel our self-image projected into the video monitor as a part of, or an extension of, ourselves. Here we show that such a self-image is coded by bimodal (somatosensory and visual) neurons in the monkey intraparietal cortex, which have visual receptive fields (RFs) encompassing their somatosensory RFs. We showed earlier that these neurons code the schema of the hand, which can be altered in accordance with psychological modification of the body image; that is, when the monkey used a rake as a tool to extend its reach, the visual RFs of these neurons elongated along the axis of the tool, as if the monkey's self-image extended to the end of the tool. In the present experiment, we trained monkeys to recognize their image in a video monitor (despite the earlier general belief that monkeys are not capable of doing so), and demonstrated that the visual RFs of these bimodal neurons were now projected onto the video screen so as to code the image of the hand as an extension of the self. Furthermore, the coding of the imaged hand could be intentionally altered to match an image artificially modified in the monitor. PMID:11377755
NASA Astrophysics Data System (ADS)
Harvey, R. W. (Bob); Petrov, Yu. V.; Jaeger, E. F.; Berry, L. A.; Bonoli, P. T.; Bader, A.
2015-11-01
A time-dependent simulation of C-Mod pulsed ICRF power is performed, calculating minority hydrogen ion distribution functions with the CQL3D-Hybrid-FOW finite-orbit-width Fokker-Planck code. ICRF fields are calculated with the AORSA full-wave code, and RF diffusion coefficients are obtained from these fields using the DC Lorentz gyro-orbit code. Prior results with a zero-banana-width simulation using the CQL3D/AORSA/DC time-cycles showed a pronounced enhancement of the H distribution in the perpendicular velocity direction compared to results obtained from Stix's quasilinear theory, in general agreement with experiment. The present study compares the new FOW results, including relevant gyro-radius effects, to determine the importance of these effects on the NPA synthetic-diagnostic time dependence. The new NPA results show increased agreement with experiment, particularly in the ramp-down time after the ICRF pulse. Funded through a subcontract with the Massachusetts Institute of Technology by the USDOE-sponsored SciDAC Center for Simulation of Wave-Plasma Interactions.
NASA Astrophysics Data System (ADS)
Bates, Jason; Schmitt, Andrew; Zalesak, Steve
2015-11-01
The ablative Rayleigh-Taylor (RT) instability is a key factor in the performance of directly driven inertial-confinement-fusion (ICF) targets. Although this subject has been studied for quite some time, the accurate simulation of the ablative RT instability has proven to be a challenging task for many radiation-hydrodynamics codes, particularly when it comes to capturing the ablatively stabilized region of the linear dispersion spectrum and modeling ab initio perturbations. In this poster, we present results from recent two-dimensional numerical simulations of the ablative RT instability performed using the Eulerian code FastRad3D at the U.S. Naval Research Laboratory. We consider both planar and spherical geometries and low and moderate-Z target materials, use different laser wavelengths, and, where possible, compare our findings with experimental data, linearized theory, and/or results from other radiation-hydrodynamics codes. Overall, we find that FastRad3D is capable of simulating the ablative RT instability quite accurately, although some uncertainties and discrepancies persist. We discuss these issues, as well as some of the numerical challenges associated with modeling this class of problems. Work supported by U.S. DOE/NNSA.
Bekar, Kursat B; Azmy, Yousry
2009-01-01
Improved TORT solutions to the suite of 3D transport code benchmarks are presented in this study. Preliminary TORT solutions to this benchmark indicated that the majority of benchmark quantities for most benchmark cases are computed with good accuracy, and that accuracy improves with model refinement. However, TORT fails to compute accurate results for some benchmark cases with aspect ratios drastically different from 1, possibly due to ray effects. In this work, we employ the standard approach of splitting the solution to the transport equation into an uncollided flux and a fully collided flux via the code sequence GRTUNCL3D and TORT to mitigate ray effects. The results of this code sequence presented in this paper show that the accuracy of most benchmark cases improved substantially. Furthermore, the iterative convergence problems reported for the preliminary TORT solutions have been resolved by bringing the computational cells' aspect ratios closer to unity and, more importantly, by using 64-bit arithmetic precision in the calculation sequence.
NASA Astrophysics Data System (ADS)
Class, G.
1987-07-01
A program was developed to simulate gas motion and shine-through of thermal radiation in fusion-reactor vacuum flow channels. The inner surface of the flow channel is described by plane areas (triangles, parallelograms) and by surfaces of revolution. By introducing control planes in the flow path, variance reduction and a shortening of the computation are achieved through particle splitting and Russian roulette. The code is written in PL/I and verified using published data. Computer-aided input of model data is performed interactively, either under IBM-TSO or on a microprocessor (IBM PC-AT). The data files are exchangeable between the IBM mainframe and IBM PC computers. Both computers can produce plots of the elaborated channel model. For testing, the simulation can likewise be run interactively, whereas production computations can be issued in batch mode. The results of code verification are explained, and examples of channel models and of the interactive mode are given.
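The particle splitting / Russian roulette pair used at the control planes is a standard Monte Carlo variance-reduction device; a minimal sketch (the weight thresholds are illustrative, not the code's actual values) is:

```python
import random

def adjust_weight(weight, w_low=0.25, w_high=4.0, w_survive=1.0):
    """Splitting and Russian roulette at a control plane (sketch).

    Heavy particles (important regions) are split into several copies
    of reduced weight; light particles are killed with probability
    1 - weight/w_survive or promoted to weight w_survive, so the
    expected total weight is preserved in both cases.
    Returns the list of surviving particle weights."""
    if weight > w_high:                 # important region: split
        n = int(weight / w_survive)
        return [weight / n] * n
    if weight < w_low:                  # unimportant: roulette
        if random.random() < weight / w_survive:
            return [w_survive]          # survives with boosted weight
        return []                       # killed
    return [weight]                     # leave unchanged

print(adjust_weight(8.0))   # split into 8 copies of weight 1.0
print(adjust_weight(0.1))   # killed ([]) or promoted to [1.0]
```

Both branches are unbiased: splitting conserves weight exactly, and roulette conserves it in expectation, which is why the pair shortens the computation without distorting the tallies.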
Initial Self-Consistent 3D Electron-Cloud Simulations of the LHC Beam with the Code WARP+POSINST
Vay, J; Furman, M A; Cohen, R H; Friedman, A; Grote, D P
2005-10-11
We present initial results for the self-consistent beam-cloud dynamics simulations for a sample LHC beam, using a newly developed set of modeling capability based on a merge [1] of the three-dimensional parallel Particle-In-Cell (PIC) accelerator code WARP [2] and the electron-cloud code POSINST [3]. Although the storage ring model we use as a test bed to contain the beam is much simpler and shorter than the LHC, its lattice elements are realistically modeled, as is the beam and the electron cloud dynamics. The simulated mechanisms for generation and absorption of the electrons at the walls are based on previously validated models available in POSINST [3, 4].
Daavittila, Antti; Haemaelaeinen, Anitta; Kyrki-Rajamaeki, Riitta
2003-05-15
All of the three exercises of the Organization for Economic Cooperation and Development/Nuclear Regulatory Commission pressurized water reactor main steam line break (PWR MSLB) benchmark were calculated at VTT, the Technical Research Centre of Finland. For the first exercise, the plant simulation with point-kinetic neutronics, the thermal-hydraulics code SMABRE was used. The second exercise was calculated with the three-dimensional reactor dynamics code TRAB-3D, and the third exercise with the combination TRAB-3D/SMABRE. VTT has over ten years' experience of coupling neutronic and thermal-hydraulic codes, but this benchmark was the first time these two codes, both developed at VTT, were coupled together. The coupled code system is fast and efficient; the total computation time of the 100-s transient in the third exercise was 16 min on a modern UNIX workstation. The results of all the exercises are similar to those of the other participants. In order to demonstrate the effect of secondary circuit modeling on the results, three different cases were calculated. In case 1 there is no phase separation in the steam lines and no flow reversal in the aspirator. In case 2 the flow reversal in the aspirator is allowed, but there is no phase separation in the steam lines. Finally, in case 3 the drift-flux model is used for the phase separation in the steam lines, but the aspirator flow reversal is not allowed. With these two modeling variations, it is possible to cover a remarkably broad range of results. The maximum power level reached after the reactor trip varies from 534 to 904 MW, the range of the time of the power maximum being close to 30 s. Compared to the total calculated transient time of 100 s, the effect of the secondary side modeling is extremely important.
NASA Astrophysics Data System (ADS)
Darazi, R.; Gouze, A.; Macq, B.
2009-01-01
Reproducing natural scenes as we see them in the real world is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, since more information must be displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed in which the original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images, by designing an efficient transform that reduces the redundancy in the stereo image pair. This approach was inspired by the Lifting Scheme (LS). The novelty of our work is that the prediction step is replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction. The proposed scheme can be used for both lossless and lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
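The hybrid prediction step can be sketched as below (a toy version under strong assumptions: a single global disparity shift and one gain/offset pair for luminance correction, whereas a real coder estimates these per block):

```python
import numpy as np

def predict_right(left, disparity, gain, offset):
    """Hybrid prediction of the right view from the left (sketch):
    disparity-compensate the left image by a horizontal shift, then
    apply a luminance-correction gain/offset."""
    shifted = np.roll(left, -disparity, axis=1)   # horizontal shift
    return gain * shifted + offset

# lifting idea: code (left, residual) instead of (left, right)
left = np.arange(64, dtype=float).reshape(8, 8)
right = 1.1 * np.roll(left, -2, axis=1) + 3.0    # synthetic right view
residual = right - predict_right(left, 2, 1.1, 3.0)
print(np.abs(residual).max())   # 0.0 — prediction is exact here
```

The decoder inverts the step losslessly (right = residual + prediction), which is what makes the lifting structure usable for both lossless and lossy coding; in real stereo pairs the residual is small but nonzero, and it is what gets transformed and entropy coded.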
TRAC code assessment using data from SCTF Core-III, a large-scale 2D/3D facility
Boyack, B.E.; Shire, P.R.; Harmony, S.C.; Rhee, G.
1988-01-01
Nine tests from the SCTF Core-III configuration have been analyzed using TRAC-PF1/MOD1. The objectives of these assessment activities were to obtain a better understanding of the phenomena occurring during the refill and reflood phases of a large-break loss-of-coolant accident, to determine the accuracy to which key parameters are calculated, and to identify deficiencies in key code correlations and models that provide closure for the differential equations defining thermal-hydraulic phenomena in pressurized water reactors. Overall, the agreement between calculated and measured values of peak cladding temperature is reasonable. In addition, TRAC adequately predicts many of the trends observed in both the integral effect and separate effect tests conducted in SCTF Core-III. The importance of assessment activities that consider potential contributors to discrepancies between the measured and calculated results arising from three sources are described as those related to (1) knowledge about the facility configuration and operation, (2) facility modeling for code input, and (3) deficiencies in code correlations and models. An example is provided. 8 refs., 7 figs., 2 tabs.
Implementation and validation of a Reynolds stress model in the COMMIX-1C/RSM and CAPS-3D/RSM codes
Chang, F.C.; Bottoni, M.
1995-08-01
A Reynolds stress model (RSM) of turbulence, based on seven transport equations, has been linked to the COMMIX-1C/RSM and CAPS-3D/RSM computer codes. Six of the equations model the transport of the components of the Reynolds stress tensor and the seventh models the dissipation of turbulent kinetic energy. When a fluid is heated, four additional transport equations are used: three for the turbulent heat fluxes and one for the variance of temperature fluctuations. All of the analytical and numerical details of the implementation of the new turbulence model are documented. The model was verified by simulation of homogeneous turbulence.
Joint source-channel coding for wireless object-based video communications utilizing data hiding.
Wang, Haohong; Tsaftaris, Sotirios A; Katsaggelos, Aggelos K
2006-08-01
In recent years, joint source-channel coding for multimedia communications has gained increased popularity. However, very limited work has been conducted to address the problem of joint source-channel coding for object-based video. In this paper, we propose a data hiding scheme that improves the error resilience of object-based video by adaptively embedding the shape and motion information into the texture data. Within a rate-distortion theoretical framework, the source coding, channel coding, data embedding, and decoder error concealment are jointly optimized based on knowledge of the transmission channel conditions. Our goal is to achieve the best video quality as expressed by the minimum total expected distortion. The optimization problem is solved using Lagrangian relaxation and dynamic programming. The performance of the proposed scheme is tested using simulations of a Rayleigh-fading wireless channel, and the algorithm is implemented based on the MPEG-4 verification model. Experimental results indicate that the proposed hybrid source-channel coding scheme significantly outperforms methods without data hiding or unequal error protection. PMID:16900673
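The Lagrangian-relaxation step reduces, per coding unit, to minimizing D + λ·R over the joint option set; a minimal sketch (the (rate, distortion) pairs are invented for illustration, with the expected distortion assumed to already fold in the channel-loss model, as in the paper's framework):

```python
# hypothetical options: (rate in bits, expected end-to-end distortion)
options = [
    (100,  90.0),   # coarse quantizer, weak channel protection
    (160,  40.0),   # coarse quantizer, strong protection
    (220,  25.0),   # fine quantizer, weak protection
    (300,  12.0),   # fine quantizer, strong protection
]

def pick(options, lam):
    """Lagrangian selection: minimize D + λ·R over the option set."""
    return min(options, key=lambda o: o[1] + lam * o[0])

print(pick(options, 0.05))   # small λ: distortion dominates
print(pick(options, 1.0))    # large λ: rate dominates
```

Sweeping λ traces the convex hull of the operational rate-distortion points; the full scheme additionally uses dynamic programming to handle dependencies between packets, which this per-unit sketch omits.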
Parallel Processing of Distributed Video Coding to Reduce Decoding Time
NASA Astrophysics Data System (ADS)
Tonomura, Yoshihide; Nakachi, Takayuki; Fujii, Tatsuya; Kiya, Hitoshi
This paper proposes a parallelized DVC framework that treats each bitplane independently to reduce decoding time. Unfortunately, simple parallelization generates inaccurate bit probabilities because additional side information is not available for the decoding of subsequent bitplanes, which degrades coding efficiency. Our solution is an estimation method that calculates the bit probability as accurately as possible by index assignment, without recourse to side information. Moreover, we improve the coding performance of the rate-adaptive LDPC (RA-LDPC) codes used in the parallelized DVC framework: the proposal selects a fitting sparse matrix for each bitplane according to syndrome-rate estimation at the encoder side. Simulations show that our parallelization method reduces the decoding time by up to 35% and achieves a bit rate reduction of about 10%.
NASA Technical Reports Server (NTRS)
Walitt, L.
1982-01-01
The VANS successive approximation numerical method was extended to the computation of three-dimensional, viscous, transonic flows in turbomachines. A cross-sectional computer code, which conserves mass flux at each point of the cross-sectional surface of computation, was developed. In the VANS numerical method, the cross-sectional computation follows a blade-to-blade calculation. Numerical calculations were made for an axial annular turbine cascade and a transonic centrifugal impeller with splitter vanes. The subsonic turbine cascade computation was performed in a blade-to-blade surface to evaluate the accuracy of the blade-to-blade mode of marching. Calculated blade pressures at the hub, mid, and tip radii of the cascade agreed with corresponding measurements. The transonic impeller computation was conducted to test the newly developed, locally mass-flux-conservative cross-sectional computer code. Both blade-to-blade and cross-sectional modes of calculation were implemented for this problem. A triple-point shock structure was computed in the inducer region of the impeller. In addition, time-averaged shroud static pressures generally agreed with measured shroud pressures. It is concluded that the blade-to-blade computation produces a useful engineering flow field in regions of subsonic relative flow, and that cross-sectional computation, with a locally mass-flux-conservative continuity equation, is required to compute the shock waves in regions of supersonic relative flow.
Assessment of a 3-D boundary layer code to predict heat transfer and flow losses in a turbine
NASA Technical Reports Server (NTRS)
Anderson, O. L.
1984-01-01
Zonal concepts are utilized to delineate regions of application of three-dimensional boundary layer (DBL) theory. The zonal approach requires three distinct analyses. A modified version of the 3-DBL code named TABLET is used to analyze the boundary layer flow. This modified code solves the finite difference form of the compressible 3-DBL equations in a nonorthogonal surface coordinate system which includes coriolis forces produced by coordinate rotation. These equations are solved using an efficient, implicit, fully coupled finite difference procedure. The nonorthogonal surface coordinate system is calculated using a general analysis based on the transfinite mapping of Gordon which is valid for any arbitrary surface. Experimental data is used to determine the boundary layer edge conditions. The boundary layer edge conditions are determined by integrating the boundary layer edge equations, which are the Euler equations at the edge of the boundary layer, using the known experimental wall pressure distribution. Starting solutions along the inflow boundaries are estimated by solving the appropriate limiting form of the 3-DBL equations.
Applications of just-noticeable depth difference model in joint multiview video plus depth coding
NASA Astrophysics Data System (ADS)
Liu, Chao; An, Ping; Zuo, Yifan; Zhang, Zhaoyang
2014-10-01
A new multiview just-noticeable-depth-difference (MJNDD) model is presented and applied to compress joint multiview video plus depth. Many video coding algorithms remove spatial, temporal, and statistical redundancies, but they are not capable of removing perceptual redundancy. Since the final receptor of video is the human eye, we can remove perceptual redundancy to gain higher compression efficiency according to the properties of the human visual system (HVS). The traditional just-noticeable-distortion (JND) model in the pixel domain contains luminance contrast and spatial-temporal masking effects, which describe the perceptual redundancy quantitatively. Since the HVS is also very sensitive to depth information, the MJNDD model is constructed by combining the traditional JND model with a just-noticeable-depth-difference (JNDD) model. The texture video is divided into background and foreground areas using depth information, and different JND threshold values are assigned to the two parts. The MJNDD model is then used to encode the texture video in JMVC. When encoding the depth video, the JNDD model is applied to remove block artifacts and protect edges. We then use VSRS 3.5 (View Synthesis Reference Software) to generate the intermediate views. Experimental results show that our model can tolerate more noise and improves compression efficiency by 25.29 percent on average, and by up to 54.06 percent, compared to JMVC, while maintaining subjective quality. Hence it achieves a high compression ratio at a low bit rate.
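The depth-based foreground/background partition can be sketched as a per-pixel threshold map (all numbers are illustrative assumptions, not the paper's trained values, and the convention that larger depth values mean nearer objects varies between data sets):

```python
import numpy as np

def jnd_threshold_map(depth, fg_thresh=3.0, bg_thresh=8.0,
                      depth_cut=128):
    """Assign per-pixel JND thresholds from a depth map (sketch).

    Pixels at depth >= depth_cut are treated as foreground and get
    a stricter (smaller) threshold, since the HVS is more sensitive
    there; background pixels tolerate more coding distortion."""
    fg = depth >= depth_cut
    return np.where(fg, fg_thresh, bg_thresh)

depth = np.array([[200.0, 50.0],
                  [130.0, 10.0]])
print(jnd_threshold_map(depth))
# [[3. 8.]
#  [3. 8.]]
```

The encoder can then suppress any residual whose magnitude falls below the local threshold, spending bits only where the distortion would be visible.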
Multiview video codec based on KTA techniques
NASA Astrophysics Data System (ADS)
Seo, Jungdong; Kim, Donghyun; Ryu, Seungchul; Sohn, Kwanghoon
2011-03-01
Multi-view video coding (MVC) is a video coding standard developed by MPEG and VCEG for multi-view video. It showed an average PSNR gain of 1.5 dB compared with view-independent coding by H.264/AVC. However, because the resolutions of multi-view video are getting higher for a more realistic 3D effect, a higher-performance video codec is needed. MVC adopted the hierarchical B-picture structure and inter-view prediction as core techniques. The hierarchical B-picture structure removes temporal redundancy, and inter-view prediction reduces inter-view redundancy by compensated prediction from the reconstructed neighboring views. Nevertheless, MVC has an inherent limitation in coding efficiency, because it is based on H.264/AVC. To overcome this limit, an enhanced video codec for multi-view video based on the Key Technology Area (KTA) is proposed. KTA is a high-efficiency video codec by the Video Coding Experts Group (VCEG), developed to achieve coding efficiency beyond H.264/AVC; the KTA software showed better coding gain than H.264/AVC by using additional coding techniques. These techniques and inter-view prediction are implemented in the proposed codec, which shows high coding gain compared with view-independent coding by KTA. The results indicate that inter-view prediction can achieve even higher efficiency in a multi-view codec based on a high-performance video codec such as HEVC.
A video coding scheme based on joint spatiotemporal and adaptive prediction.
Jiang, Wenfei; Latecki, Longin Jan; Liu, Wenyu; Liang, Hui; Gorman, Ken
2009-05-01
We propose a video coding scheme that departs from traditional Motion Estimation/DCT frameworks and instead uses Karhunen-Loeve Transform (KLT)/Joint Spatiotemporal Prediction framework. In particular, a novel approach that performs joint spatial and temporal prediction simultaneously is introduced. It bypasses the complex H.26x interframe techniques and it is less computationally intensive. Because of the advantage of the effective joint prediction and the image-dependent color space transformation (KLT), the proposed approach is demonstrated experimentally to consistently lead to improved video quality, and in many cases to better compression rates and improved computational speed. PMID:19342337
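The image-dependent color transform mentioned above — a KLT over the three color channels — can be sketched as an eigendecomposition of the channel covariance (a minimal illustration of the idea, not the paper's implementation):

```python
import numpy as np

def klt_color_transform(image):
    """Image-dependent color-space transform via the KLT (sketch):
    eigendecompose the 3x3 covariance of the RGB samples and project
    onto the eigenvectors, decorrelating the color channels.
    image: (H, W, 3). Returns (coefficients, basis, mean)."""
    pixels = image.reshape(-1, 3).astype(float)
    mean = pixels.mean(axis=0)
    cov = np.cov((pixels - mean).T)        # 3x3 channel covariance
    _, basis = np.linalg.eigh(cov)         # orthonormal eigenvectors
    coeffs = (pixels - mean) @ basis       # decorrelated channels
    return coeffs.reshape(image.shape), basis, mean

rng = np.random.default_rng(2)
img = rng.random((16, 16, 3))
coeffs, basis, mean = klt_color_transform(img)
c = coeffs.reshape(-1, 3)
print(np.round(np.cov(c.T), 6))   # approximately diagonal
```

Because the basis is orthonormal, the transform is exactly invertible (pixels = coeffs @ basis.T + mean); unlike a fixed transform such as YCbCr, the KLT adapts to each image's statistics, which is the source of the coding gain claimed in the abstract.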
2013-10-01
Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.
The H.264/MPEG-4 AVC video coding standard and its deployment status
NASA Astrophysics Data System (ADS)
Sullivan, Gary J.
2005-07-01
The new video coding standard known as H.264/MPEG-4 Advanced Video Coding (AVC), now in its fourth version, has demonstrated significant achievements in terms of coding efficiency, robustness to a variety of network channels and conditions, and breadth of application. The recent fidelity range extensions have further improved compression quality and further broadened the range of applications, and the recent corrigenda have excised the inevitable errata of the initially-approved versions of the specification. Patent licensing programs have begun, the standard has been adopted into a variety of application specifications, and products suitable for widespread deployment have begun to appear. New work toward the near-term development of scalable video coding (SVC) extensions is also under way. This paper does not attempt to review the details of the H.264/MPEG-4 AVC technical design, as that subject has been covered already in a number of publications. Instead, it covers only the high-level design characteristics and focuses more on the recent developments in the standardization community and the deployment status of the specification.
NASA Technical Reports Server (NTRS)
Hathaway, M. D.; Wood, J. R.; Wasserbauer, C. A.
1991-01-01
A low speed centrifugal compressor facility recently built by the NASA Lewis Research Center is described. The purpose of this facility is to obtain detailed flow field measurements for computational fluid dynamic code assessment and flow physics modeling in support of Army and NASA efforts to advance small gas turbine engine technology. The facility is heavily instrumented with pressure and temperature probes, both in the stationary and rotating frames of reference, and has provisions for flow visualization and laser velocimetry. The facility will accommodate rotational speeds to 2400 rpm and is rated at pressures to 1.25 atm. The initial compressor stage being tested is geometrically and dynamically representative of modern high-performance centrifugal compressor stages with the exception of Mach number levels. Preliminary experimental investigations of inlet and exit flow uniformly and measurement repeatability are presented. These results demonstrate the high quality of the data which may be expected from this facility. The significance of synergism between computational fluid dynamic analysis and experimentation throughout the development of the low speed centrifugal compressor facility is demonstrated.
Standard-Compliant Multiple Description Video Coding over Packet Loss Network
NASA Astrophysics Data System (ADS)
Bai, Huihui; Zhao, Yao; Zhang, Mengmeng
2010-12-01
An effective multiple description video coding scheme is proposed for transmission over packet-loss networks. Using priority encoding transmission, we attempt to overcome the limitations of specific scalable video codecs and apply FEC-based multiple description coding to a common video coder, such as the standard H.264. First, multiple descriptions are generated by temporal downsampling, and frames with high motion change are duplicated in each description. Then, according to the different motion characteristics between frames, each description is divided into several messages, so that better temporal correlation is maintained within each message for better estimation when information is lost. Based on priority encoding transmission, unequal protection is assigned within each message; furthermore, the priorities are designed in view of the packet loss rate of the channel and the significance of the bit streams. Experimental results validate the effectiveness of the proposed scheme, which performs better than the equal-protection scheme and other state-of-the-art methods.
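The description-generation step can be sketched as follows (`motion` is a hypothetical per-frame motion measure and the threshold is illustrative; the actual scheme's measure is not specified here):

```python
def make_descriptions(frames, motion, high_motion=0.5):
    """Split a frame sequence into two descriptions by temporal
    downsampling (even/odd frames), duplicating frames whose motion
    activity exceeds a threshold into both descriptions (sketch)."""
    d0, d1 = [], []
    for i, f in enumerate(frames):
        if motion[i] > high_motion:    # high motion: duplicate
            d0.append(f)
            d1.append(f)
        elif i % 2 == 0:
            d0.append(f)
        else:
            d1.append(f)
    return d0, d1

frames = list("ABCDEF")
motion = [0.1, 0.9, 0.2, 0.1, 0.8, 0.3]
print(make_descriptions(frames, motion))
# (['A', 'B', 'C', 'E'], ['B', 'D', 'E', 'F'])
```

If one description is lost, the decoder still receives every high-motion frame plus half of the remaining frames, so the hardest-to-interpolate content survives and the rest can be estimated from temporal neighbors.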
NASA Astrophysics Data System (ADS)
Patwary, Nurmohammed; Doblas, Ana; King, Sharon V.; Preza, Chrysanthe
2014-03-01
Imaging thick biological samples introduces spherical aberration (SA) due to refractive-index (RI) mismatch between the specimen and the imaging lens immersion medium. SA increases with either depth or RI mismatch, so it is difficult to find a static compensator for SA [1]. Different wavefront coding methods [2,3] have been studied to find an optimal static wavefront correction that reduces depth-induced SA. Inspired by a recent design of a radially symmetric squared cubic (SQUBIC) phase mask that was tested for scanning confocal microscopy [1], we have modified the pupil using the SQUBIC mask to engineer the point spread function (PSF) of a wide-field fluorescence microscope. In this study, simulated images of a thick test object were generated using a wavefront-encoded engineered PSF (WFE-PSF) and were restored using space-invariant (SI) and depth-variant (DV) expectation-maximization (EM) algorithms implemented in the COSMOS software [4]. Quantitative comparisons between restorations obtained with the conventional and WFE PSFs are presented. Simulations show that, in the presence of SA, the use of the SI EM algorithm and a single SQUBIC-encoded WFE-PSF can yield adequate image restoration. In addition, in the presence of a large amount of SA, it is possible to obtain adequate results using the DV EM algorithm with fewer DV PSFs than would typically be required for processing images acquired with a clear circular aperture (CCA) PSF. This result implies that modifying a wide-field system with the SQUBIC mask renders the system less sensitive to depth-induced SA and suitable for imaging samples at larger optical depths.
Gifford, Kent A; Wareing, Todd A; Failla, Gregory; Horton, John L; Eifel, Patricia J; Mourtada, Firas
2010-01-01
A patient dose distribution was calculated by a 3D multigroup SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multigroup SN particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks-exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set, and dose-volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between Attila and MCNPX dose calculations at the ICRU 38 points was within ±3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by no more than ±1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data show that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations. PMID:20160682
An edge-based temporal error concealment for MPEG-coded video
NASA Astrophysics Data System (ADS)
Huang, Yu-Len; Lien, Hsiu-Yi
2005-07-01
When transmitted over unreliable channels, compressed video can suffer severe degradation, and various strategies have been employed to maintain acceptable quality in the decoded image sequence. Error concealment (EC) is one of the most effective approaches to diminishing this quality degradation, and a number of EC algorithms have been developed to combat transmission errors in MPEG-coded video. These methods generally reconstruct smooth or regular damaged macroblocks well. However, when the damaged macroblocks are irregular or high in detail, the reconstruction may exhibit noticeable blurring or fail to match the surrounding macroblocks. This paper proposes an edge-based temporal EC model to conceal such errors. In the proposed method, both the spatial and the temporal contextual features in the compressed video are measured using an edge detector, i.e., the Sobel operator. The edge information surrounding a damaged macroblock is used to estimate the lost motion vectors based on the boundary matching technique. The estimated motion vectors are then used to reconstruct the damaged macroblock by exploiting information in the reference frames. In comparison with traditional EC algorithms, the proposed method provides a significant improvement in both objective peak signal-to-noise ratio (PSNR) and subjective visual quality for MPEG-coded video.
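The boundary matching step in the abstract above can be sketched as follows. This is an illustrative Python sketch of the general principle, not the paper's implementation: for each candidate motion vector, the border rows and columns of the block the vector would copy from the reference frame are compared against the intact pixels surrounding the damaged macroblock, and the candidate with the lowest mean absolute difference wins. The function name, frame layout, and candidate set are all assumptions.

```python
# Hypothetical sketch of boundary-matching motion-vector estimation for a
# damaged macroblock; names and conventions are illustrative only.

def boundary_match_mv(cur, ref, x, y, size, candidates):
    """Pick the candidate motion vector (dx, dy) whose block in `ref`
    best matches the pixels bordering the damaged block at (x, y) in `cur`.

    cur, ref : 2-D lists of grayscale pixels (rows of columns); `cur` has a
               damaged size x size block at (x, y) but intact surroundings.
    candidates : iterable of (dx, dy) motion-vector guesses.
    """
    h, w = len(cur), len(cur[0])

    def cost(dx, dy):
        err, n = 0, 0
        for i in range(size):
            # compare the four one-pixel borders around the damaged block
            # in `cur` against the corresponding edge rows/columns of the
            # candidate replacement block in `ref`
            for bx, by, rx, ry in (
                (x + i, y - 1, x + i + dx, y + dy),                 # top
                (x + i, y + size, x + i + dx, y + size - 1 + dy),   # bottom
                (x - 1, y + i, x + dx, y + i + dy),                 # left
                (x + size, y + i, x + size - 1 + dx, y + i + dy),   # right
            ):
                if 0 <= bx < w and 0 <= by < h and 0 <= rx < w and 0 <= ry < h:
                    err += abs(cur[by][bx] - ref[ry][rx])
                    n += 1
        return err / max(n, 1)

    return min(candidates, key=lambda mv: cost(*mv))
```

In a static scene (current frame identical to the reference frame), the zero motion vector should give the lowest boundary mismatch and be selected.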
Mixture block coding with progressive transmission in packet video. Appendix 1: Item 2. M.S. Thesis
NASA Technical Reports Server (NTRS)
Chen, Yun-Chung
1989-01-01
Video transmission will become an important part of future multimedia communication because of dramatically increasing user demand for video and the rapid evolution of coding algorithms and VLSI technology. Video transmission will be part of the broadband integrated services digital network (B-ISDN). Asynchronous transfer mode (ATM) is a viable candidate for implementing B-ISDN due to its inherent flexibility, service independence, and high performance. According to the characteristics of ATM, information has to be coded into discrete cells that travel independently through the packet-switching network. A practical realization of an ATM video codec called Mixture Block Coding with Progressive Transmission (MBCPT) is presented. This variable bit rate coding algorithm shows how constant-quality performance can be obtained according to user demand. Interactions between the codec and the network are emphasized, including packetization, service synchronization, flow control, and error recovery. Finally, some simulation results based on MBCPT coding with error recovery are presented.
Video segmentation using spatial proximity, color, and motion information for region-based coding
NASA Astrophysics Data System (ADS)
Hong, Won H.; Kim, Nam Chul; Lee, Sang-Mi
1994-09-01
An efficient video segmentation algorithm with a homogeneity measure that incorporates spatial proximity, color, and motion information simultaneously is presented for region-based coding. The procedure toward complete segmentation consists of two steps: primary segmentation and secondary segmentation. In the primary segmentation, an input image is finely segmented by FSCL (frequency-sensitive competitive learning). In the secondary segmentation, the many small or similar regions generated in the preceding step are eliminated or merged by a fast RSST (recursive shortest spanning tree). Experiments show that the proposed algorithm produces efficient segmentation results and that a video coding system using this algorithm yields visually acceptable quality, with a PSNR of 36-37 dB, at a very low bitrate of about 13.2 kbit/s.
Joint wavelet-based coding and packetization for video transport over packet-switched networks
NASA Astrophysics Data System (ADS)
Lee, Hung-ju
1996-02-01
In recent years, wavelet theory applied to image, audio, and video compression has been extensively studied. However, pursuing compression ratio alone without considering the underlying network is unrealistic, especially for multimedia applications over networks. In this paper, we present an integrated approach that attempts to preserve the advantages of wavelet-based image coding while providing a degree of robustness to lost packets over packet-switched networks. Two different packetization schemes, called the intrablock-oriented (IAB) and interblock-oriented (IRB) schemes, in conjunction with wavelet-based coding, are presented. Our approach is evaluated under two different packet loss models with various packet loss probabilities, through simulations driven by real video sequences.
Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos
NASA Astrophysics Data System (ADS)
Xu, Dawen; Wang, Rangding
2015-05-01
A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.
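The format-compliance property in the abstract above (encryption that leaves the bit rate exactly unchanged) can be illustrated with a sketch. This is not the paper's CABAC syntax-element selection; it is a generic length-preserving XOR stream cipher applied to a selected subset of bin-strings, with a stand-in hash-based keystream, to show why per-element XOR encryption keeps every element, and thus the whole stream, the same size.

```python
# Illustrative sketch only: length-preserving selective encryption via a
# XOR stream cipher. Element selection and keystream are stand-ins, not
# the paper's actual CABAC-based scheme.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Deterministic pseudo-random keystream built by counter-mode hashing."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def selective_xor(bins, key, selected):
    """XOR-encrypt only the bin-strings whose indices are in `selected`.

    Each bin-string keeps its exact length, so the overall size of the
    stream is unchanged; applying the function twice decrypts."""
    out = []
    for i, b in enumerate(bins):
        if i in selected:
            ks = keystream(key + i.to_bytes(4, "big"), len(b))
            b = bytes(x ^ y for x, y in zip(b, ks))
        out.append(b)
    return out
```

Because XOR is its own inverse, running `selective_xor` a second time with the same key and index set restores the original bin-strings, and unselected elements pass through untouched.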
NASA Astrophysics Data System (ADS)
Kurceren, Ragip; Modestino, James W.
1998-12-01
The use of forward error-control (FEC) coding, possibly in conjunction with ARQ techniques, has emerged as a promising approach for video transport over ATM networks, providing cell-loss recovery and/or bit error correction such as might be required for wireless links. Although FEC provides cell-loss recovery capabilities, it also introduces transmission overhead which can itself cause additional cell losses. A methodology is described for maximizing the number of video sources multiplexed at a given quality of service (QoS), measured in terms of decoded cell-loss probability, using interlaced FEC codes. The transport channel is modeled as a block interference channel (BIC) and the multiplexer as a single-server, deterministic-service, finite-buffer queue supporting N users. Based upon an information-theoretic characterization of the BIC and large-deviation bounds on the buffer overflow probability, the described methodology provides theoretically achievable upper limits on the number of sources multiplexed. The performance of specific coding techniques using interlaced nonbinary Reed-Solomon (RS) codes and binary rate-compatible punctured convolutional (RCPC) codes is illustrated.
Dunn, F.E.; Thomas, J.; Liaw, J.; Matos, J.E.
2008-07-15
For safety analyses to support conversion of MNSR reactors from HEU fuel to LEU fuel, a RELAP5-3D model was set up to simulate the entire MNSR system. This model includes the core, the beryllium reflectors, the water in the tank and the water in the surrounding pool. The MCNP code was used to obtain the power distributions in the core and to obtain reactivity feedback coefficients for the transient analyses. The RELAP5-3D model was validated by comparing measured and calculated data for the NIRR-1 reactor in Nigeria. Comparisons include normal operation at constant power and a 3.77 mk rod withdrawal transient. Excellent agreement was obtained for core coolant inlet and outlet temperatures for operation at constant power, and for power level, coolant inlet temperature, and coolant outlet temperature for the rod withdrawal transient. In addition to the negative reactivity feedbacks from increasing core moderator and fuel temperatures, it was necessary to calculate and include positive reactivity feedback from temperature changes in the radial beryllium reflector and changes in the temperature and density of the water in the tank above the core and at the side of the core. The validated RELAP5-3D model was then used to analyze 3.77 mk rod withdrawal transients for LEU cores with two UO₂ fuel pin designs. The impact of cracking of oxide LEU fuel is discussed. In addition, steady-state power operation at elevated power levels was evaluated to determine steady-state safety margins for onset of nucleate boiling and for onset of significant voiding. (author)
NASA Astrophysics Data System (ADS)
Dzhalandinov, A.; Tsofin, V.; Kochkin, V.; Panferov, P.; Timofeev, A.; Reshetnikov, A.; Makhotin, D.; Erak, D.; Voloschenko, A.
2016-02-01
Usually, the synthesis of two-dimensional and one-dimensional discrete-ordinates calculations is used to evaluate the neutron fluence on the VVER-1000 reactor pressure vessel (RPV) for prognosis of radiation embrittlement, but there are cases in which this approach is not applicable. For example, the latest VVER-1000 projects have an upgraded surveillance program: containers with surveillance specimens are located on the inner surface of the RPV, at the fast-neutron flux maximum. The synthesis approach is therefore not well suited to calculating the local disturbance of the neutron field at the RPV inner surface behind the surveillance specimens, because of their complicated, heterogeneous structure. In some cases the VVER-1000 core loading consists of fuel assemblies with different fuel heights, and the applicability of the synthesis approach to these fuel cycles is also questionable. The synthesis approach is likewise not sufficiently accurate for estimating the neutron fluence in the RPV region above the core top. For these reasons, only 3D neutron transport codes appear satisfactory for calculating the neutron fluence on the VVER-1000 RPV. Direct 3D calculations are also recommended by modern regulations.
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.
1994-01-01
A three-dimensional computational fluid dynamics code, RPLUS3D, which was developed for the reactive propulsive flows of ramjets and scramjets, was validated for glancing shock wave-boundary layer interactions. Both laminar and turbulent flows were studied. A supersonic flow over a wedge mounted on a flat plate was numerically simulated. For the laminar case, the static pressure distribution, velocity vectors, and particle traces on the flat plate were obtained. For turbulent flow, both the Baldwin-Lomax and Chien two-equation turbulent models were used. The static pressure distributions, pitot pressure, and yaw angle profiles were computed. In addition, the velocity vectors and particle traces on the flat plate were also obtained from the computed solution. Overall, the computed results for both laminar and turbulent cases compared very well with the experimentally obtained data.
NASA Astrophysics Data System (ADS)
Lampson, Alan I.; Plummer, David N.; Erkkila, John H.; Crowell, Peter G.; Helms, Charles A.
1998-05-01
This paper describes a series of analyses using the 3-D MINT Navier-Stokes and OCELOT wave optics codes to calculate beam quality in a COIL laser cavity. To make the analysis tractable, the problem was broken into two contributions to the medium quality: that associated with microscale disturbances, primarily from the transverse iodine injectors, and that associated with the macroscale, including boundary layers and shock-like effects. Results for both microscale and macroscale medium quality are presented for the baseline layer operating point in terms of single-pass wavefront error. These results show that the microscale optical path difference effects are 1D in nature and of low spatial order. The COIL medium quality is shown to be dominated by macroscale effects, primarily pressure waves generated by flow/boundary-layer interactions on the cavity shrouds.
Neighboring block based disparity vector derivation for multiview compatible 3D-AVC
NASA Astrophysics Data System (ADS)
Kang, Jewon; Chen, Ying; Zhang, Li; Zhao, Xin; Karczewicz, Marta
2013-09-01
3D-AVC, being developed under the Joint Collaborative Team on 3D Video Coding (JCT-3V), significantly outperforms Multiview Video Coding plus Depth (MVC+D), which encodes texture views and depth views with the multiview extension of H.264/AVC (MVC). However, when 3D-AVC is configured to support multiview compatibility, in which texture views are decoded without depth information, its coding performance degrades significantly. The reason is that the advanced coding tools incorporated into 3D-AVC do not perform well without a disparity vector converted from the depth information. In this paper, we propose a disparity vector derivation method that uses only the information in the texture views. The motion information of neighboring blocks is used to determine a disparity vector for a macroblock, so that the derived disparity vector can be used efficiently by the coding tools in 3D-AVC. The proposed method significantly improves the coding gain of 3D-AVC in the multiview-compatible mode, achieving about 20% BD-rate saving in the coded views and 26% BD-rate saving in the synthesized views on average.
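The neighboring-block derivation described above can be sketched in a few lines. This is a hedged illustration of the general idea only (scan neighbors in a fixed order and take the first motion vector that points to an inter-view reference picture, falling back to zero disparity); the data layout, scan order, and fallback are assumptions, not the normative 3D-AVC process.

```python
# Hedged sketch of neighboring-block disparity vector derivation.
# Each neighbor is a dict with 'mv' (a (dx, dy) tuple) and 'inter_view'
# (True when the block's reference picture lies in another view, i.e.
# its motion vector is actually a disparity vector).

def derive_disparity_vector(neighbors):
    """Scan neighbors in order (e.g. left, above, above-right, temporal)
    and return the first inter-view motion vector found."""
    for blk in neighbors:
        if blk is not None and blk.get("inter_view"):
            return blk["mv"]
    return (0, 0)  # fallback: zero disparity when no neighbor qualifies
```

The key property, reflected in the abstract, is that this derivation consumes only texture-view motion data and never touches the depth views.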
Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images
NASA Astrophysics Data System (ADS)
Botta, F.; Mairani, A.; Hobbs, R. F.; Vergara Gil, A.; Pacilio, M.; Parodi, K.; Cremonesi, M.; Coca Pérez, M. A.; Di Dia, A.; Ferrari, M.; Guerriero, F.; Battistoni, G.; Pedroli, G.; Paganelli, G.; Torres Aroche, L. A.; Sgouros, G.
2013-11-01
Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed for the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; this difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3-4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image
NASA Astrophysics Data System (ADS)
Cunningham, G.; Tu, W.; Morley, S.; Chen, Y.; Haidecuk, J.; De Pascuale, S.; Kletzing, C.
2014-12-01
Modeling the variation of the MeV electron phase space density in the inner magnetosphere during active times is sensitive to many parameters, including the initial and time-varying boundary conditions, VLF wave spectral properties, plasma density, and magnetic field. Historically, diffusion codes like LANL's DREAM3D have relied on the statistically-derived dependence of these parameters on geomagnetic indices, e.g. the wave intensity as a function of the AE index. However, the large number of satellites currently sampling the inner magnetosphere presents modelers with an unparalleled opportunity to create 'event-specific' models for many of these parameters. Toward this goal, we recently showed that using an event-specific model of the chorus wave intensity, built from proxy observations of low-energy electron precipitation observed by POES, along with a low-energy time-varying boundary condition informed by the Van Allen Probes, allows DREAM3D to reproduce the large enhancement of PSD for MeV electrons observed during the October 8-9, 2012, storm. One major limitation of this work is the fact that we used the static Sheeley plasma density model and a dipole magnetic field. Here we will discuss new results that use measurements of the plasma density inferred from the Van Allen Probes' EMFISIS instrument to build an event-specific, global, time-dependent model of the plasma density that we use in DREAM3D in combination with the Tsyganenko 2004 storm-time model of the magnetic field. We show that this combination of plasma density and magnetic field model reproduce the ratio of cyclotron frequency to plasma frequency reported by EMFISIS during the entirety of the October 8-9, 2012, storm at all L-shells of interest, whereas our earlier results did not use the correct ratio at most locations and times. Because this ratio is a key parameter governing the effectiveness of chorus waves in accelerating electrons to higher energy, our new DREAM3D results resolve several
NASA Astrophysics Data System (ADS)
Cunningham, G.; Tu, W.; Chen, Y.; Reeves, G. D.; Henderson, M. G.; Baker, D. N.; Blake, J. B.; Spence, H.
2013-12-01
During the interval October 8-9, 2012, the phase-space density (PSD) of high-energy electrons exhibited a dropout preceding an intense enhancement observed by the MagEIS and REPT instruments aboard the Van Allen Probes. The evolution of the PSD suggests heating by chorus waves, which were observed to have high intensities at the time of the enhancement [1]. Although intense chorus waves were also observed during the first Dst dip on October 8, no PSD enhancement was observed at this time. We demonstrate a quantitative reproduction of the entire event that makes use of three recent modifications to the LANL DREAM3D diffusion code: 1) incorporation of a time-dependent, low-energy boundary condition from the MagEIS instrument, 2) use of a time-dependent estimate of the chorus wave intensity derived from observations of POES low-energy electron precipitation, and 3) use of an estimate of the last closed drift shell, beyond which electrons are assumed to have a lifetime proportional to their drift period around Earth. The key features of the event are quantitatively reproduced by the simulation, including the dropout on October 8 and a rapid increase in PSD early on October 9, with a peak near L*=4.2. The DREAM3D code predicts the dropout on October 8 because this feature is dominated by magnetospheric compression and outward radial diffusion: the L* of the last closed drift shell reaches a minimum value of 5.33 at 1026 UT on October 8. We find that a 'statistical' wave model based on historical CRRES measurements binned in AE* does not reproduce the enhancement because the peak wave amplitudes are only a few tens of pT, whereas an 'event-specific' model reproduces both the magnitude and timing of the enhancement very well, as shown in the Figure, because the peak wave amplitudes are 10x higher. [1] 'Electron Acceleration in the Heart of the Van Allen Radiation Belts', G. D. Reeves et al., Science 1237743, Published online 25 July 2013 [DOI:10.1126/science
Not Available
1984-10-01
STEALTH is a family of computer codes that can be used to calculate a variety of physical processes in which the dynamic behavior of a continuum is involved. The version of STEALTH described in this volume is designed for calculations of fluid-structure interaction. This version of the program consists of a hydrodynamic version of STEALTH which has been coupled to a finite-element code, WHAMSE. STEALTH computes the transient response of the fluid continuum, while WHAMSE computes the transient response of shell and beam structures under external fluid loadings. The coupling between STEALTH and WHAMSE is performed during each cycle or step of a calculation. Separate calculations of fluid response and structure response are avoided, thereby giving a more accurate model of the dynamic coupling between fluid and structure. This volume provides the theoretical background, the finite-difference equations, the finite-element equations, a discussion of several sample problems, a listing of the input decks for the sample problems, a programmer's manual and a description of the input records for the STEALTH/WHAMSE computer program.
Qiang, J.; Leitner, D.; Todd, D.S.; Ryne, R.D.
2005-03-15
The superconducting ECR ion source VENUS serves as the prototype injector ion source for the Rare Isotope Accelerator (RIA) driver linac. The RIA driver linac requires a great variety of high charge state ion beams with up to an order of magnitude higher intensity than currently achievable with conventional ECR ion sources. In order to design the beam line optics of the low energy beam line for the RIA front end for the wide parameter range required for the RIA driver accelerator, reliable simulations of the ion beam extraction from the ECR ion source through the ion mass analyzing system are essential. The RIA low energy beam transport line must be able to transport intense beams (up to 10 mA) of light and heavy ions at 30 keV. For this purpose, LBNL is developing the parallel 3D particle-in-cell code IMPACT to simulate the ion beam transport from the ECR extraction aperture through the analyzing section of the low energy transport system. IMPACT, a parallel, particle-in-cell code, is currently used to model the superconducting RF linac section of RIA and is being modified in order to simulate DC beams from the ECR ion source extraction. By using the high performance of parallel supercomputing we will be able to account consistently for the changing space charge in the extraction region and the analyzing section. A progress report and early results in the modeling of the VENUS source will be presented.
A Novel Macroblock Level Rate Control Method for Stereo Video Coding
Zhu, Gaofeng; Jiang, Gangyi; Peng, Zongju; Shao, Feng; Chen, Fen; Ho, Yo-Sung
2014-01-01
To compress stereo video effectively, this paper proposes a novel macroblock (MB) level rate control method based on binocular perception. A binocular just-noticeable difference (BJND) model based on parallax matching is first used to describe binocular perception. The proposed rate control method is then performed in stereo video coding at four levels, namely the view level, group-of-pictures (GOP) level, frame level, and MB level. At the view level, different proportions of the bitrate are allocated to the left and right views of the stereo video according to a pre-statistical rate allocation proportion. At the GOP level, the total number of bits allocated to each GOP is computed and the initial quantization parameter of each GOP is set. At the frame level, the target bits allocated to each frame are computed. At the MB level, a visual perception factor, measured by the BJND value of the MB, is used to adjust the MB-level bit allocation, so that the rate control results are in line with human visual characteristics. Experimental results show that the proposed method controls the bitrate more accurately and achieves better subjective stereo video quality than other methods. PMID:24737956
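The MB-level step above, distributing a frame's bit budget according to a perceptual weight, can be sketched as follows. The weight mapping here is a stand-in (larger BJND means distortion is harder to notice, so such macroblocks are assumed to get fewer bits); the paper's actual formula is not reproduced.

```python
# Illustrative sketch of perceptually weighted MB-level bit allocation.
# The 1/(1+BJND) weighting is an assumption for illustration, not the
# paper's exact adjustment.

def allocate_mb_bits(frame_target_bits, bjnd_values):
    """Distribute a frame's bit budget over macroblocks in proportion to
    a perceptual weight derived from each MB's BJND value."""
    weights = [1.0 / (1.0 + b) for b in bjnd_values]
    total = sum(weights)
    return [frame_target_bits * w / total for w in weights]
```

By construction the per-MB allocations sum to the frame target, so the frame-level budget computed in the previous stage is preserved while bits shift toward perceptually sensitive macroblocks.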
3D Visualization of Machine Learning Algorithms with Astronomical Data
NASA Astrophysics Data System (ADS)
Kent, Brian R.
2016-01-01
We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Using Python code to build trees from galaxy catalogs, we render the results with the visualization suite Blender to produce interactive 360-degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets, or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods with regard to optimization and efficiency.
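The MST-based clustering idea behind the abstract above can be sketched with standard-library Python. This is a generic illustration under stated assumptions (Prim's O(n^2) algorithm on 3-D points, then cutting edges longer than a threshold to form clusters); the catalog, threshold, and function names are not taken from the work.

```python
# Minimal MST clustering sketch for a 3-D point catalog (illustrative).
import math

def mst_edges(points):
    """Return the n-1 MST edges of `points` as (i, j, length) tuples,
    using Prim's algorithm on Euclidean distances."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n          # cheapest known edge length into the tree
    parent = [-1] * n
    best[0] = 0.0
    edges = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] >= 0:
            edges.append((parent[u], u, best[u]))
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < best[v]:
                    best[v], parent[v] = d, u
    return edges

def count_clusters(points, cut_length):
    """Cut MST edges longer than `cut_length`; remaining connected
    components are the clusters (union-find bookkeeping)."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j, d in mst_edges(points):
        if d <= cut_length:
            parent[find(i)] = find(j)
    return len({find(i) for i in range(len(points))})
```

For example, four points forming two tight pairs separated by a large gap yield a 3-edge MST whose single long bridge edge, once cut, leaves two clusters.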
NASA Astrophysics Data System (ADS)
Wan, Jun; Ruan, Qiuqi; Li, Wei; An, Gaoyun; Zhao, Ruizhen
2014-03-01
Human activity recognition based on RGB-D data has received increasing attention in recent years. We propose a spatiotemporal feature named three-dimensional (3D) sparse motion scale-invariant feature transform (SIFT) from RGB-D data for activity recognition. First, we build pyramids as a scale space for each RGB and depth frame, and then use the Shi-Tomasi corner detector and sparse optical flow to quickly detect and track robust keypoints around the motion pattern in the scale space. Subsequently, local patches around the keypoints, extracted from the RGB-D data, are used to build 3D gradient and motion spaces, and SIFT-like descriptors are calculated on both spaces. The proposed feature is invariant to scale, translation, and partial occlusions. More importantly, the proposed feature is fast to compute, so it is well suited to real-time applications. We have evaluated the proposed feature under a bag-of-words model on three public RGB-D datasets: the one-shot learning ChaLearn Gesture Dataset, the Cornell Activity Dataset-60, and the MSR Daily Activity 3D dataset. Experimental results show that the proposed feature outperforms other spatiotemporal features and is comparable to other state-of-the-art approaches, even though there is only one training sample for each class.
NASA Astrophysics Data System (ADS)
Picot-Colbeaux, Géraldine; Devau, Nicolas; Thiéry, Dominique; Pettenati, Marie; Surdyk, Nicolas; Parmentier, Marc; Amraoui, Nadia; Crastes de Paulet, François; André, Laurent
2016-04-01
The Chalk aquifer is the main water resource for domestic water supply in many parts of northern France. In some basins, groundwater is frequently affected by quality problems concerning nitrates. Often close to or above the drinking-water standards, the nitrate concentration in groundwater is mainly due to historical agricultural practices, combined with leakage and aquifer recharge through the vadose zone. The complexity of the processes occurring in such an environment requires extensive knowledge of agronomy, geochemistry, and hydrogeology in order to understand, model, and predict the spatiotemporal evolution of the nitrate content and to provide a decision-support tool for water producers and stakeholders. To meet this challenge, conceptual and numerical models that accurately represent the specific features of the Chalk aquifer need to be developed. A multidisciplinary approach is developed to simulate storage and transport from the ground surface to the groundwater. This involves a new agronomic module, "NITRATE" (NItrogen TRansfer for Arable soil to groundwaTEr), a soil-crop model that calculates the nitrogen mass balance in arable soil, and the "PHREEQC" numerical code for geochemical calculations, both coupled with the 3D transient groundwater numerical code "MARTHE". In addition, new developments in the MARTHE code allow the dual-porosity and dual-permeability calculations needed in the fissured Chalk aquifer context. Integrating these existing multidisciplinary tools is a real challenge: the number of parameters must be reduced by selecting the relevant equations and simplifying them without altering the signal. The robustness and validity of these numerical developments are tested step by step with several simulations constrained by climate forcing, land use, and nitrogen inputs over several decades. First, simulations are performed on a 1D vertical unsaturated soil column to represent experimental nitrates
NASA Astrophysics Data System (ADS)
Coudoux, Francois-Xavier; Gazalet, Marc G.; Derviaux, Christian; Corlay, Patrick
2001-04-01
In this paper, we present a perceptual measure that predicts the visibility of the well-known blocking effect in discrete cosine transform coded image sequences. The main objective of this work is to use the results of the measure for adaptive video postprocessing, in order to significantly improve the visual quality of the decoded video sequences at the receiver. The proposed measure is based on a visual model accounting for both the spatial and temporal properties of the human visual system. The input to the visual model is the distorted sequence only. Psychovisual experiments have been carried out to determine the eye's sensitivity to blocking artifacts, by varying a number of visually significant parameters: background level, and spatial and temporal activities in the surrounding image. Results obtained for the measurement of the visibility thresholds enable us to estimate the model parameters. The visual model is finally applied to real coded video sequences. The comparison of measurement results with subjective tests shows that the proposed perceptual measure has a good correlation with subjective evaluation.
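The abstract does not give the measure's formula, but a minimal blockiness indicator in its spirit compares luminance jumps at 8x8 block boundaries against jumps elsewhere (a hypothetical sketch; the real measure additionally weights distortions by the visual model):

```python
import numpy as np

def blockiness(frame, block=8):
    """Ratio of mean absolute luminance jumps at block boundaries
    to mean jumps at non-boundary positions. Values well above 1.0
    suggest visible blocking artifacts."""
    f = np.asarray(frame, dtype=np.float64)
    dh = np.abs(np.diff(f, axis=1))  # jump between columns i and i+1
    dv = np.abs(np.diff(f, axis=0))  # jump between rows i and i+1
    hb = (np.arange(1, f.shape[1]) % block) == 0  # column boundaries
    vb = (np.arange(1, f.shape[0]) % block) == 0  # row boundaries
    on = dh[:, hb].mean() + dv[vb, :].mean()
    off = dh[:, ~hb].mean() + dv[~vb, :].mean()
    return on / (off + 1e-12)
```

A heavily quantized DCT frame with flat 8x8 blocks scores far above a smooth gradient, which scores near 1.0.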
Protection of HEVC Video Delivery in Vehicular Networks with RaptorQ Codes
Martínez-Rach, Miguel; López, Otoniel; Malumbres, Manuel Pérez
2014-01-01
With future vehicles equipped with processing capability, storage, and communications, vehicular networks will become a reality. A vast number of applications will arise that will make use of this connectivity. Some of them will be based on video streaming. In this paper we focus on streaming of video coded with the HEVC standard in vehicular networks and how it deals with packet losses with the aid of RaptorQ, a Forward Error Correction scheme. As vehicular networks are packet-loss-prone networks, protection mechanisms are necessary if we want to guarantee a minimum level of quality of experience to the final user. We have run simulations to evaluate which configurations fit better in this type of scenario. PMID:25136675
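RaptorQ itself is a rateless fountain code (specified in RFC 6330) and well beyond a short example. As a much simpler stand-in, the sketch below shows the basic erasure-recovery idea behind packet-level FEC with a single XOR parity packet, which can repair exactly one lost packet per group (illustrative only, not the RaptorQ algorithm):

```python
def xor_parity(packets):
    """Return one parity packet: byte-wise XOR of equal-length packets."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """received holds the source packets with exactly one entry None
    (the erased packet). XORing the parity with all survivors
    reconstructs the missing packet."""
    missing = received.index(None)
    out = bytearray(parity)
    for j, p in enumerate(received):
        if j != missing:
            for i, b in enumerate(p):
                out[i] ^= b
    return missing, bytes(out)
```

Real fountain codes generalize this idea: they generate as many independent parity packets as the channel requires, so any sufficiently large subset suffices for decoding.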
A modified prediction scheme of the H.264 multiview video coding to improve the decoder performance
NASA Astrophysics Data System (ADS)
Hamadan, Ayman M.; Aly, Hussein A.; Fouad, Mohamed M.; Dansereau, Richard M.
2013-02-01
In this paper, we present a modified inter-view prediction scheme for multiview video coding (MVC). With more inter-view prediction, the number of reference frames required to decode a single view increases. Consequently, the data size required to decode a single view increases, thus impacting the decoder performance. In this paper, we propose an MVC scheme that requires less inter-view prediction than the MVC standard scheme. The proposed scheme is implemented and tested on real multiview video sequences. Improvements are shown using the proposed scheme in terms of the average data size required either to decode a single view or to access any frame (i.e., random access), with comparable rate-distortion performance. It is compared to the MVC standard scheme and other improved techniques from the literature.
Low bit rate video coding using robust motion vector regeneration in the decoder.
Banham, M R; Brailean, J C; Chan, C L; Katsaggelos, A K
1994-01-01
In this paper, we present a novel coding technique that makes use of the nonstationary characteristics of an image sequence displacement field to estimate and encode motion information. We utilize an MPEG style codec in which the anchor frames in a sequence are encoded with a hybrid approach using quadtree, DCT, and wavelet-based coding techniques. A quadtree structured approach is also utilized for the interframe information. The main objective of the overall design is to demonstrate the coding potential of a newly developed motion estimator called the coupled linearized MAP (CLMAP) estimator. This estimator can be used as a means for producing motion vectors that may be regenerated at the decoder with a coarsely quantized error term created in the encoder. The motion estimator generates highly accurate motion estimates from this coarsely quantized data. This permits the elimination of a separately coded displaced frame difference (DFD) and coded motion vectors. For low bit rate applications, this is especially important because the overhead associated with the transmission of motion vectors may become prohibitive. We exploit both the advantages of the nonstationary motion estimator and the effective compression of the anchor frame coder to improve the visual quality of reconstructed QCIF format color image sequences at low bit rates. Comparisons are made with other video coding methods, including the H.261 and MPEG standards and a pel-recursive-based codec. PMID:18291958
MT3D was first developed by Chunmiao Zheng in 1990 at S.S. Papadopulos & Associates, Inc. with partial support from the U.S. Environmental Protection Agency (USEPA). Starting in 1990, MT3D was released as a public domain code from the USEPA. Commercial versions with enhanced capab...
Robust pedestrian tracking and recognition from FLIR video: a unified approach via sparse coding.
Li, Xin; Guo, Rui; Chen, Chao
2014-01-01
Sparse coding is an emerging method that has been successfully applied to both robust object tracking and recognition in the vision literature. In this paper, we propose to explore a sparse coding-based approach toward joint object tracking-and-recognition and explore its potential in the analysis of forward-looking infrared (FLIR) video to support nighttime machine vision systems. A key technical contribution of this work is to unify existing sparse coding-based approaches toward tracking and recognition under the same framework, so that they can benefit from each other in a closed loop. On the one hand, tracking the same object through temporal frames allows us to achieve improved recognition performance through dynamic updating of the template/dictionary and combining of multiple recognition results; on the other hand, the recognition of individual objects facilitates the tracking of multiple objects (i.e., walking pedestrians), especially in the presence of occlusion within a crowded environment. We report experimental results on both the CASIA Pedestrian Database and our own collected FLIR video database to demonstrate the effectiveness of the proposed joint tracking-and-recognition approach. PMID:24961216
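A common way to compute the sparse codes underlying such trackers is Orthogonal Matching Pursuit (OMP). The sketch below is a generic OMP implementation under the assumption of a dictionary with unit-norm columns, not the authors' specific tracker/recognizer:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily build a k-sparse code x
    such that y is approximately D @ x, where the columns of D are
    unit-norm atoms."""
    residual = y.astype(np.float64).copy()
    support = []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit all selected atoms jointly (the "orthogonal" step)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

In a tracker, y would be a candidate image patch and D a dictionary of target templates plus trivial (occlusion) atoms; the reconstruction error then scores the candidate.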
NASA Astrophysics Data System (ADS)
Draper, Martin; Usera, Gabriel
2015-04-01
The Scale Dependent Dynamic Model (SDDM) has been widely validated in large-eddy simulations using pseudo-spectral codes [1][2][3]. The scale dependency, in particular its power-law form, has also been demonstrated in a priori studies [4][5]. To the authors' knowledge there have been only a few attempts to use the SDDM in finite difference (FD) and finite volume (FV) codes [6][7], finding some improvements with the dynamic procedures (scale-independent or scale-dependent approach), but not showing the behavior of the scale-dependence parameter when using the SDDM. The aim of the present paper is to evaluate the SDDM in the open source code caffa3d.MBRi, an updated version of the code presented in [8]. caffa3d.MBRi is a FV code, second-order accurate, parallelized with MPI, in which the domain is divided into unstructured blocks of structured grids. To accomplish this, two cases are considered: flow between flat plates and flow over a rough surface with the presence of a model wind turbine, taking for this case the experimental data presented in [9]. In both cases the standard Smagorinsky Model (SM), the Scale Independent Dynamic Model (SIDM) and the SDDM are tested. As presented in [6][7], slight improvements are obtained with the SDDM. Nevertheless, the behavior of the scale-dependence parameter supports the generalization of the dynamic procedure proposed in the SDDM, particularly taking into account that no explicit filter is used (the implicit filter is unknown). [1] F. Porté-Agel, C. Meneveau, M.B. Parlange. "A scale-dependent dynamic model for large-eddy simulation: application to a neutral atmospheric boundary layer". Journal of Fluid Mechanics, 2000, 415, 261-284. [2] E. Bou-Zeid, C. Meneveau, M.B. Parlange. "A scale-dependent Lagrangian dynamic model for large eddy simulation of complex turbulent flows". Physics of Fluids, 2005, 17, 025105 (18p). [3] R. Stoll, F. Porté-Agel. "Dynamic subgrid-scale models for momentum and scalar fluxes in large-eddy simulations of
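For reference, the standard Smagorinsky model (SM) against which the dynamic models are compared can be stated in a few lines. The sketch below evaluates the eddy viscosity for a 2-D resolved velocity field with a fixed model constant; the dynamic procedures (SIDM, SDDM) instead compute the constant from the resolved scales:

```python
import numpy as np

def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, cs=0.16):
    """Standard Smagorinsky eddy viscosity nu_t = (Cs*Delta)^2 * |S|,
    with |S| = sqrt(2 S_ij S_ij) built from the resolved 2-D
    strain-rate tensor. Cs is fixed here (a common nominal value);
    dynamic models determine it from the flow."""
    s11 = dudx
    s22 = dvdy
    s12 = 0.5 * (dudy + dvdx)
    smag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * delta) ** 2 * smag
```

For a pure shear du/dy = 2 on a unit filter width, |S| = 2 and nu_t = (0.16)^2 * 2.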
NASA Astrophysics Data System (ADS)
Furukawa, Hidemitsu; Gong, Jin; Makino, Masato; Kabir, Md. Hasnat
2014-04-01
Recently we successfully developed novel transparent shape memory gels (SMG). The SMG memorize their original shapes during the gelation process. At room temperature, the SMG are elastic and show plasticity (yielding) under deformation. However, when heated above about 50 °C, the SMG undergo a hard-to-soft transition and return to their original shapes automatically. We focus on new soft and wet systems made of the SMG by 3-D printing technology.
Inter-bit prediction based on maximum likelihood estimate for distributed video coding
NASA Astrophysics Data System (ADS)
Klepko, Robert; Wang, Demin; Huchet, Grégory
2010-01-01
Distributed Video Coding (DVC) is an emerging video coding paradigm for systems that require low-complexity encoders supported by high-complexity decoders. A typical real-world application for a DVC system is mobile phones with video capture hardware that have a limited encoding capability supported by base stations with a high decoding capability. Generally speaking, a DVC system operates by dividing a source image sequence into two streams, key frames and Wyner-Ziv (W) frames, with the key frames being used to represent the source plus an approximation to the W frames called S frames (where S stands for side information), while the W frames are used to correct the bit errors in the S frames. This paper presents an effective algorithm to reduce the bit errors in the side information of a DVC system. The algorithm is based on maximum likelihood estimation to help predict future bits to be decoded. The reduction in bit errors in turn reduces the number of parity bits needed for error correction. Thus, a higher coding efficiency is achieved since fewer parity bits need to be transmitted from the encoder to the decoder. The algorithm is called inter-bit prediction because it predicts the bit-plane to be decoded from previously decoded bit-planes, one bit-plane at a time, starting from the most significant bit-plane. Results from experiments using real-world image sequences show that the inter-bit prediction algorithm does indeed reduce the bit rate by up to 13% for our test sequences. This bit rate reduction corresponds to a PSNR gain of about 1.6 dB for the W frames.
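The bit-plane decomposition that inter-bit prediction operates on (MSB-first decoding, one plane at a time) can be sketched as follows; the helper names are illustrative, and the actual DVC codec adds channel coding on top:

```python
import numpy as np

def to_bitplanes(img, nbits=8):
    """Split an 8-bit image into bit-planes, most significant first,
    mirroring the MSB-first decoding order used in DVC."""
    return [(img >> (nbits - 1 - b)) & 1 for b in range(nbits)]

def from_bitplanes(planes):
    """Reassemble the image from MSB-first bit-planes."""
    nbits = len(planes)
    img = np.zeros_like(planes[0], dtype=np.uint16)
    for b, p in enumerate(planes):
        img |= p.astype(np.uint16) << (nbits - 1 - b)
    return img
```

Decoding only the first few planes already yields a coarse image, which is why predicting each plane from the previously decoded ones can save parity bits.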
Fast mode decision for multiview video coding based on depth maps
NASA Astrophysics Data System (ADS)
Cernigliaro, Gianluca; Jaureguizar, Fernando; Ortega, Antonio; Cabrera, Julián; García, Narciso
2009-01-01
A new fast mode decision (FMD) algorithm for multi-view video coding (MVC) is presented. One of the multiple views is encoded based on traditional methods, which provides a mode decision (MD) map, while encoding of the other views is based on the analysis of the homogeneity of the depth map. This approach reduces the burden of the rate-distortion (RD) motion analysis based on the availability of a depth map, which is assumed to be provided by the acquisition process. Although there is a slight decrease of performance in rate-distortion terms, there is a significant reduction in computational cost.
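The homogeneity test at the heart of such fast mode decision schemes can be caricatured as a variance threshold on the depth block: flat depth suggests a single object at one distance, so large partitions suffice. The threshold value and mode names below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def candidate_modes(depth_block, var_thresh=4.0):
    """Toy fast mode decision: homogeneous depth suggests a large
    partition (SKIP/16x16), so the costly rate-distortion search over
    smaller partitions is run only for high-variance depth blocks."""
    if np.var(depth_block.astype(np.float64)) < var_thresh:
        return ["SKIP", "16x16"]
    return ["16x16", "16x8", "8x16", "8x8"]
```

Pruning the candidate list this way is where the reported reduction in computational cost comes from, at the price of a slight rate-distortion loss.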
NASA Astrophysics Data System (ADS)
Kalvas, T.; Tarvainen, O.; Clark, H.; Brinkley, J.; Ärje, J.
2011-09-01
A three-dimensional ion optical code IBSimu is being developed at the University of Jyväskylä. So far the plasma modelling of the code has been restricted to positive ion extraction systems, but now a negative ion plasma extraction model has been added. The plasma model has been successfully validated with simulations of the Spallation Neutron Source (SNS) ion source extraction both in cylindrical symmetry and in full 3D, also modelling electron beam dumping and ion beam tilt. A filament-driven multicusp ion source has been installed at the Texas A&M University Cyclotron Institute for production of H- and D- beams as a part of the facility upgrade. The light ion beams, produced by the ion source, are accelerated with the K150 cyclotron for production and reacceleration of rare isotopes. The extraction system for the ion source was designed with IBSimu. The extraction features a water-cooled puller electrode with a permanent magnet dipole field for dumping the co-extracted electrons and a decelerating Einzel lens for adjusting the beam focusing for further beam transport. The ion source and the puller electrode are tilted at a 4-degree angle with respect to the beam line. The extraction system can handle H- and D- beams with final beam energies from 5 keV to 15 keV using the same geometry, only adjusting the electrode voltages. So far, 24 μA of H- and 15 μA of D- have been extracted from the cyclotron.
NASA Astrophysics Data System (ADS)
Otis, Collin; Ferrero, Pietro; Candler, Graham; Givi, Peyman
2013-11-01
The scalar filtered mass density function (SFMDF) methodology is implemented into the computer code US3D. This is an unstructured Eulerian finite volume hydrodynamic solver and has proven very effective for simulation of compressible turbulent flows. The resulting SFMDF-US3D code is employed for large eddy simulation (LES) on unstructured meshes. Simulations are conducted of subsonic and supersonic flows under non-reacting and reacting conditions. The consistency and the accuracy of the simulated results are assessed along with appraisal of the overall performance of the methodology. The SFMDF-US3D is now capable of simulating high speed flows in complex configurations.
NASA Astrophysics Data System (ADS)
Audigane, Pascal; Chiaberge, Christophe; Mathurin, Frédéric; Lions, Julie; Picot-Colbeaux, Géraldine
2011-04-01
This paper is addressed to the TOUGH2 user community. It presents a new tool for handling simulations run with the TOUGH2 code, with specific application to CO2 geological storage. The tool is composed of separate FORTRAN subroutines (or modules) that can be run independently, using TOUGH2 input and output files in ASCII format. These modules have been developed specifically for modeling of carbon dioxide geological storage; their use with TOUGH2 and the equation-of-state module ECO2N, dedicated to CO2-water-salt mixture systems, with TOUGHREACT, an adaptation of TOUGH2 with ECO2N and geochemical fluid-rock interactions, and with TOUGH2 and the EOS7C module, dedicated to CO2-CH4 gas mixtures, is described. The objective is to save time in the pre-processing, execution and visualization of complex geometries for geological system representation. The workflow is rapid and user-friendly, and future extension to other TOUGH2 EOS modules for other contexts (e.g. nuclear waste disposal, geothermal production) is straightforward. Three examples are shown for validation: (i) leakage of CO2 up through an abandoned well; (ii) 3D reactive transport modeling of CO2 in a sandy aquifer formation in the Sleipner gas field (North Sea, Norway); and (iii) an estimation of enhanced gas recovery technology using CO2 as the injected and stored gas to produce methane in the K12B gas field (North Sea, Denmark).
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2004-12-01
Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradation due to packet loss and bit errors in the payload are quantitatively evaluated and their effects are assessed. We propose the use of a concatenated forward error correction (FEC) coding scheme employing Reed-Solomon (RS) codes and rate-compatible punctured convolutional (RCPC) codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC) approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection for the corresponding channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.
H.264/AVC intra-only coding (iAVC) techniques for video over wireless networks
NASA Astrophysics Data System (ADS)
Yang, Ming; Trifas, Monica; Xiong, Guolun; Rogers, Joshua
2009-02-01
The requirement to transmit video data over unreliable wireless networks (with the possibility of packet loss) is anticipated in the foreseeable future. Significant compression ratio and error resilience are both needed for complex applications including tele-operated robotics, vehicle-mounted cameras, sensor networks, etc. Block-matching based inter-frame coding techniques, including MPEG-4 and H.264/AVC, do not perform well in these scenarios due to error propagation between frames. Many wireless applications often use intra-only coding technologies such as Motion-JPEG, which exhibit better recovery from network data loss at the price of higher data rates. In order to address these research issues, an intra-only coding scheme of H.264/AVC (iAVC) is proposed. In this approach, each frame is coded independently as an I-frame. Frame copy is applied to compensate for packet loss. This approach is a good balance between compression performance and error resilience. It achieves compression performance comparable to Motion-JPEG2000 (MJ2), with lower complexity. Error resilience similar to Motion-JPEG (MJ) will also be accomplished. Since the intra-frame prediction with iAVC is strictly confined within the range of a slice, memory usage is also extremely low. Low computational complexity and memory usage are very crucial to mobile stations and devices in wireless networks.
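The frame-copy concealment mentioned above is simple to state: a lost frame is replaced by the last correctly decoded one. A minimal sketch at whole-frame granularity follows (the paper applies the idea per slice/packet; the fallback for leading losses is an assumption here):

```python
def conceal(frames):
    """Replace each lost frame (None) with the last correctly decoded
    frame; leading losses fall back to the first good frame."""
    first_good = next(f for f in frames if f is not None)
    out, last = [], first_good
    for f in frames:
        if f is None:
            out.append(last)   # hold the previous good frame
        else:
            out.append(f)
            last = f
    return out
```

Because every frame is intra-coded, the copied frame never propagates prediction drift into later frames, which is the error-resilience argument for iAVC.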
YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters
NASA Astrophysics Data System (ADS)
Schild, Jonas; Seele, Sven; Masuch, Maic
2012-03-01
Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream from a professional stereo camera rig rendered in a real-time game scene. We use this effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.
Lee, Dongyul; Lee, Chaewoo
2014-01-01
The advancement in wideband wireless network supports real time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance under 802.16 m environment. The result shows that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
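For small instances, such an ILP can be checked against brute force. The sketch below exhaustively assigns one MCS per SVC layer under an airtime budget and counts how many layers each user can decode; it is a toy model with illustrative utility, rates and parameters, not the paper's exact formulation:

```python
from itertools import product

def best_assignment(mcs_rates, user_max_mcs, layer_bits, airtime):
    """Try every MCS index per SVC layer. A user receives layers
    0..k as long as it can decode each layer's MCS (higher index =
    faster, less robust); utility = total layers delivered across
    users. A layer's airtime is bits / rate; the sum must fit."""
    n_layers = len(layer_bits)
    best_util, best = -1, None
    for choice in product(range(len(mcs_rates)), repeat=n_layers):
        t = sum(layer_bits[l] / mcs_rates[choice[l]] for l in range(n_layers))
        if t > airtime:
            continue  # violates the time-resource constraint
        util = 0
        for cap in user_max_mcs:  # highest MCS index this user decodes
            for l in range(n_layers):
                if choice[l] <= cap:
                    util += 1
                else:
                    break  # enhancement layers are useless without base
        if util > best_util:
            best_util, best = util, choice
    return best, best_util
```

The exponential enumeration is only viable for tiny cases; the point of the ILP formulation in the paper is to solve realistic instances.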
Landes, Constantin A; Weichert, Frank; Geis, Philipp; Wernstedt, Katrin; Wilde, Anja; Fritsch, Helga; Wagner, Mathias
2005-01-01
This study analyses tissue-plastinated vs. celloidin-embedded large serial sections, their inherent artefacts and aptitude with common video, analog or digital photographic on-screen reproduction. Subsequent virtual 3D microanatomical reconstruction will increase our knowledge of normal and pathological microanatomy for cleft-lip-palate (clp) reconstructive surgery. Of 18 fetal (six clp, 12 control) specimens, six randomized specimens (two clp) were BiodurE12-plastinated, sawn, burnished 90 µm thick transversely (five) or frontally (one), stained with azureII/methylene blue, and counterstained with basic-fuchsin (TP-AMF). Twelve remaining specimens (four clp) were celloidin-embedded, microtome-sectioned 75 µm thick transversely (ten) or frontally (two), and stained with haematoxylin–eosin (CE-HE). Computed-planimetry gauged artefacts, structure differentiation was compared with light microscopy on video, analog and digital photography. Total artefact was 0.9% (TP-AMF) and 2.1% (CE-HE); TP-AMF showed higher colour contrast, gamut and luminance, and CE-HE more red contrast, saturation and hue (P < 0.4). All (100%) structures of interest were light microscopically discerned, 83% on video, 76% on analog photography and 98% in digital photography. Computed image analysis assessed the greatest colour contrast, gamut, luminance and saturation on video; the most detailed, colour-balanced and sharpest images were obtained with digital photography (P < 0.02). TP-AMF retained spatial oversight, covered the entire area of interest and should be combined in different specimens with CE-HE which enables more refined muscle fibre reproduction. Digital photography is preferred for on-screen analysis. PMID:16050904
Low-cost multi-hypothesis motion compensation for video coding
NASA Astrophysics Data System (ADS)
Chen, Lei; Dong, Shengfu; Wang, Ronggang; Wang, Zhenyu; Ma, Siwei; Wang, Wenmin; Gao, Wen
2014-02-01
In conventional motion compensation, a prediction block is associated with only one motion vector for a P frame. Multi-hypothesis motion compensation (MHMC) was proposed to improve the prediction performance of conventional motion compensation. However, multiple motion vectors have to be searched and coded for MHMC. In this paper, we propose a new low-cost multi-hypothesis motion compensation (LMHMC) scheme. In LMHMC, a block can be predicted from multiple hypotheses with only one motion vector to be searched and coded into the bit-stream; the other motion vectors are predicted from the motion vectors of neighboring blocks, so both the encoding complexity and the bit-rate of MHMC are reduced by our proposed LMHMC. By adding LMHMC as an additional mode in the MPEG Internet Video Coding (IVC) platform, the B-D rate saving is up to 10%, and the average B-D rate saving is close to 5% in the Low Delay configuration. We also compare the performance between MHMC and LMHMC in the IVC platform; the performance of MHMC is improved by about 2% on average by LMHMC.
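Deriving a motion vector from neighboring blocks, as LMHMC does for the hypotheses it does not code, is commonly done with the component-wise median predictor familiar from H.264-style coders. The sketch below assumes that predictor, since the abstract does not specify which one is used:

```python
def predict_mv(left, top, topright):
    """Component-wise median of the left, top and top-right neighbor
    motion vectors, the standard H.264-style MV predictor.
    Each argument is an (x, y) tuple."""
    med = lambda a, b, c: sorted([a, b, c])[1]
    return (med(left[0], top[0], topright[0]),
            med(left[1], top[1], topright[1]))
```

Because the decoder sees the same neighboring vectors, this prediction costs no bits, which is the source of LMHMC's rate saving over full MHMC.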
Smoothed reference inter-layer texture prediction for bit depth scalable video coding
NASA Astrophysics Data System (ADS)
Ma, Zhan; Luo, Jiancong; Yin, Peng; Gomila, Cristina; Wang, Yao
2010-01-01
We present a smoothed reference inter-layer texture prediction mode for bit depth scalability based on the Scalable Video Coding extension of the H.264/MPEG-4 AVC standard. In our approach, the base layer encodes an 8-bit signal that can be decoded by any existing H.264/MPEG-4 AVC decoder, and the enhancement layer encodes a higher bit depth signal (e.g. 10/12-bit) which requires a bit depth scalable decoder. The approach presented uses base layer motion vectors to conduct motion compensation upon enhancement layer reference frames. Then, the motion compensated block is tone mapped and summed with the co-located base layer residue block prior to being inverse tone mapped to obtain a smoothed reference predictor. In addition to the original inter-/intra-layer prediction modes, the smoothed reference prediction mode enables inter-layer texture prediction for blocks with an inter-coded co-located block. The proposed method is designed to improve the coding efficiency for sequences with non-linear tone mapping, in which case we obtain gains of up to 0.4 dB over the CGS-based BDS framework.
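The smoothed-reference pipeline (tone map the motion-compensated block, add the base-layer residue, inverse tone map) can be sketched with a toy linear tone map; real bit-depth-scalable coders use more elaborate, possibly non-linear mappings, so the shift-based mapping below is purely illustrative:

```python
import numpy as np

def tone_map(block10):
    """Toy linear tone map: 10-bit -> 8-bit (drop 2 LSBs)."""
    return block10 >> 2

def inverse_tone_map(block8):
    """Approximate inverse: 8-bit -> 10-bit (shift back up)."""
    return block8.astype(np.uint16) << 2

def smoothed_reference(mc_block10, base_residue8):
    """Smoothed-reference predictor: tone-map the enhancement-layer
    motion-compensated block, add the co-located base-layer residue,
    clip to the 8-bit range, then inverse-tone-map back to 10 bits."""
    mixed8 = np.clip(tone_map(mc_block10) + base_residue8, 0, 255)
    return inverse_tone_map(mixed8)
```

The attraction of this mode is that the base-layer residue corrects the enhancement-layer prediction without requiring the base layer's full reconstruction loop at 10 bits.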
Introduction to study and simulation of low rate video coding schemes
NASA Technical Reports Server (NTRS)
1992-01-01
During this period, simulators for the various HDTV systems proposed to the FCC were developed. These simulators will be tested using test sequences from the MPEG committee. The results will be extrapolated to HDTV video sequences. Currently, the simulator for the compression aspects of the Advanced Digital Television (ADTV) system has been completed. Other HDTV proposals are at various stages of development. A brief overview of the ADTV system is given. Some coding results obtained using the simulator are discussed. These results are compared to those obtained using the CCITT H.261 standard. The results are evaluated in the context of the CCSDS specifications, and some suggestions are made as to how the ADTV system could be implemented in the NASA network.
NASA Astrophysics Data System (ADS)
Heindel, Andreas; Wige, Eugen; Kaup, André
2014-09-01
Lossless image and video compression is required in many professional applications. However, lossless coding results in a high data rate, which leads to a long wait for the user when the channel capacity is limited. To overcome this problem, scalable lossless coding is an elegant solution. It provides a fast accessible preview by a lossy compressed base layer, which can be refined to a lossless output when the enhancement layer is received. Therefore, this paper presents a lossy to lossless scalable coding system where the enhancement layer is coded by means of intra prediction and entropy coding. Several algorithms are evaluated for the prediction step in this paper. It turned out that Sample-based Weighted Prediction is a reasonable choice for usual consumer video sequences and the Median Edge Detection algorithm is better suited for medical content from computed tomography. For both types of sequences the efficiency may be further improved by the much more complex Edge-Directed Prediction algorithm. In the best case, in total only about 2.7% additional data rate has to be invested for scalable coding compared to single-layer JPEG-LS compression for usual consumer video sequences. For the case of the medical sequences scalable coding is even more efficient than JPEG-LS compression for certain values of QP.
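The Median Edge Detection predictor mentioned above is the LOCO-I/JPEG-LS predictor; for a pixel with left neighbor a, top neighbor b and top-left neighbor c it can be written directly:

```python
def med_predict(a, b, c):
    """Median Edge Detection (LOCO-I / JPEG-LS) predictor for pixel x
    from its left (a), top (b) and top-left (c) neighbors."""
    if c >= max(a, b):
        return min(a, b)    # edge detected above: predict from the left
    if c <= min(a, b):
        return max(a, b)    # edge detected to the left: predict from above
    return a + b - c        # smooth region: planar prediction
```

This picks min(a, b), max(a, b) or the planar value a + b - c, which is why it handles the sharp edges typical of computed tomography content better than smooth-region predictors.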
NASA Astrophysics Data System (ADS)
El-Shafai, Walid
2015-09-01
3D multi-view video (MVV) consists of multiple video streams shot by several cameras around a single scene simultaneously. It is therefore an urgent task to achieve high 3D MVV compression to meet future bandwidth constraints while maintaining a high reception quality. 3D MVV coded bit-streams transmitted over a wireless network can suffer from error propagation in the space, time and view domains. Error concealment (EC) algorithms have the advantage of improving the received 3D video quality without any modifications in the transmission rate or in the encoder hardware or software. To improve the quality of reconstructed 3D MVV, we propose an efficient adaptive EC algorithm with multi-hypothesis modes to conceal the erroneous Macro-Blocks (MBs) of intra-coded and inter-coded frames by exploiting the spatial, temporal and inter-view correlations between frames and views. Our proposed algorithm adapts to 3D MVV motion features and to the error locations. The lost MBs are optimally recovered by utilizing motion and disparity matching between frames and views on a pixel-by-pixel matching basis. Our simulation results show that the proposed adaptive multi-hypothesis EC algorithm can significantly improv